Webcast

Upcoming

Ethically Building AI & Automation Tools

Intermediate

Overview

We practice in an environment where urgency, vulnerability, and procedural complexity make workflow mistakes especially costly. As AI tools and ordinary automations become more integrated into intake, client communication, document handling, and case management, the ethics question is no longer limited to whether a tool generated accurate text. It now includes whether a system moved information, triggered actions, or shaped legal work before a lawyer exercised meaningful judgment.

Questions this course will address include:

  1. How should we distinguish between low-risk assistance tools and higher-risk systems that route, sequence, or act?
  2. Where do AI and automation create exposure in intake, multilingual communication, status updates, and document-heavy workflows?
  3. What does meaningful human supervision look like when review, approval, escalation, and audit functions occur at different stages of a workflow?
  4. Which tasks should never be automated to the point of client-facing action without advance lawyer review?

This course emphasizes the shift from chatbot-style concerns to broader questions of system design, delegated action, supervision architecture, and operational controls in legal practice. It examines how traditional professional responsibility duties (competence, confidentiality, supervision, communication, and more) apply in a practice environment increasingly shaped by AI-assisted and automated systems. Using specific examples, the program explores the difference between output risk and action risk, the limits of rubber-stamp human review, the need for approval matrices and least-privilege permissions, and the importance of audit trails, workflow maps, vendor diligence, and incident response planning. The course concludes with a practical framework for building governable systems that preserve legal judgment while still allowing firms to benefit from technology.


Learning Objectives: 

  1. Identify how AI and automation change ethics risk in immigration practice by moving from simple output generation to workflow routing, action, and delegated decision support

  2. Distinguish among manual, rules-based, AI-assisted, semi-autonomous, and agentic workflows using a practical risk-classification framework

  3. Evaluate where meaningful human control must occur in intake, communications, document processing, and client-facing systems

  4. Recognize high-risk automation scenarios involving confidentiality, misrouting, silent failure, translation, deadline pressure, and overreliance on system-generated communications

  5. Implement practical controls, including approval matrices, permission limits, workflow mapping, audit logging, vendor review, and incident response procedures, to support competent and ethical use of technology in immigration practice


Credits
