You do not need to be a technologist
You are a legal professional, not an engineer. You do not need to understand neural network architecture any more than you need to understand how Westlaw's search index works in order to run an effective precedent search. What you need is a working mental model — enough understanding to know what AI can and cannot do, where it is reliable, and where it will get you in trouble.
This module gives you that mental model using the legal concepts you already know. No jargon from computer science. No abstract theory. Just the practical understanding you need to use AI effectively and ethically in your practice.
How would you describe your current understanding of how AI works?
Think of a context window as the stack of papers on your desk
When you give a document to an AI, it reads the entire thing and holds it in what is called a context window. This is the AI's working memory — the total amount of text it can consider at one time.
Think of it like this: you are a lawyer reviewing a contract. Your desk can hold a certain number of pages before you lose track of what is on page 3 while reading page 47. The context window is the AI's desk.
Modern AI models have context windows ranging from roughly 100,000 to over 1 million tokens. In practical terms:
- 100,000 tokens is approximately 200-250 pages of legal text — enough to hold an entire commercial lease, an M&A purchase agreement, or several standard NDAs simultaneously
- 200,000 tokens can hold a full set of transaction documents for a mid-size deal
- 1 million tokens can hold thousands of pages — enough for a substantial portion of a data room in a single pass, with larger discovery sets handled in batches
This matters because the AI can only reason about what is inside its context window at that moment. If you ask it to compare two contracts but only provide one, it cannot reference the other from memory the way you might recall a deal you worked on last month. It only knows what you give it, right now.
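For readers who want to see the arithmetic behind the page estimates above, it can be sketched in a few lines of Python. The figures used here — roughly 4 characters per token and roughly 500 tokens per page of legal text — are common rules of thumb, not exact counts; the precise number depends on each model's tokenizer.

```python
# Rough rules of thumb for English legal prose. These are estimates only;
# exact token counts depend on the specific model's tokenizer.
CHARS_PER_TOKEN = 4
TOKENS_PER_PAGE = 500

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def pages_that_fit(context_window_tokens: int) -> int:
    """Approximate number of pages a context window can hold."""
    return context_window_tokens // TOKENS_PER_PAGE

print(pages_that_fit(100_000))    # → 200 pages, matching the estimate above
print(pages_that_fit(1_000_000))  # → 2000 pages
```

The point of the sketch is the order of magnitude, not precision: a 100,000-token window comfortably holds one long commercial agreement, while a document set in the thousands of pages approaches even the largest windows.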
In your practice, what is the typical document length you would want AI to analyse in a single pass?
Capabilities: where AI adds genuine legal value
AI is not magic. It is pattern recognition and language generation at a scale and speed that humans cannot match. In legal work, this translates into specific, well-defined capabilities:
Contract analysis and comparison. AI can read a contract and identify specific provisions — indemnification clauses, limitation of liability, change of control, IP assignment, termination for convenience. It can compare those provisions against your firm's standard playbook and flag deviations. It can do this across hundreds of contracts in the time it would take a paralegal to review five.
Legal research assistance. AI can analyse a legal question, identify potentially relevant areas of law, and draft research memoranda. It can summarise case holdings, identify distinguishing facts, and map the development of a legal doctrine across a line of cases. It processes and synthesises information from vast bodies of legal text faster than any associate.
Document drafting and revision. AI can generate first drafts of standard legal documents — engagement letters, demand letters, contract clauses, discovery responses, deposition outlines — from structured instructions. It can revise documents to match a particular tone, jurisdiction, or standard. It can redline a document against a template and explain each deviation.
Information extraction and summarisation. AI can extract specific data points from large document sets — party names, dates, monetary amounts, governing law provisions, assignment restrictions — and present them in structured formats. It can summarise depositions, distil lengthy regulatory guidance into actionable points, and convert case law into issue-specific briefs.
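For teams building extraction workflows, the "structured format" typically means a fixed record per document, so results can be compared and tabulated across a document set. A hypothetical sketch — the field names and values below are illustrative, not a standard schema:

```python
from dataclasses import dataclass

@dataclass
class ExtractedTerms:
    """One record per reviewed agreement (illustrative fields only)."""
    party_names: list[str]
    effective_date: str
    governing_law: str
    assignment_restricted: bool

# Hypothetical output for a single agreement in a larger review set:
record = ExtractedTerms(
    party_names=["Acme Corp", "Beta LLC"],
    effective_date="2024-03-01",
    governing_law="New York",
    assignment_restricted=True,
)
```

Fixing the record shape up front is what makes AI extraction useful at scale: a hundred agreements become a hundred rows in the same table, each of which a lawyer can spot-check against the source document.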
Limitations: where AI fails in ways that matter for legal practice
Understanding AI limitations is not optional for lawyers. It is a professional obligation. Here are the limitations that are most dangerous in legal practice:
AI cannot exercise legal judgment. It can identify that a contract contains a non-standard indemnification clause. It cannot tell you whether that clause is acceptable for this particular client, in this particular deal, given the commercial relationship between the parties. Judgment — weighing competing interests, assessing risk tolerance, understanding the business context — remains exclusively human.
AI hallucinates citations. This is the single most dangerous limitation for legal professionals. AI will generate case citations that look perfectly formatted, include plausible party names, and cite real reporters — but the cases do not exist. The AI is not lying. It is generating text that follows the pattern of legal citations without verifying that each citation corresponds to an actual case. The Mata v. Avianca incident — where a lawyer submitted a brief containing AI-generated citations to non-existent cases — resulted in sanctions and became a cautionary tale for the entire profession. Module 5 covers verification workflows in detail.
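One simple safeguard is to mechanically pull every citation-like string out of a draft so that each one can be verified by hand against Westlaw or Lexis. A minimal sketch follows — the regular expression covers only a few common federal reporters, the case names are hypothetical, and production tools (such as the open-source eyecite library) use far more complete patterns:

```python
import re

# Matches "volume reporter page" for a handful of federal reporters.
# Illustrative only — real citation formats are far more varied.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+"                  # volume number
    r"(?:U\.S\.|S\. Ct\.|"           # Supreme Court reporters
    r"F\. Supp\.(?:\s(?:2d|3d))?|"   # district court reporter
    r"F\.(?:2d|3d|4th)?)"            # courts of appeals reporter
    r"\s+\d{1,4}\b"                  # first-page number
)

def flag_citations(text: str) -> list[str]:
    """Return every citation-like string for human verification."""
    return CITATION_RE.findall(text)

draft = ("See Smith v. Jones, 123 F.3d 456 (9th Cir. 1999); "
         "accord Doe v. Roe, 45 F. Supp. 2d 678 (S.D.N.Y. 2001).")
for cite in flag_citations(draft):
    print(cite)  # each must be pulled and read by a human before filing
```

Note what this sketch does and does not do: it finds strings that look like citations, which is exactly the trap — a hallucinated citation looks like a real one. Extraction only tells you what to check; the checking itself cannot be automated away.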
AI does not understand privilege. AI cannot determine whether a document is privileged. It can flag documents that contain language patterns associated with attorney-client communications — but the legal determination of whether privilege attaches, whether it has been waived, or whether an exception applies requires lawyer judgment that AI cannot replicate.
AI cannot replace the client relationship. Clients hire lawyers for counsel — for the ability to understand their business, anticipate their concerns, and provide advice they trust. AI can make you a faster, more thorough lawyer. It cannot make you a trusted adviser. That remains a fundamentally human skill.
Which AI limitation concerns you most in the context of your practice?
Your ethical obligation to understand technology
ABA Model Rule 1.1 requires that lawyers provide competent representation. Comment 8 to Rule 1.1 was amended in 2012 to state explicitly that competent representation requires lawyers to "keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology."
This is not aspirational guidance. It is a professional obligation. At least 40 states have adopted this language or its equivalent. The duty of competence now includes a duty of technological competence.
What does this mean in practice? It means that a lawyer who refuses to understand AI — who neither uses it nor understands its capabilities and limitations — may be falling short of their ethical obligations. Not because they are required to use AI on every matter, but because they are required to understand it well enough to make informed decisions about when it should and should not be used.
Consider the converse: a lawyer who uses AI without understanding its limitations — who submits AI-generated research without verification, or who inputs client confidential information into an unsecured AI tool — is also falling short of competence.
The duty cuts both ways. You must understand AI well enough to use it when appropriate, and you must understand it well enough to know when it is inappropriate or dangerous.
Client confidentiality and AI: Model Rule 1.6
ABA Model Rule 1.6 requires lawyers to protect client confidential information. When you type a client's matter details into an AI tool, you are transmitting confidential information to a third-party system. This raises immediate questions:
Where does the data go? Consumer AI tools (free versions of ChatGPT, for example) may use your inputs to train future models. This means your client's contract terms, litigation strategy, or privileged communications could theoretically influence the AI's responses to other users. Enterprise versions of these tools typically offer data processing agreements that prohibit training on your data — but you must verify this.
Who can access it? Understand the AI vendor's data handling practices. Is data encrypted in transit and at rest? Who at the vendor can access your inputs? Are there geographic restrictions on where data is processed? For matters involving export-controlled information or data subject to GDPR, these questions are not hypothetical.
Does the engagement letter address AI use? Some firms are adding AI disclosure provisions to their engagement letters — informing clients that AI tools may be used in the performance of legal services, subject to the firm's data security protocols. Whether disclosure is required is an evolving question, but proactive disclosure builds trust and manages expectations.
The practical takeaway: before using any AI tool on client matters, verify that the tool's data handling meets your confidentiality obligations. Enterprise-grade legal AI tools with appropriate data processing agreements are not optional — they are an ethical requirement.
Does your firm or legal department have a policy on using AI tools with client data?
The mental model: AI as a very fast, very thorough, occasionally unreliable junior associate
Here is the simplest and most accurate mental model for AI in legal practice: think of it as a junior associate who reads at superhuman speed, never gets tired, never misses a clause in a 200-page agreement, follows instructions precisely — but who occasionally makes up case citations, cannot exercise judgment about client relationships, and has no understanding of whether a legal strategy is wise or merely defensible.
You would not let a junior associate file a brief without review. You would not let a junior associate make a privilege determination without supervision. You would not let a junior associate advise a client on litigation strategy without partner oversight.
The same rules apply to AI. It is a tool that operates under your supervision. Its output is your work product. You are responsible for its accuracy, its completeness, and its compliance with your ethical obligations.
The lawyers who will thrive with AI are the ones who understand this clearly: AI handles the information processing — the reading, the extracting, the comparing, the drafting. The lawyer handles the judgment — the advising, the strategising, the deciding, the counselling.
Module 2 — Final Assessment
What is the practical significance of an AI's context window for legal work?
Why is the hallucination of case citations particularly dangerous for lawyers?
What does ABA Model Rule 1.1, Comment 8 require regarding technology?
Before using an AI tool on a client matter, what must a lawyer verify under Model Rule 1.6?