The ethical framework is taking shape — and you must know it
The legal profession regulates itself. In most industries, technology adoption is constrained primarily by cost and capability; lawyers face an additional constraint: the professional responsibility rules that govern every aspect of practice. When you use AI on a client matter, you are not just making a business decision. You are making an ethical one.
The good news is that the regulatory framework for AI in legal practice is developing rapidly. The ABA, state bars, and ethics committees are issuing guidance. The bad news is that the guidance is scattered across dozens of sources, varies by jurisdiction, and is evolving. This module consolidates what you need to know.
How familiar are you with the current ethics guidance on AI in legal practice?
What the ABA has said about AI in legal practice
The American Bar Association has addressed AI through several channels: formal ethics opinions, resolutions, and committee reports. Here is what matters most:
ABA Formal Opinion 512 (2024) — Generative AI Tools. This is the ABA's most comprehensive statement on AI. The opinion addresses the use of generative AI by lawyers and applies the existing Model Rules framework to AI-specific scenarios. Key takeaways:
- Competence (Rule 1.1): Lawyers must understand AI well enough to use it competently. This includes understanding how generative AI works, what it can and cannot do, and the risk of inaccurate outputs including hallucinated citations.
- Confidentiality (Rule 1.6): Lawyers must ensure that AI tools adequately protect client confidential information. This requires understanding the tool's data handling practices: whether inputs are stored, whether they are used for training, and who can access them.
- Communication (Rule 1.4): Lawyers should consider whether to disclose AI use to clients. The opinion does not mandate blanket disclosure but notes that communication about AI use may be appropriate depending on the circumstances.
- Supervision (Rules 5.1 and 5.3): Partners and supervising lawyers have a duty to ensure that lawyers and non-lawyers under their supervision use AI appropriately. AI output is the supervising lawyer's responsibility.
- Fees (Rule 1.5): Fees must be reasonable. If AI reduces the time required for a task, billing the same number of hours as if the work had been performed manually may be unreasonable. The opinion notes that the reasonableness of fees should reflect the actual work performed.
ABA Resolution 604 (2023) urged organisations that design, develop, and deploy AI systems to ensure human oversight and control, accountability, and transparency. These principles apply with particular force to AI-generated legal work.
These are the floor, not the ceiling. State bars are adding jurisdiction-specific guidance on top of the ABA framework.
California, New York, and Florida: jurisdiction-specific rules
State bar guidance on AI varies significantly. The jurisdictions that have issued the most detailed guidance are worth studying closely, even if you do not practise there, because they signal where the profession is heading.
California. The State Bar of California issued its Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law. California emphasises the duty of competence as requiring lawyers to understand AI's capabilities and limitations. The guidance specifically addresses confidentiality, noting that lawyers must review the terms of service of AI tools to determine whether client data will be used for training or shared with third parties. California also addresses the billing question directly: if AI substantially reduces the time required for a task, a lawyer must consider whether the fee charged reflects the actual effort involved.
New York. The New York State Bar Association issued a report and guidelines for AI use. New York's guidance focuses on the duty of supervision: lawyers must review AI output before relying on it, just as they would review work produced by a junior associate. The guidance also addresses confidentiality, with particular attention to large law firms where data governance across multiple AI tools creates complexity. New York has also seen judicial responses: individual judges in the Southern and Eastern Districts of New York have issued standing orders requiring attorneys to disclose when AI was used in preparing filings and to certify that all citations have been verified.
Florida. The Florida Bar issued an advisory ethics opinion addressing AI. Florida's guidance is notable for its emphasis on the duty of candour to the tribunal (Rule 3.3). The opinion states that a lawyer who submits AI-generated content to a court without verification may violate the duty of candour if the content contains inaccurate statements or fabricated citations. Florida also addresses the unauthorised practice of law, noting that AI tools marketed directly to consumers for legal tasks may implicate UPL concerns.
Other jurisdictions. Texas, New Jersey, and the District of Columbia have also issued guidance or are developing it. The pattern is consistent: the existing Rules of Professional Conduct apply to AI use, and the duties of competence, confidentiality, supervision, and communication are the framework through which AI ethics questions are analysed.
In which jurisdiction(s) do you primarily practise?
Protecting client data when using AI tools
The confidentiality obligation under Model Rule 1.6 is the most practically consequential ethics rule for AI use. Every time you input information about a client matter into an AI tool, you are potentially transmitting confidential information to a third party. The analysis is straightforward but must be rigorous:
Consumer AI tools. Free or consumer versions of AI tools (ChatGPT Free, free-tier Claude, Google Gemini consumer) typically use your inputs to improve their models. Their terms of service may permit storage, processing, and use of your inputs for training purposes. Using these tools with client confidential information almost certainly violates Rule 1.6 unless the client has given informed consent — which, as a practical matter, they have not.
Enterprise AI tools. Enterprise versions of AI tools (ChatGPT Enterprise, Claude Enterprise, Microsoft Copilot with enterprise data protection) typically offer data processing agreements that prohibit using your inputs for training and provide contractual confidentiality protections. Protections of this kind are the minimum requirement for using AI on client matters.
Legal-specific AI tools. Purpose-built legal AI platforms (Harvey, CoCounsel, Luminance) are designed for confidential legal work. They offer enhanced data protection, law-firm-specific security certifications, and contractual terms tailored to legal confidentiality requirements. These provide the strongest protection but must still be evaluated individually.
The practical checklist (a structured sketch of it follows the list):
- Does the tool's data processing agreement prohibit using your inputs for model training?
- Is data encrypted in transit and at rest?
- Where is data processed and stored? (Relevant for cross-border matters and data residency requirements.)
- Who at the vendor can access your inputs? Under what circumstances?
- What is the data retention policy? Can you request deletion?
- Does the tool meet your firm's information security standards?
- Has your firm's general counsel or ethics committee approved the tool?
If you cannot answer these questions affirmatively, do not use the tool on client matters.
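For firms that formalise this vetting, the checklist can be captured as structured data so that approval becomes an explicit, recorded decision rather than an informal judgment. The following is a minimal sketch in Python; the class and field names are hypothetical, not drawn from any real product or bar guidance:

```python
from dataclasses import dataclass


@dataclass
class AIToolAssessment:
    """Hypothetical record of a firm's review of one AI tool."""
    tool_name: str
    dpa_prohibits_training_on_inputs: bool  # data processing agreement terms
    encrypted_in_transit_and_at_rest: bool
    data_residency_documented: bool
    vendor_access_documented: bool
    deletion_available_on_request: bool
    meets_firm_infosec_standards: bool
    approved_by_ethics_committee: bool

    def cleared_for_client_matters(self) -> bool:
        # Every checklist question must be answered affirmatively;
        # a single unresolved item keeps the tool off client matters.
        return all((
            self.dpa_prohibits_training_on_inputs,
            self.encrypted_in_transit_and_at_rest,
            self.data_residency_documented,
            self.vendor_access_documented,
            self.deletion_available_on_request,
            self.meets_firm_infosec_standards,
            self.approved_by_ethics_committee,
        ))
```

The design point is that approval is all-or-nothing: there is no weighting or partial credit, because a single gap (say, an unresolved training-data question) is enough to put client confidences at risk.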
When must you tell clients — and courts — that AI was used?
The disclosure question is one of the most debated topics in legal AI ethics. The answer depends on context.
Disclosure to clients. ABA Formal Opinion 512 does not mandate blanket disclosure of AI use to clients. However, Rule 1.4 (Communication) requires lawyers to keep clients reasonably informed about the means by which their matter is being handled. If a client asks whether AI was used, you must answer truthfully. And proactive disclosure may be advisable in several situations:
- When the engagement letter specifies the methods to be used
- When the client's data governance or security requirements restrict the use of third-party tools
- When AI use materially changes the staffing, cost, or approach to the matter
- When the client is in a regulated industry with its own AI governance requirements
Many firms are adding AI disclosure provisions to their standard engagement letters: "In the performance of legal services, the Firm may use artificial intelligence tools subject to the Firm's data security protocols and attorney supervision requirements." This proactive approach builds trust and avoids surprises.
Disclosure to courts. An increasing number of courts are requiring disclosure. Standing orders in the Southern District of New York, the Northern District of Texas, and other federal courts require attorneys to certify whether AI was used in preparing filings and to confirm that all citations have been verified by a human. State courts are beginning to follow. Even where no standing order exists, Rule 3.3 (Candour to the Tribunal) requires truthful responses if the court asks.
Disclosure to opposing counsel. Generally, there is no obligation to disclose AI use to opposing counsel. Your work methods are your own. However, if an AI-generated document is produced in discovery, you should ensure it is properly reviewed for privilege and work product protection before production.
Does your firm or department currently disclose AI use to clients?
The duty of supervision when AI produces work product
Rules 5.1 (Responsibilities of Partners, Managers, and Supervisory Lawyers) and 5.3 (Responsibilities Regarding Nonlawyer Assistants) impose a duty of supervision. When AI produces work product, the supervising lawyer is responsible for its accuracy and compliance with professional obligations.
This has several practical implications:
AI output is your work product. When you sign a brief, file a motion, or send a client memorandum that was drafted with AI assistance, it is your work product. You are responsible for every statement, every citation, every legal conclusion. "The AI wrote it" is not a defence to a sanctions motion or a malpractice claim.
Review standards must be defined. Firms should establish clear protocols for what constitutes adequate review of AI-generated work product. At minimum: verify all citations, confirm the accuracy of factual statements, check the legal analysis against your own understanding, and ensure the output is appropriate for the specific matter and client.
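One way to make "adequate review" concrete and auditable is to record each minimum step as an explicit sign-off. A minimal sketch, assuming a hypothetical firm-internal review record (all names are illustrative):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIWorkProductReview:
    """Hypothetical sign-off record for a piece of AI-assisted work product."""
    document_id: str
    reviewing_lawyer: str
    citations_verified: bool
    facts_confirmed: bool
    analysis_checked: bool
    fit_for_matter_and_client: bool
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def cleared_to_send(self) -> bool:
        # All four minimum review steps must be complete before the
        # document is filed with a court or sent to a client.
        return all((self.citations_verified, self.facts_confirmed,
                    self.analysis_checked, self.fit_for_matter_and_client))
```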
Delegation to junior lawyers requires oversight. If a senior associate directs a junior associate to use AI for research or drafting, the senior associate must ensure the junior associate understands the verification obligations. "I told them to use AI" does not satisfy the supervisory obligation if the junior associate did not verify the output.
Malpractice exposure. AI errors that result in harm to clients — fabricated citations that lead to sanctions, missed contract provisions that cause deal losses, inaccurate regulatory analysis that leads to compliance failures — create malpractice exposure. Malpractice insurers are beginning to ask about AI use in applications, and some are adding AI-related exclusions or conditions. Firms should consult with their malpractice carriers about AI use policies and ensure their coverage addresses AI-related risks.
The fundamental principle is simple: AI is a tool. Tools do not have professional responsibility obligations. You do.
Technical safeguards for legal AI deployment
Beyond the ethical rules, practical data security is essential. Client matter data is among the most sensitive information any organisation handles. The security framework for AI tools used on client matters should address:
Access controls. Who can access the AI tool? Are access permissions tied to matter assignments? Can a lawyer working on one matter inadvertently access AI outputs from another matter? Enterprise legal AI tools should support matter-level access controls.
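As an illustration, a matter-level permission check could gate every AI query on the user's matter assignments. The sketch below is hypothetical; the matter numbers, usernames, and data structures are invented for the example:

```python
# Hypothetical matter-level permission check: before any prompt is sent,
# confirm the user is actually staffed on the matter the data belongs to.
MATTER_ASSIGNMENTS: dict[str, set[str]] = {
    "2024-0117": {"asmith", "bjones"},  # matter number -> assigned lawyers
    "2024-0242": {"asmith"},
}


def may_query_matter(user: str, matter_id: str) -> bool:
    """Allow AI queries only against matters the user is assigned to."""
    return user in MATTER_ASSIGNMENTS.get(matter_id, set())


assert may_query_matter("bjones", "2024-0117")
assert not may_query_matter("bjones", "2024-0242")
```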
Audit trails. Can you demonstrate what data was input into the AI, what output was generated, and who reviewed it? Audit trails serve both compliance and malpractice defence purposes. If a client or court questions your AI use, you need records.
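A sketch of what such an audit record might look like, assuming a simple append-only JSON-lines log; the file name and fields are illustrative, not a standard:

```python
import json
from datetime import datetime, timezone


def log_ai_interaction(user: str, matter_id: str, tool: str,
                       prompt: str, output: str,
                       reviewed_by: str | None = None) -> None:
    """Append one audit record: who input what, what came back, who reviewed it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "matter_id": matter_id,
        "tool": tool,
        "prompt": prompt,
        "output": output,
        "reviewed_by": reviewed_by,  # stays None until a lawyer signs off
    }
    with open("ai_audit_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

In practice such a log would live in tamper-evident storage rather than a local file, but the principle is the same: every input, output, and review should be reconstructable after the fact.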
Data residency. For international engagements, where client data is processed and stored can determine which laws apply. Client data processed by an AI tool hosted in the United States may create issues for matters governed by GDPR, data localisation laws in India or China, or other jurisdictional data protection requirements.
Ethical walls. Law firms use ethical walls (also called Chinese walls or information barriers) to prevent conflicts of interest when the firm represents adverse parties on different matters. AI tools must respect these walls. If the AI has access to the firm's full document repository, a lawyer behind an ethical wall could inadvertently access information from the walled-off matter through AI-generated responses.
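In a retrieval-augmented tool, wall enforcement belongs at the retrieval layer: documents from a walled-off matter should be excluded before they ever reach the model, not filtered out of its answers afterwards. A hypothetical sketch, with invented identifiers:

```python
# Hypothetical wall enforcement for a retrieval-augmented AI tool.
WALLED_OFF: dict[str, set[str]] = {
    "asmith": {"2023-0456"},  # lawyer -> matters behind the wall for them
}


def filter_retrieved_docs(user: str, docs: list[dict]) -> list[dict]:
    """Drop any retrieved document belonging to a matter walled off from the user."""
    blocked = WALLED_OFF.get(user, set())
    return [doc for doc in docs if doc.get("matter_id") not in blocked]
```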
Incident response. If the AI vendor experiences a data breach, what is the notification process? How will you notify affected clients? Does your firm's incident response plan address AI-specific data breach scenarios?
Module 6 — Final Assessment
Under ABA Formal Opinion 512, what does the duty of competence (Rule 1.1) require regarding AI?
Why are consumer AI tools (free ChatGPT, free-tier AI) generally inappropriate for client matters?
How are courts responding to AI use in legal filings?
If an AI-generated research memo contains a fabricated citation that is included in a court filing, who is responsible?