Stop planning. Start doing. But do it in the right order.
You have now covered the full landscape: why AI matters for legal, how it works, what it can do for contract review, legal research, client operations, and compliance. You understand the ethics. You know how to build the business case. The only thing left is to do it.
This module gives you a 30/60/90-day plan — not a theoretical framework, but a sequence of specific actions organised by phase. The plan is designed for a law firm or legal department that is starting from minimal AI adoption and wants to move methodically. If your organisation is further along, skip to the phase that matches your current position.
Where is your firm or department on the AI adoption spectrum?
Days 1-30: Ethics, tool selection, and pilot team formation
The first 30 days are about doing the foundational work that makes everything else possible. Resist the temptation to skip ahead to tool deployment. Firms that fail at AI adoption almost always skipped this phase.
Week 1-2: Ethics and compliance review.
Before any AI tool touches client data, complete the ethics framework:
- Review your jurisdiction's current guidance on AI in legal practice (Module 6). Document the specific rules and opinions that apply.
- Draft an AI acceptable use policy for the firm. At minimum, this should cover: approved tools, prohibited uses, confidentiality requirements, verification obligations, supervision requirements, and disclosure protocols.
- Have the policy reviewed by your ethics partner or committee. Get formal approval.
- Review your professional liability insurance coverage. Contact your carrier to understand whether AI-assisted work product is covered and whether any policy conditions apply.
- Draft AI disclosure language for engagement letters. Even if you choose not to mandate disclosure, having the language ready avoids delay later.
Week 2-3: Tool selection.
Evaluate AI tools against legal-specific criteria:
- Data security: Does the tool meet enterprise security standards? SOC 2 Type II certification? Data processing agreement that prohibits training on your inputs? Encryption in transit and at rest?
- Confidentiality protections: Does the tool's architecture support matter-level isolation? Can you enforce ethical walls? What is the vendor's data retention and deletion policy?
- Legal-specific capabilities: Is the tool built for legal workflows (contract review, research, document analysis) or is it a general-purpose tool? Legal-specific tools typically perform better on legal tasks.
- Integration: Does the tool integrate with your document management system, practice management software, and e-discovery platform? Integration reduces friction and increases adoption.
- Cost structure: Per-user, per-query, or flat fee? Model the cost against your expected usage volume. Most legal AI tools cost $50-$200 per user per month at enterprise scale.
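The pricing comparison can be sketched in a few lines. This is a back-of-the-envelope model with entirely hypothetical figures — the $125 monthly fee, the $0.50 per-query rate, and the usage volumes are placeholders, not vendor quotes — so substitute your own numbers before relying on it:

```python
# Compare per-user and per-query pricing against expected usage.
# All figures below are hypothetical placeholders, not real quotes.

def annual_cost_per_user(monthly_fee, users):
    """Flat per-user subscription, billed monthly."""
    return monthly_fee * 12 * users

def annual_cost_per_query(fee_per_query, queries_per_user_per_month, users):
    """Usage-based pricing, scaled by query volume."""
    return fee_per_query * queries_per_user_per_month * 12 * users

users = 8  # pilot team size
per_user = annual_cost_per_user(monthly_fee=125, users=users)
per_query = annual_cost_per_query(fee_per_query=0.50,
                                  queries_per_user_per_month=400,
                                  users=users)

print(f"Per-user plan:  ${per_user:,.0f}/year")   # $12,000/year
print(f"Per-query plan: ${per_query:,.0f}/year")  # $19,200/year
```

The break-even volume is monthly_fee / fee_per_query queries per user per month (250 in this example): below it, per-query pricing is cheaper; above it, per-user pricing wins.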
Week 3-4: Pilot team formation.
- Select one practice group for the pilot. Choose based on: highest volume of the target workflow, most supportive practice group leader, and clearest ROI case.
- Recruit 5-10 volunteer participants across seniority levels.
- Establish baseline measurements for the pilot workflow (from Module 7).
- Schedule training for the pilot team (covered in the next phase).
Which workflow to start with: the decision framework
The choice of first workflow depends on your practice mix. Here is a decision framework:
If your firm is primarily transactional (M&A, corporate, finance): start with contract review.
Contract review is the ideal first workflow for transactional practices because:
- The task is structured and measurable (number of contracts, time per contract, deviations identified)
- The firm already has playbooks and templates to use as comparison standards
- The output is directly visible to clients (faster turnaround, more consistent analysis)
- The ROI is easy to calculate and demonstrate
Specific starting point: inbound NDA review. NDAs are high-volume, relatively standardised, and low-risk for a pilot. Success with NDAs builds confidence for more complex contract types.
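To make the "easy to calculate" claim concrete, here is a minimal sketch of the NDA ROI arithmetic. Every input is a hypothetical assumption — 45 minutes baseline, 15 minutes assisted, 60 NDAs a month, a $400 blended rate — so plug in your own baseline measurements from Module 7:

```python
# NDA pilot ROI sketch. All inputs are illustrative assumptions.
baseline_minutes = 45   # manual review time per NDA (measured baseline)
assisted_minutes = 15   # AI-assisted time, including verification
ndas_per_month = 60
rate_per_hour = 400     # blended billing or cost rate, USD

hours_saved_per_month = (baseline_minutes - assisted_minutes) * ndas_per_month / 60
annual_value = hours_saved_per_month * 12 * rate_per_hour

print(f"Hours saved per month: {hours_saved_per_month:.0f}")  # 30
print(f"Annual value of time saved: ${annual_value:,.0f}")    # $144,000
```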
If your firm is primarily litigation: start with research memo drafting.
Research memo drafting is the ideal first workflow for litigation practices because:
- Associates spend significant time on research and drafting
- The quality of AI-assisted memos can be directly compared to manual memos
- The verification workflow (Module 4) is a critical skill for the entire practice
- Research memos are internal work product — lower risk than client-facing documents
Specific starting point: research memos on well-established areas of law where verification is straightforward. Do not start with cutting-edge or unsettled legal questions where the hallucination risk is highest.
If you are an in-house legal department: start with contract review for procurement.
In-house departments review enormous volumes of vendor contracts. Starting with procurement contracts — NDAs, SaaS agreements, consulting agreements — produces immediate time savings that the business experiences directly as faster contract turnaround.
Based on your practice, which first workflow makes the most sense?
Days 31-60: Training, deploying, and measuring
The second 30 days are about executing the pilot with discipline and measurement.
Week 5-6: Training — associates versus partners.
Associates and partners have different AI training needs. Do not combine them in the same session.
Associate training (4-6 hours over two sessions):
- Session 1 (2-3 hours): AI fundamentals (Module 2), prompt engineering for legal tasks, hands-on practice with the firm's AI tool using sample documents. Focus on: how to write effective prompts, how to structure inputs for contract review or research tasks, and the verification workflow.
- Session 2 (2-3 hours): Workflow-specific training using the pilot workflow. Associates practise the full workflow end-to-end: input preparation, AI prompting, output review, verification, and final work product delivery. Use real (non-confidential) matter materials.
Partner training (2 hours, one session):
- Partners do not need prompt engineering skills. They need to understand: what AI can and cannot do, how to review AI-assisted work product, what the supervision obligations are, and how to communicate with clients about AI use. Focus on the ethical framework (Module 6) and the business case (Module 7), with a brief demonstration of the AI tool in action.
Week 6-8: Pilot deployment.
- Deploy the AI tool to the pilot team with the approved workflow.
- Require dual-track work for the first two weeks: each participant completes the first 5-10 tasks both manually and with AI assistance. This produces direct comparison data and helps participants calibrate their trust in the tool.
- After the dual-track period, participants use the AI-assisted workflow as their primary method, with manual review as the verification layer.
- Hold weekly 30-minute check-ins with the pilot team to identify issues, share best practices, and refine prompts.
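The dual-track period produces paired data that is simple to summarise. A minimal sketch, with illustrative timings rather than real pilot results:

```python
# Summarise dual-track data: each task completed both manually and
# with AI assistance. Times are in minutes; the data is illustrative.
paired_times = [  # (manual, ai_assisted) per task
    (50, 18), (42, 15), (55, 20), (38, 14), (47, 16),
]

manual_avg = sum(m for m, _ in paired_times) / len(paired_times)
ai_avg = sum(a for _, a in paired_times) / len(paired_times)
reduction = (manual_avg - ai_avg) / manual_avg * 100

print(f"Manual average:      {manual_avg:.1f} min")  # 46.4 min
print(f"AI-assisted average: {ai_avg:.1f} min")      # 16.6 min
print(f"Time reduction:      {reduction:.0f}%")      # 64%
```

Keeping the data paired per task, rather than averaging two separate pools, also lets you spot individual tasks where AI assistance helped little — which is exactly the calibration signal the dual-track period exists to produce.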
Week 8-9: Mid-pilot assessment.
- Compile data: time per task (baseline vs. AI-assisted), quality metrics, participant feedback.
- Identify any problems: Are there contract types or research questions where AI performs poorly? Are there workflow bottlenecks? Are there ethics concerns that need to be addressed?
- Adjust the workflow and prompts based on findings.
Days 61-90: Pilot results, expansion plan, and firm-wide integration
The final 30 days are about converting pilot results into a firm-wide strategy.
Week 10-11: Compile pilot results.
Your pilot report should include:
- Time savings: Average time per task before and after AI assistance. Calculate the percentage reduction and the total hours saved during the pilot.
- Quality impact: Did AI-assisted work product have fewer errors, more comprehensive coverage, or more consistent analysis than manual work? Document specific examples.
- Cost savings: Calculate the dollar value of time saved at the relevant billing or cost rates. Project annual savings for the pilot workflow alone, then extrapolate across the firm.
- Participant feedback: Satisfaction scores, workflow improvements suggested, concerns raised.
- Ethics compliance: Any issues that arose during the pilot and how they were resolved. Confirmation that the ethics framework worked in practice.
- Recommendation: Clear recommendation for next steps — expand to additional practice groups, additional workflows, or both.
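The extrapolation step in the cost-savings bullet can be sketched as follows. The figures are hypothetical, and the projection assumes the pilot workflow is representative of the wider firm — an assumption you should state explicitly in the report:

```python
# Project pilot savings firm-wide. Illustrative inputs only; the
# extrapolation assumes other groups' workflows behave like the pilot's.
pilot_lawyers = 8
firm_lawyers = 120
pilot_hours_saved = 240   # measured over the 90-day pilot
cost_rate = 350           # USD per hour

annual_hours_per_lawyer = pilot_hours_saved * 4 / pilot_lawyers  # 90 days -> year
firm_wide_hours = annual_hours_per_lawyer * firm_lawyers
firm_wide_value = firm_wide_hours * cost_rate

print(f"Annual hours saved per lawyer: {annual_hours_per_lawyer:.0f}")  # 120
print(f"Projected firm-wide hours:     {firm_wide_hours:,.0f}")         # 14,400
print(f"Projected firm-wide value:     ${firm_wide_value:,.0f}")        # $5,040,000
```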
Week 11-12: Present to leadership and plan expansion.
Present the pilot report to the managing partner, executive committee, or general counsel. Use the data, not vendor claims. Then outline the expansion plan:
Phase 2 practice groups: Identify the next 2-3 practice groups for AI deployment. Prioritise based on: workflow volume, practice group leader willingness, and alignment with the firm's strategic priorities.
Phase 2 workflows: For each new practice group, identify the highest-value workflow. The pilot has given you a methodology — apply it to each new group's most document-intensive, time-consuming, or error-prone workflow.
Training plan: Scale the training programme. The pilot participants become the trainers for the next wave — peer learning is more effective and credible than external training.
Policy updates: Update the AI acceptable use policy based on pilot learnings. Add any new approved tools, refine workflow protocols, and document the verification standards that worked in practice.
What does success look like for AI adoption at your organisation in 12 months?
From one practice group to the firm: the scaling playbook
Scaling AI across a law firm is not a technology problem. It is a change management problem. The technology works. The question is whether the organisation adopts it.
The champion model. In every practice group, identify one to two AI champions — lawyers who are enthusiastic, technically comfortable, and respected by their peers. These champions become the point of contact for questions, the curators of best prompts and workflows, and the advocates for adoption within their group. Peer influence is more powerful than top-down mandates.
The prompt library. As lawyers develop effective prompts for specific workflows, collect them into a firm-wide prompt library. Organise by practice group and workflow: "M&A Due Diligence — Change of Control Extraction," "Litigation Research — Summary Judgment Standard," "Employment — Non-Compete Enforceability Analysis." A shared prompt library means no one starts from scratch.
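Even a shared spreadsheet works, but the structure matters: prompts keyed by practice group and then workflow, so retrieval is unambiguous. A minimal sketch using the example workflow names above — the prompt wording itself is illustrative, not a recommended template:

```python
# Minimal prompt library keyed by practice group, then workflow.
# The prompt text is illustrative, not a vetted firm template.
prompt_library = {
    "M&A Due Diligence": {
        "Change of Control Extraction": (
            "Identify every change-of-control provision in the attached "
            "agreement. Quote each clause and state what triggers it."
        ),
    },
    "Litigation Research": {
        "Summary Judgment Standard": (
            "Summarise the summary judgment standard in this jurisdiction, "
            "giving full citations for every case so they can be verified."
        ),
    },
}

def get_prompt(group, workflow):
    """Look up a vetted prompt; raises KeyError if none exists."""
    return prompt_library[group][workflow]
```

In practice this would live in a shared document-management location with an owner per practice group — the AI champion — so refinements from the feedback loop land in one place.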
The review standard. Standardise what "adequate review of AI output" means across the firm. Document the verification steps for each workflow type: citation verification for research, provision verification for contract review, fact verification for deposition summaries. Make the standard clear enough that a malpractice insurer would be comfortable with it.
The feedback loop. Create a mechanism for reporting AI failures — hallucinated citations, missed provisions, incorrect analysis. Track these failures centrally. Use them to refine prompts, update the acceptable use policy, and identify workflow types where AI is not yet reliable enough for production use.
The metric system. Track adoption and impact at the firm level: percentage of lawyers using AI regularly, average time savings per workflow, matter profitability under alternative fee arrangements, and client satisfaction scores. Report these metrics quarterly to leadership. What gets measured gets managed.
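The quarterly rollup is simple arithmetic. A minimal sketch with illustrative figures; the field names are assumptions, not a prescribed schema:

```python
# Quarterly adoption metrics rollup. All figures are illustrative.
quarter = {
    "lawyers_total": 120,
    "lawyers_active": 78,             # used an approved AI tool this quarter
    "avg_minutes_saved_per_task": 28,
    "tasks_completed": 3400,
}

adoption_rate = quarter["lawyers_active"] / quarter["lawyers_total"] * 100
hours_saved = (quarter["avg_minutes_saved_per_task"]
               * quarter["tasks_completed"]) / 60

print(f"Adoption rate: {adoption_rate:.0f}%")           # 65%
print(f"Hours saved this quarter: {hours_saved:,.0f}")  # 1,587
```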
Module 8 — Final Assessment
What should be completed BEFORE any AI tool is deployed on client matters?
Why should associate and partner AI training be conducted separately?
What is the most important element when scaling AI from one practice group to the firm?
Why is the prompt library a critical scaling tool?