Stop planning and start doing — but start in the right place
The biggest risk in insurance AI adoption is not starting too fast — it is getting stuck in an endless planning cycle. Carriers form AI committees, commission vendor evaluations, draft 50-page AI strategy documents, and 18 months later have nothing deployed. Meanwhile, their competitors have been using AI in production for a year.
This module gives you a concrete 30/60/90 day plan that you can begin executing this week. It is designed for a mid-sized to large P&C carrier, but the framework scales to any insurance organisation — from a regional MGA to a global reinsurer.
The most important decision: which workflow first?
Based on what you've learned in this course, which workflow should most carriers deploy AI for first?
Days 1-30: Foundation and first deployment
Week 1: Setup and governance
Day 1-2: Tool procurement. Select and procure an enterprise-tier AI platform. Key requirements for insurance:
- Enterprise data handling (no training on your data, encryption, access controls)
- HIPAA-compliant data processing (essential for claims workflows involving medical records)
- SOC 2 Type II certification
- Ability to upload documents (PDFs, images of ACORD forms, spreadsheets)
- Sufficient context window for large document packages (200K+ tokens preferred)
- Options: Claude for Enterprise, ChatGPT Enterprise, Microsoft Copilot for Enterprise. Evaluate based on document handling capability, pricing, and integration with your existing Microsoft or Google workspace.
Day 3-5: Data governance framework. Before anyone touches the tool, establish clear rules:
- Green tier (approved for AI processing): Policy forms, coverage analyses, rate filing narratives, market research, claims correspondence templates, regulatory bulletins, public financial data
- Yellow tier (approved with redaction): Submissions with policyholder names and addresses redacted, claims file summaries with claimant PII removed, loss runs with identifying information redacted
- Red tier (prohibited): Raw medical records with PHI, policyholder Social Security numbers, financial underwriting documents with account numbers, SIU investigation files, privileged legal communications
- Document this in a one-page policy. Do not make it 50 pages — nobody will read it. One page with clear green/yellow/red categories.
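The one-page green/yellow/red policy is easiest to enforce if it also exists as a simple pre-processing gate. A minimal sketch, assuming a hypothetical document-type taxonomy (the tier names match the policy above; the document-type keys are illustrative, not a standard):

```python
# Hypothetical sketch: the green/yellow/red policy as a pre-processing gate.
# Document-type names are illustrative; extend the map as new types appear.
TIERS = {
    "policy_form": "green",
    "regulatory_bulletin": "green",
    "redacted_submission": "yellow",
    "redacted_loss_run": "yellow",
    "raw_medical_record": "red",
    "siu_investigation_file": "red",
}

def ai_processing_allowed(doc_type: str, redaction_confirmed: bool = False) -> bool:
    """Return True only if the policy permits sending this document to AI."""
    tier = TIERS.get(doc_type, "red")  # unclassified documents default to red
    if tier == "green":
        return True
    if tier == "yellow":
        return redaction_confirmed  # yellow requires confirmed PII redaction
    return False  # red tier is always prohibited
```

Defaulting unknown document types to red mirrors the classification rule later in this module: classify new document types before they are processed, not after.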
Week 2: Pilot team selection and baseline measurement
Day 6-8: Select your pilot team. Choose 3-5 underwriters (or 5-8 claims adjusters) in a single team. Pick a team with:
- A mix of experience levels (senior and junior)
- A manageable but representative volume of work
- At least one enthusiastic early adopter
- A supportive team lead or manager
Day 8-10: Measure the baseline. Track current performance for one week before introducing AI:
- Submissions triage: Time per submission, submissions reviewed per day, sources of delay
- Claims (if that's your pilot): FNOL-to-adjuster-contact time, time to build initial claims summary, cycle time to first payment
Week 3-4: Launch and initial iteration
Day 11-12: Build your prompt library. Adapt the prompt templates from Modules 3 and 4 of this course to your specific:
- Lines of business and ACORD form types
- Underwriting appetite guidelines (or claims handling procedures)
- Internal terminology and workflow conventions
- Create 5-10 templates covering the most common submission types (or claim types)
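A prompt library can be as simple as a keyed set of string templates with carrier-specific fields. A minimal sketch, assuming hypothetical template text and field names (these are not the actual Module 3/4 templates):

```python
from string import Template

# Illustrative prompt library keyed by submission type. The template
# wording, placeholders, and appetite text are assumptions for the sketch.
PROMPTS = {
    "commercial_property": Template(
        "You are an underwriting assistant for $carrier.\n"
        "Triage the attached ACORD 125/140 submission against our appetite:\n"
        "$appetite\n"
        "Return: risk summary, appetite fit (in/out/refer), missing information."
    ),
}

def build_prompt(submission_type: str, **fields: str) -> str:
    """Fill a stored template with carrier-specific fields."""
    return PROMPTS[submission_type].substitute(**fields)

prompt = build_prompt(
    "commercial_property",
    carrier="Example Mutual",
    appetite="TIV under $25M, protection class 1-6, no coastal wind",
)
```

Keeping templates in one shared module (rather than in individual users' chat histories) is what makes the weekly Friday iteration in Day 14-30 possible: edits propagate to the whole pilot team at once.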
Day 13: Half-day training workshop for the pilot team:
- 60 minutes: How AI works, what it can and cannot do (use Module 2 material)
- 60 minutes: Live demo on real submissions or claims files from their book
- 60 minutes: Hands-on practice with prompt templates
- 30 minutes: Data governance rules and quality review process
Day 14-30: Pilot execution. The team uses AI daily on their actual workflow. Track metrics weekly. Hold a 30-minute check-in every Friday to discuss what's working, what isn't, and iterate on prompt templates.
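The weekly metric roll-up only needs two headline numbers, both computed from the baseline measured in Week 2. A minimal sketch (field names are illustrative):

```python
# Sketch of the weekly pilot metric roll-up against the Week 2 baseline.
def pilot_metrics(baseline_per_day: float, ai_per_day: float,
                  baseline_minutes: float, ai_minutes: float) -> dict:
    """Compute the two headline numbers for the pilot results template."""
    return {
        "throughput_increase_pct": round(
            (ai_per_day - baseline_per_day) / baseline_per_day * 100, 1),
        "time_reduction_pct": round(
            (baseline_minutes - ai_minutes) / baseline_minutes * 100, 1),
    }

# Example: 10 -> 14 submissions/day, 45 -> 27 minutes per submission
m = pilot_metrics(10, 14, 45, 27)
# m == {"throughput_increase_pct": 40.0, "time_reduction_pct": 40.0}
```

These are the same two figures the Days 31-60 results template asks for, so computing them weekly means the Day 30 write-up is already done.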
Your pilot team is selected: 4 commercial property underwriters. What is the most important thing to do before they start using AI?
Days 31-60: Measure, expand, and build the case
Week 5-6: Pilot results and analysis
By day 30, you have 2-3 weeks of AI-assisted workflow data. Now compile the results:
Pilot Results Template:
UNDERWRITING SUBMISSION TRIAGE PILOT — RESULTS
Team: [X] commercial property underwriters
Duration: [X] weeks (baseline) + [X] weeks (AI-assisted)
THROUGHPUT
- Baseline: [X] submissions reviewed per underwriter per day
- AI-assisted: [X] submissions reviewed per underwriter per day
- Improvement: [X]% increase
TIME PER SUBMISSION
- Baseline: [X] minutes average
- AI-assisted: [X] minutes average (AI triage + human review)
- Reduction: [X]% decrease
QUALITY
- Submissions reviewed that were previously unreviewed: [X] per week
- AI triage accuracy (% of times underwriter agreed with AI's initial assessment): [X]%
- Issues caught by AI that underwriter might have missed: [list specific examples]
- Issues missed by AI that underwriter caught: [list specific examples]
TEAM FEEDBACK
- Overall satisfaction: [X/10]
- Top benefits cited: [list]
- Top concerns or limitations: [list]
EXTRAPOLATION
- If applied to the full underwriting team of [X] people: [projected throughput increase]
- Estimated additional premium opportunity: $[X] annually
Week 7-8: Second workflow deployment
With pilot results in hand, deploy AI to a second workflow. The natural second step depends on your first:
- If you started with underwriting triage: Add loss run analysis or referral prioritisation for the same underwriting team. They are already trained and comfortable with AI — adding adjacent workflows is fast.
- If you started with claims FNOL triage: Add claims document extraction (police reports, medical records, repair estimates) for the same claims team.
- Alternatively: Deploy the original workflow (submission triage or FNOL triage) to a second team or line of business. This tests whether the results replicate across different people and risk types.
Week 7-8: Parallel compliance deployment
While the primary workflow expands, start a parallel deployment for compliance:
- Regulatory bulletin summarisation (low-risk, high-value, requires no policyholder data)
- Rate filing narrative review
- Market conduct self-audit preparation
Compliance workflows are excellent parallel deployments because they involve no policyholder PII, produce immediate value, and build AI familiarity in the compliance team.
Days 61-90: Scale and institutionalise
Week 9-10: Leadership presentation and rollout approval
You now have 6-8 weeks of measured results from your primary workflow and 2-4 weeks from your second deployment. Prepare the leadership presentation:
AI DEPLOYMENT — RESULTS AND ROLLOUT PROPOSAL
EXECUTIVE SUMMARY
- [One sentence: what we did]
- [One sentence: what we measured]
- [One sentence: the key result]
- [One sentence: what we're proposing next]
PILOT RESULTS
[Use the results template above — specific numbers, not vague improvements]
ROI PROJECTION
- Pilot cost: $[X] (AI platform for pilot users)
- Pilot savings/revenue: $[X] (measured)
- Full deployment cost: $[X] (all proposed users)
- Projected annual impact: $[X]
- Combined ratio impact: [X] points on $[X] premium book
ROLLOUT PLAN
Phase 1 (complete): Submission triage for [team/LOB]
Phase 2 (current): [second workflow or second team]
Phase 3 (proposed): [expansion to additional LOBs and teams]
Phase 4 (proposed): [cross-functional deployment — underwriting + claims + compliance]
GOVERNANCE
- Data handling policy in place and working
- Quality review process established
- No data incidents during pilot
- Recommended enhancements for scale [if any]
RESOURCE REQUEST
- Enterprise AI licenses for [X] additional users: $[X]/month
- 0.5 FTE project coordination for Phase 3-4 rollout
- No additional technology infrastructure required
Week 11-12: Scaling to additional lines of business
With leadership approval, begin scaling the proven workflow to additional lines of business:
| Phase | Line of business | Team size | Timeline |
|---|---|---|---|
| Phase 1 (complete) | Commercial property | 4 underwriters | Weeks 3-8 |
| Phase 2 (complete) | Same team + loss run analysis | 4 underwriters | Weeks 7-10 |
| Phase 3 | Commercial auto + GL | 6 underwriters | Weeks 11-14 |
| Phase 4 | Workers' compensation | 4 underwriters | Weeks 15-18 |
| Phase 5 | Claims (FNOL + document extraction) | 8 adjusters | Weeks 15-18 |
| Phase 6 | Compliance (full deployment) | 3 analysts | Weeks 11-14 |
Key scaling principle: Each new line of business requires customised prompt templates (different ACORD forms, different appetite criteria, different loss characteristics) but the same fundamental workflow. Budget 1-2 days of prompt template development per new line of business.
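The ROI projection in the leadership presentation is simple arithmetic, including the combined ratio line. A back-of-envelope sketch with placeholder figures (all numbers are illustrative, not pilot results):

```python
# Back-of-envelope sketch of the ROI projection lines. All figures
# are placeholders; combined ratio impact here treats the annual
# impact as expense savings expressed as points on the premium book.
def roi_projection(annual_impact: float, deployment_cost: float,
                   premium_book: float) -> dict:
    return {
        "net_annual_benefit": annual_impact - deployment_cost,
        "roi_multiple": round(annual_impact / deployment_cost, 1),
        "combined_ratio_points": round(annual_impact / premium_book * 100, 2),
    }

# e.g. $1.5M annual impact, $150K full-deployment cost, $500M premium book
p = roi_projection(1_500_000, 150_000, 500_000_000)
# p["combined_ratio_points"] == 0.3
```

Even a few tenths of a point of combined ratio is material at scale, which is why the presentation template expresses the impact both in dollars and in ratio points.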
You are presenting pilot results to your carrier's executive committee. The pilot showed a 40% increase in submission throughput and an estimated $3.2M annual premium opportunity. The CFO asks: 'What's the risk if the AI makes a mistake that leads to a bad underwriting decision?' How do you respond?
Tool selection for insurance carriers
The AI tool landscape is evolving rapidly. Rather than recommending specific products (which will be outdated), here is a framework for evaluating tools against insurance-specific requirements.
Tier 1: Enterprise AI platforms (required)
These are the general-purpose AI platforms your underwriters, adjusters, and compliance analysts will use daily:
| Requirement | Why it matters for insurance |
|---|---|
| No training on your data | Policyholder PII and PHI must not be used to improve AI models |
| HIPAA BAA available | Required for claims workflows involving medical records |
| SOC 2 Type II | Baseline security certification for insurance regulatory expectations |
| Document upload (PDF, images) | ACORD forms, loss runs, and claims documents are PDFs and scanned images |
| 200K+ token context | Full submission packages and complex claims files exceed 100 pages |
| Enterprise admin controls | User management, access controls, audit logs |
| SSO integration | Required for enterprise deployment and access management |
Tier 2: Insurance-specific AI tools (evaluate at 90+ days)
Once your team is proficient with general AI, evaluate specialised tools:
- Underwriting workbench AI — tools like Federato, Sixfold, or Planck that integrate AI directly into underwriting workflows
- Claims AI — tools like Shift Technology, FRISS, or Tractable that specialise in claims fraud detection, damage assessment, or claims automation
- Compliance AI — tools like Ascend or RegTech platforms that specialise in regulatory change monitoring
Tier 3: Custom AI integrations (evaluate at 6+ months)
API-based integrations between AI platforms and your existing systems:
- Policy administration system integration (Guidewire, Duck Creek, Majesco)
- Claims management system integration
- Document management system integration
- Rating engine integration
Start with Tier 1. Do not wait for Tier 2 or Tier 3 integrations before deploying AI. A general-purpose enterprise AI platform used alongside your existing systems delivers immediate value. Deeper integrations come later as you scale.
Governance for policyholder data at enterprise scale
As AI deployment scales from a pilot team to the entire organisation, data governance must scale with it.
Governance framework for scaled deployment:
Data classification is the foundation. Every document type that flows through AI should be classified:
- Maintain the green/yellow/red classification from Day 1
- As new document types are introduced, classify them before they are processed through AI
- Annual review of classifications as regulations change
Access controls by role:
- Underwriters: access to submission processing, appetite matching, loss run analysis
- Claims adjusters: access to FNOL triage, document extraction, correspondence drafting
- Compliance: access to regulatory monitoring, filing review, complaint analysis
- Actuarial: access to loss development analysis, report review, benchmarking
- No one should have access to all capabilities — match AI access to job function
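"Match AI access to job function" reduces to a role-to-capability map that your platform's admin controls (or an internal wrapper) can enforce. A minimal sketch, assuming hypothetical capability names:

```python
# Sketch of role-based access to AI capabilities. Role and capability
# names are illustrative; enforce via platform admin controls or SSO groups.
ROLE_CAPABILITIES = {
    "underwriter": {"submission_processing", "appetite_matching", "loss_run_analysis"},
    "claims_adjuster": {"fnol_triage", "document_extraction", "correspondence_drafting"},
    "compliance": {"regulatory_monitoring", "filing_review", "complaint_analysis"},
    "actuarial": {"loss_development", "report_review", "benchmarking"},
}

def can_use(role: str, capability: str) -> bool:
    """Deny by default: unknown roles get no AI capabilities."""
    return capability in ROLE_CAPABILITIES.get(role, set())
```

The deny-by-default lookup is the point: no role's set contains every capability, so no one accumulates full access by accident.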
Audit trail requirements:
- Every AI interaction involving policyholder data should be logged
- Retain logs for the same period as the underlying insurance transaction records
- Regular audit of AI usage patterns to identify shadow usage or policy violations
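An audit record per AI interaction can be a small structured log line. A minimal sketch, assuming hypothetical field names (in practice this would feed your SIEM or log store, with retention matched to the underlying transaction records):

```python
import json
import time

# Sketch of one append-only audit record per AI interaction involving
# policyholder data. Field names are assumptions for illustration.
def audit_record(user: str, role: str, workflow: str,
                 doc_tier: str, prompt_chars: int) -> str:
    """Serialise one log entry as JSON for downstream log storage."""
    return json.dumps({
        "ts": time.time(),
        "user": user,
        "role": role,
        "workflow": workflow,
        "data_tier": doc_tier,   # green/yellow from the classification policy
        "prompt_chars": prompt_chars,
    })
```

Logging the data tier alongside the workflow is what makes the "shadow usage" audit practical: a red-tier entry, or a workflow outside the user's role, is an immediate flag.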
Quality review process:
- All AI-generated output that will be shared externally (with brokers, policyholders, regulators, or reinsurers) must be reviewed by a qualified professional before distribution
- Establish a clear review workflow: AI generates draft, professional reviews and approves, then the document is sent
- Track AI accuracy over time by workflow (what percentage of AI output requires substantive correction?)
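Tracking "what percentage of AI output requires substantive correction" only needs two counters per workflow. A minimal sketch (class and field names are illustrative):

```python
from collections import defaultdict

# Sketch of per-workflow correction-rate tracking for the quality
# review process. Names are illustrative.
class QualityTracker:
    def __init__(self) -> None:
        # workflow -> [outputs reviewed, outputs needing correction]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, workflow: str, needed_correction: bool) -> None:
        self.counts[workflow][0] += 1
        self.counts[workflow][1] += int(needed_correction)

    def correction_rate(self, workflow: str) -> float:
        reviewed, corrected = self.counts[workflow]
        return corrected / reviewed if reviewed else 0.0

t = QualityTracker()
for needed_fix in (False, False, True, False):
    t.record("submission_triage", needed_fix)
# t.correction_rate("submission_triage") == 0.25
```

A correction rate that drifts upward for one workflow is an early signal to revisit that workflow's prompt templates before scaling it further.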
Incident response:
- If policyholder data is inadvertently processed through a non-approved AI tool, treat it as a data incident
- Follow your existing data breach notification procedures
- Document and report through your information security program as required by the NAIC Insurance Data Security Model Law
Key takeaways
- Start with submission triage (underwriting) or FNOL triage (claims) — these workflows are high-volume, document-heavy, and produce measurable results within weeks.
- Days 1-30: Procure an enterprise AI tool, establish data governance (one-page green/yellow/red policy), select a pilot team, measure the baseline, build prompt templates, train the team, and launch.
- Days 31-60: Compile pilot results with specific metrics, expand to a second workflow or team, deploy compliance workflows in parallel.
- Days 61-90: Present results to leadership with ROI projections, secure rollout approval, begin scaling to additional lines of business.
- Tool selection: Start with a Tier 1 enterprise AI platform. Evaluate insurance-specific tools and custom integrations after your team is proficient with the general platform.
- Data governance scales with deployment — classification, access controls, audit trails, quality review, and incident response must all grow as usage grows.
You now have everything you need to begin deploying AI at your insurance carrier. The technology is ready. The business case is clear. The question is no longer whether to adopt AI — it is how quickly you can execute.
Module 8 — Final Assessment
What is the recommended first AI workflow for most insurance carriers?
What must be established before a pilot team starts using AI with insurance documents?
You are presenting pilot results to the CFO. The pilot showed 40% throughput improvement but the CFO asks about the risk of AI errors affecting underwriting decisions. What is the best response?