You don't need to build the model — you need to direct it
This module is not a machine learning lecture. You will not need to train a neural network or write production code; the short code sketches that appear below are optional illustrations, not prerequisites.
What you need is a working mental model — an understanding of what AI can do with your submissions, your claims files, and your policy forms. And equally important, an understanding of where it will confidently produce output that is wrong, incomplete, or dangerously misleading.
Think of AI the way you would think of a new hire with an insurance degree from a top university: well-read, fast, and eager, but lacking the judgment that comes from years of reviewing losses, negotiating with brokers, and seeing how claims actually develop over time. They need your direction, your context, and your review.
Which is the best analogy for how you should think about AI in your insurance practice?
What an LLM actually does — in insurance terms
A large language model is a pattern recognition system trained on vast amounts of text. Imagine an analyst who has read every publicly available insurance policy form, underwriting manual, claims handling guide, actuarial textbook, regulatory bulletin, NAIC model law, ISO classification manual, and AM Best report ever published — and developed an extraordinarily sophisticated understanding of how insurance language works.
That is roughly what an LLM is. It does not "know" things the way you do from years of reviewing losses, negotiating treaty placements, or watching how IBNR develops on a long-tail casualty book. It recognises patterns and generates responses that follow those patterns.
Excellent at:
- Extraction — pulling key risk data from ACORD forms, supplemental applications, loss runs, and broker submissions
- Synthesis — combining information from a submission package, loss history, and underwriting guidelines into a coherent risk summary
- Comparison — identifying differences between policy forms, endorsements, or treaty wordings
- First drafts — well-structured declination letters, coverage analyses, claims status reports, and regulatory filing summaries
Unreliable at:
- Actuarial calculations — it is a language model, not a loss development triangle; always verify loss development factors, IBNR estimates, and rate calculations in your actuarial software
- Current market data — it cannot tell you today's rate environment in your ISO class, or the current reinsurance market pricing for your layers
- Factual claims without source documents — it can hallucinate ISO class codes, invent state regulatory requirements, or fabricate AM Best ratings
The context window — how many pages of a claims file can AI read?
The context window is how much information AI can hold in working memory during a single conversation. Think of it as the size of the claims file or submission package you can hand to your AI analyst before it starts working.
Current models handle approximately 200,000 tokens, roughly 500 pages of text (about 400 tokens per page), in a single conversation.
| Context size | What it holds | Insurance use case |
|---|---|---|
| Small (~8K tokens) | ~20 pages | Quick question about a single policy endorsement |
| Medium (~32K tokens) | ~80 pages | Full personal lines application with supplemental forms |
| Large (~200K tokens) | ~500 pages | Complete commercial lines submission package with loss runs, financials, and engineering reports |
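If you want to sanity-check whether a file will fit before uploading, the arithmetic is simple. Here is a minimal Python sketch, assuming the rough 400-tokens-per-page ratio above; real counts vary with formatting and density, and the exact limit depends on your model and plan:

```python
# Rough sizing heuristic: will this document fit in the context window?
# Assumes ~400 tokens per page, the ratio implied by 200K tokens ~ 500 pages.
# Real counts vary with layout; use the provider's tokenizer to be exact.

TOKENS_PER_PAGE = 400

def fits_in_context(pages: int, context_tokens: int = 200_000,
                    reserve_for_reply: int = 8_000) -> bool:
    """Estimate whether a document of `pages` pages fits, leaving room for the answer."""
    estimated_tokens = pages * TOKENS_PER_PAGE
    return estimated_tokens + reserve_for_reply <= context_tokens

# A full commercial submission: ACORD forms + SOV + five years of loss runs + engineering report
print(fits_in_context(pages=350))   # True  -> one conversation
print(fits_in_context(pages=600))   # False -> split the file or summarise parts first
```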
What this means for underwriting: You can upload an entire commercial property submission — the ACORD 125/140, Statement of Values, five years of loss runs, engineering report, and broker market commentary — and ask AI to extract every key risk characteristic, flag concerns in the loss history, and compare the submission against your appetite guidelines. In one conversation.
What this means for claims: You can upload a complete liability claims file — the FNOL report, police report, medical records, adjuster notes, expert reports, and coverage analysis — and ask AI to build a chronological timeline, identify inconsistencies, and flag potential subrogation opportunities.
Critical insight: The quality of AI output depends entirely on the quality of the documents you provide. Give AI the actual submission or claims file — do not ask it to work from memory or summarise "typical" commercial property risks. It works best when reading source material.
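For readers curious what "give AI the actual file" looks like under the hood, here is a minimal sketch using the `anthropic` Python SDK. The model name, file name, and extraction field list are illustrative assumptions, not a prescribed setup; in practice your enterprise platform team wires this into approved tooling:

```python
# Minimal sketch of document-grounded extraction: send the actual submission text,
# not a request to recall "typical" risks. Assumes the `anthropic` Python SDK;
# the model name, file name, and field list below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

submission_text = open("acme_manufacturing_submission.txt").read()  # hypothetical file

prompt = f"""You are an underwriting assistant. From the submission below, extract:
insured name, NAICS code, locations, construction class, protection class,
five-year loss summary, and requested coverages and limits.
Flag anything outside a standard commercial property appetite.

<submission>
{submission_text}
</submission>"""

response = client.messages.create(
    model="claude-sonnet-4-5",   # placeholder; use your approved enterprise model
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)  # structured risk summary, for human review
```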
Insurance capabilities — what works today
These are not theoretical. These are workflows insurance professionals are using AI for right now.
Submission triage — Upload a broker submission package and get a structured risk summary: insured name, SIC/NAICS code, ISO class, locations, building construction, protection class, prior loss history, requested coverages and limits, and an initial assessment of whether the submission fits your underwriting appetite. What takes 45-90 minutes manually takes 5-10 minutes with AI.
Policy form comparison — Give AI two policy forms (say, an expiring policy and a proposed renewal, or your form versus a competitor's manuscript form) and it will produce a clause-by-clause comparison highlighting every material difference in coverage, conditions, and exclusions.
Loss run analysis — Upload five to ten years of loss runs across multiple carriers and AI will standardise the data, calculate loss ratios by year and by cause of loss, identify the largest losses, and flag development patterns that warrant attention (a simplified sketch of the loss-ratio arithmetic appears after this list).
Claims documentation review — Upload a claims file and AI will extract the key facts: date of loss, cause of loss, parties involved, injury details, coverage applicable, reserves set, and a chronological narrative of the claim's development.
Regulatory document review — Upload a state insurance department bulletin or circular letter and AI will summarise the key requirements, identify which lines of business are affected, and compare the requirements against your current practices.
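The loss-ratio arithmetic referenced above is mechanical once the data is standardised. A minimal sketch, with invented figures:

```python
# A simplified sketch of the arithmetic behind loss run analysis: loss ratios by year.
# Figures are invented for illustration; real loss runs arrive in carrier-specific
# formats that must first be standardised (the step AI handles well).

loss_runs = [  # hypothetical standardised records: (policy_year, earned_premium, incurred_loss)
    (2020, 1_200_000, 540_000),
    (2021, 1_250_000, 1_410_000),   # a spike worth investigating
    (2022, 1_300_000, 610_000),
    (2023, 1_400_000, 690_000),
    (2024, 1_450_000, 720_000),
]

for year, premium, incurred in loss_runs:
    ratio = incurred / premium
    flag = "  <-- review largest losses" if ratio > 1.0 else ""
    print(f"{year}: loss ratio {ratio:.0%}{flag}")
```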
Which of these tasks would AI be LEAST reliable at performing without human verification?
What AI cannot do — and what still requires human judgment
Knowing the limits prevents both over-reliance and under-utilisation.
AI cannot replace actuarial judgment. It can summarise an actuarial report, extract loss development factors from a triangle, and identify patterns in claims data. But it cannot determine whether a 5-year historical loss development pattern will hold in a changing legal environment, or whether a tail factor assumption is appropriate for an emerging risk class. Actuarial opinion requires actuarial expertise.
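To make that division of labour concrete: the age-to-age factor arithmetic below is the mechanical part AI can assist with; judging whether the resulting pattern will hold, and selecting the tail, is the part it cannot. The cumulative paid triangle is invented for illustration:

```python
# Age-to-age loss development factors from a cumulative paid-loss triangle.
# The triangle is invented for illustration. Computing these factors is
# mechanical (AI-friendly); deciding whether the pattern will hold, and
# selecting a tail factor, is actuarial judgment.

triangle = {  # accident year -> cumulative paid at 12, 24, 36 months
    2021: [1_000_000, 1_500_000, 1_725_000],
    2022: [1_100_000, 1_650_000],
    2023: [1_200_000],
}

# Volume-weighted age-to-age factors: sum(later) / sum(earlier) across years
for age in range(2):  # 12->24 and 24->36 months
    earlier = sum(row[age] for row in triangle.values() if len(row) > age + 1)
    later = sum(row[age + 1] for row in triangle.values() if len(row) > age + 1)
    print(f"{12*(age+1)}-to-{12*(age+2)} month factor: {later/earlier:.3f}")
```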
AI cannot predict catastrophes. It can analyse historical catastrophe loss data and summarise RMS or AIR model outputs, but it cannot tell you when the next Category 5 hurricane will make landfall or whether this year's wildfire season will be worse than last year. Catastrophe modelling requires specialised tools and expert interpretation.
AI does not know your book of business. It has no access to your internal underwriting data, your historical loss experience by class, or your current portfolio concentrations unless you provide this information. Your proprietary data advantage is yours — AI amplifies it but does not replace it.
AI makes confident-sounding errors. This is the most dangerous limitation. AI rarely volunteers "I am not sure about this ISO classification" or "I may have the wrong state regulatory requirement." It presents incorrect information with the same professional tone as correct information. A hallucinated surplus lines filing requirement or an invented loss development factor looks identical to a real one in the output. You must verify.
AI cannot assess credibility. It cannot tell you whether a claimant's statement rings true, whether a broker is overselling a risk's quality, or whether loss run data has been selectively presented. Human judgment on credibility remains essential.
Data privacy — protecting policyholder PII and PHI
Insurance data includes some of the most sensitive personal information in any industry — Social Security numbers from applications, protected health information from workers' compensation and disability claims, financial data from commercial submissions, and medical records from life and health underwriting.
Enterprise AI plans (Claude for Enterprise, ChatGPT Enterprise, Microsoft 365 Copilot):
- Your data is not used to train the model. Full stop.
- Encrypted in transit and at rest
- Access controls and audit logs available
- Data residency options for regulated environments
- SOC 2 Type II certification
The real risk in insurance: An adjuster using a free personal AI account to summarise medical records from a workers' compensation claim. A junior underwriter pasting financial statements from a commercial submission into a consumer chatbot. Free-tier AI services may use your inputs for model training and lack enterprise-grade security.
Insurance-specific privacy requirements:
- HIPAA-protected health information flows into claims files, particularly workers' compensation, disability, and life claims with medical underwriting; even where the carrier itself is not a HIPAA covered entity (workers' compensation is generally exempt), this data warrants equivalent safeguards
- GLBA governs the privacy of personal financial information collected during the application and underwriting process; for insurers it is enforced by state insurance regulators rather than through Regulation P
- State privacy laws (CCPA, state insurance data security model laws) impose additional requirements that vary by jurisdiction
- NAIC Insurance Data Security Model Law (adopted in over 20 states) requires carriers to maintain comprehensive information security programs
What you need to do:
- Use enterprise-tier AI tools for any workflow involving policyholder PII, PHI, or confidential commercial data
- Establish a data classification policy — what data can be processed through AI, what must be redacted first, and what is prohibited entirely
- Redact when possible — if you only need AI to review coverage terms, strip the policyholder's personal data before uploading (a minimal redaction sketch follows this list)
- Audit shadow usage immediately — assume that people across underwriting, claims, and compliance are already using free AI tools with sensitive data
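As a concrete example of the redaction step above, here is a minimal Python sketch using standard-library regex. The patterns are deliberately simple illustrations, not an exhaustive PII inventory; production redaction should rely on a vetted PII/PHI detection tool:

```python
# A minimal redaction sketch: masks US SSNs and phone numbers before text
# leaves your environment. Patterns are illustrative, not exhaustive --
# production redaction should use a vetted PII/PHI tool, not ad-hoc regex.
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\(?\b\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(redact("Claimant Jane Doe, SSN 123-45-6789, cell (555) 867-5309."))
# -> Claimant Jane Doe, SSN [REDACTED SSN], cell [REDACTED PHONE].
```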
A claims adjuster wants to use a free AI chatbot to summarise medical records from a workers' compensation claim. What should you do?
Key takeaways
- AI is a fast junior analyst, not a replacement for experienced underwriters, adjusters, or actuaries — it needs your direction and review.
- The context window lets you process entire submission packages, claims files, and policy form comparisons in a single conversation.
- AI excels at extraction, synthesis, comparison, and first drafts of insurance documents.
- AI is unreliable for actuarial calculations, catastrophe prediction, credibility assessment, and current market data.
- Data privacy is solvable — use enterprise-tier tools with proper HIPAA, GLBA, and state privacy law compliance for policyholder PII and PHI.
Next up: AI for Underwriting.
Module 2 — Final Assessment
What is the best analogy for an LLM's context window in an insurance context?
Which task should you NEVER trust an LLM to do without actuarial verification?
What is the primary data privacy risk when using AI in insurance operations?