The documentation assembly line
Clinical documentation flows through a healthcare organisation like an assembly line: a physician documents an encounter, a CDI specialist reviews it for completeness, a coder assigns ICD-10 and CPT codes, a biller submits the claim, and when something goes wrong — a denial, an audit, a quality measure gap — someone goes back to the original documentation to figure out what happened.
Every step in this chain is text-heavy, time-consuming, and consequential. A vague clinical note leads to an unspecified ICD-10 code, which leads to a lower-weighted DRG, which leads to reduced reimbursement. A missing comorbidity in the documentation means a missed CC/MCC that changes the payment by thousands of dollars. A poorly drafted prior auth letter gets denied, triggering a 30-day appeal cycle while the patient waits for treatment.
AI does not replace any person in this chain. The physician still makes clinical decisions, the coder still validates codes, the biller still manages the revenue cycle. What AI does is accelerate each step: drafting the note, suggesting the codes, generating the query, writing the letter. The human shifts from creator to reviewer — and that shift can cut documentation time by 40-60%.
The critical principle for this entire module: AI assists documentation. Humans make clinical decisions. A physician reviews and signs every clinical note. A certified coder validates every ICD-10 code. A clinician approves every prior auth submission. AI generates drafts. Humans own the final product.
Which part of the documentation chain is the biggest bottleneck in your organisation?
Clinical note summarisation and discharge summary drafting
The highest-impact starting point for most health systems is AI-assisted clinical note summarisation. The workflow is straightforward: the AI reads the encounter documentation (or the full inpatient record for a discharge summary) and generates a structured summary for physician review.
Here is a prompt template for discharge summary drafting:
ROLE: You are a clinical documentation specialist preparing a discharge summary
from an inpatient medical record.
SOURCE DATA: The following is the complete inpatient record for a [length of stay]-day
hospitalisation, including admission H&P, daily progress notes, consultant notes,
operative reports (if applicable), lab results, imaging reports, medication
administration records, and nursing assessments.
[Paste or attach the inpatient record]
TASK: Draft a discharge summary with the following sections:
1. PATIENT DEMOGRAPHICS: Age, sex (do not include name or MRN)
2. ADMISSION DIAGNOSIS: As documented in the admission H&P
3. DISCHARGE DIAGNOSES: All active diagnoses at discharge, listed by clinical priority
4. HOSPITAL COURSE: Chronological narrative covering:
- Reason for admission and initial presentation
- Key diagnostic findings (labs, imaging, procedures)
- Treatment provided and clinical response
- Complications (if any)
- Consultant involvement and recommendations
5. PROCEDURES PERFORMED: List with dates
6. DISCHARGE MEDICATIONS: Complete list with dose, frequency, route
- Flag any NEW medications started during this hospitalisation
- Flag any CHANGED medications (dose or frequency changes)
- Flag any DISCONTINUED medications with reason if documented
7. DISCHARGE CONDITION: As documented
8. FOLLOW-UP INSTRUCTIONS: Appointments, activity restrictions, diet, wound care
9. PENDING RESULTS: Labs or studies with results still outstanding at discharge
CONSTRAINTS:
- Include only information that is documented in the source record
- Do not make clinical inferences or treatment recommendations
- If information for any section is not documented, state "Not documented in
available records" rather than omitting the section
- For each diagnosis, note the specific progress note date where it is documented
QUALITY FLAGS:
- Flag any medication discrepancy between the last progress note and the discharge
medication reconciliation
- Flag any follow-up appointment referenced in a note but not in the discharge instructions
- Flag any pending result that could change the discharge diagnoses

This prompt generates a structured first draft that a physician can review in 3-5 minutes rather than writing from scratch in 20-30 minutes. The quality flags are critical — they direct the physician's attention to the items most likely to need correction.
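Operationally, a template like this usually lives in a small script that fills the per-encounter placeholders, submits the prompt through the organisation's approved HIPAA-compliant LLM endpoint, and pulls the quality flags to the top of the physician's review screen. A minimal Python sketch; the condensed template text, the QUALITY FLAGS heading convention, and the field names are illustrative assumptions, not a reference implementation:

from string import Template

# Condensed skeleton of the discharge summary prompt above; $los and $record
# are the per-encounter placeholders. In practice the full section list and
# constraints are pasted in verbatim.
DISCHARGE_PROMPT = Template(
    "ROLE: You are a clinical documentation specialist preparing a discharge\n"
    "summary from an inpatient medical record.\n\n"
    "SOURCE DATA: The following is the complete inpatient record for a\n"
    "$los-day hospitalisation.\n\n$record\n\n"
    "TASK: ... (sections 1-9, constraints, and quality flags as defined above)\n"
)

def build_discharge_prompt(length_of_stay_days: int, record_text: str) -> str:
    # The record must already have come through the organisation's
    # HIPAA-compliant pipeline; nothing here de-identifies or redacts.
    return DISCHARGE_PROMPT.substitute(los=length_of_stay_days, record=record_text)

def extract_quality_flags(draft: str) -> list[str]:
    # Pull the lines the model emitted under a QUALITY FLAGS heading so the
    # review UI can surface them first. Assumes the draft echoes that heading.
    flags, in_flags = [], False
    for line in draft.splitlines():
        if "QUALITY FLAGS" in line.upper():
            in_flags = True
            continue
        if in_flags:
            if not line.strip():
                break
            flags.append(line.strip("- ").strip())
    return flags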
ICD-10 code suggestion and validation
ICD-10 coding is the translation layer between clinical documentation and the revenue cycle. There are over 72,000 ICD-10-CM diagnosis codes and over 78,000 ICD-10-PCS procedure codes. Assigning the right codes requires reading the clinical documentation, understanding the clinical context, and matching to the specificity requirements of ICD-10.
AI does not replace certified coders. It accelerates their work by pre-populating likely codes based on the documentation, flagging documentation that lacks the specificity needed for optimal coding, and identifying potential CC/MCC opportunities that might be missed on first pass.
ROLE: You are a certified coding specialist reviewing clinical documentation for
ICD-10-CM code assignment.
SOURCE DATA: The following is a clinical encounter note for an [inpatient/outpatient]
visit. Include the attending physician's note, any consultant notes, lab results,
and imaging reports for this encounter.
[Paste the clinical documentation]
TASK:
1. Identify all documentable diagnoses from the clinical record
2. For each diagnosis, suggest the most specific ICD-10-CM code supported by
the documentation
3. For each suggested code, cite the specific documentation that supports it
(note author, date, and relevant text)
4. Identify the principal diagnosis and sequencing rationale
5. Identify any CC/MCC diagnoses that are documented and could affect DRG assignment
6. Flag any diagnoses where the documentation lacks specificity for optimal coding:
- Laterality not specified
- Acuity not documented (acute vs chronic)
- Type not specified (e.g., "diabetes" without type 1 or type 2)
- Stage or severity not documented
- Causal relationship not established (e.g., "hypertension" and "CKD" documented
but causal relationship not stated)
OUTPUT FORMAT:
Table with columns: Diagnosis | Suggested ICD-10-CM | Supporting Documentation |
Specificity Level (Optimal / Needs Clarification) | CC/MCC Status | Notes
CONSTRAINTS:
- Only suggest codes that are directly supported by physician documentation
- Do not assign codes based on lab values alone — physician interpretation
must be documented
- Do not suggest codes where the documentation uses terms like "possible,"
"probable," or "rule out" for outpatient encounters (these are only codeable
in the inpatient setting per ICD-10-CM Official Guidelines)
- Flag any suggested code where you have less than high confidence in the match

The distinction between inpatient and outpatient coding guidelines is critical in this prompt. In inpatient coding, conditions documented as "probable" or "suspected" at the time of discharge can be coded as if confirmed. In outpatient coding, they cannot — you code only confirmed diagnoses. AI needs to know which setting it is working in.
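Before any suggested code reaches a coder's queue, the output table is also worth a mechanical pre-screen. A minimal sketch that checks the structural format of each suggested ICD-10-CM code and flags anything the model did not mark high-confidence; format validity says nothing about clinical correctness, which remains the certified coder's call, and the row field names are assumptions:

import re

# ICD-10-CM structural pattern: a letter, a digit, an alphanumeric, then an
# optional decimal point and one to four alphanumerics (e.g. E11.9, S72.001A).
ICD10CM_PATTERN = re.compile(r"^[A-Z][0-9][0-9A-Z](?:\.[0-9A-Z]{1,4})?$")

def screen_suggested_codes(rows: list[dict]) -> list[dict]:
    # Each row is one line of the output table, e.g.
    # {"diagnosis": "Type 2 diabetes", "code": "E11.9", "confidence": "high"}.
    flagged = []
    for row in rows:
        reasons = []
        if not ICD10CM_PATTERN.match(row["code"].strip().upper()):
            reasons.append("malformed ICD-10-CM code")
        if row.get("confidence", "").lower() != "high":
            reasons.append("model confidence below high")
        if reasons:
            flagged.append({**row, "review_reasons": reasons})
    return flagged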
What is your organisation's current first-pass coding accuracy rate?
Clinical Documentation Improvement queries
Clinical Documentation Improvement (CDI) is the process of reviewing clinical documentation to ensure it accurately reflects the severity of illness and the resources used to treat the patient. CDI specialists review charts — typically concurrent with the hospitalisation — and generate queries when documentation is incomplete, ambiguous, or lacks the specificity needed for accurate coding.
The challenge: CDI query volume is enormous. A busy CDI programme at a 500-bed hospital might generate 3,000-5,000 queries per month. Each query must be specific, clinically relevant, and compliant with the ACDIS/AHIMA guidelines for compliant queries (non-leading, open-ended, and based on clinical indicators).
AI can help on two fronts:
Pre-screening documentation for query opportunities. Instead of a CDI specialist reading every chart from scratch, AI scans the documentation and flags charts where specificity is missing, CC/MCC opportunities exist, or clinical indicators suggest a diagnosis that is not documented.
Drafting compliant queries. Once an opportunity is identified, AI drafts the query following compliant query guidelines.
ROLE: You are a CDI specialist reviewing inpatient clinical documentation for
query opportunities.
SOURCE DATA: The following is the current inpatient record for a patient on
day [X] of hospitalisation. Include all progress notes, lab results, imaging,
and medication records to date.
[Paste the clinical documentation]
TASK:
1. Scan the documentation for the following query opportunity types:
a. SPECIFICITY: Diagnoses documented without required specificity
(laterality, type, acuity, stage, causal relationship)
b. CLINICAL INDICATORS: Lab values, vital signs, imaging findings, or
medication orders that suggest a clinical condition not yet documented
as a diagnosis (e.g., elevated lactate + tachycardia + documented
infection but "sepsis" not documented)
c. CC/MCC OPPORTUNITIES: Conditions referenced in nursing notes or
consultant notes but not captured as a diagnosis by the attending
d. CONFLICTING DOCUMENTATION: Different providers documenting different
diagnoses or severity levels for the same condition
2. For each query opportunity, draft a compliant query:
- Reference the specific clinical indicators from the record
- Use open-ended, non-leading language
- Do not suggest a specific diagnosis — ask the physician to clarify
- Follow this format:
"Based on the clinical indicators documented in [source note, date] —
specifically [clinical indicators] — please clarify the clinical
significance of these findings. Is there an underlying condition that
should be documented?"
CONSTRAINTS:
- Queries must be compliant with ACDIS/AHIMA guidelines
- Never use leading language that suggests a specific diagnosis
- Only generate queries where clinical indicators in the record support
the inquiry — never query based on absence of documentation alone
- Distinguish between concurrent review queries (during hospitalisation)
  and retrospective review queries (post-discharge)

A well-designed CDI query drafted by AI saves the CDI specialist 5-10 minutes per query while ensuring compliance with query guidelines. Across 3,000 queries per month, that represents 250-500 hours of CDI specialist time returned to higher-value review work.
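The pre-screening front lends itself to a deterministic first pass before any LLM is involved. A minimal sketch of a rule-based indicator check using the sepsis example from query type (b) above; the field names and thresholds are illustrative assumptions, not a validated screening rule:

def flag_possible_sepsis_query(chart: dict) -> str | None:
    # Mirrors query type (b): elevated lactate plus tachycardia plus a
    # documented infection, with no sepsis diagnosis yet documented.
    # Thresholds are illustrative, not clinical criteria.
    indicators = []
    if chart.get("lactate_mmol_l", 0.0) > 2.0:
        indicators.append(f"lactate {chart['lactate_mmol_l']} mmol/L")
    if chart.get("heart_rate_bpm", 0) > 100:
        indicators.append(f"heart rate {chart['heart_rate_bpm']} bpm")
    if chart.get("infection_documented") and not chart.get("sepsis_documented"):
        indicators.append("documented infection without a sepsis diagnosis")
    # Require all three indicator groups before surfacing the chart; the CDI
    # specialist, not the rule, decides whether a query is actually warranted.
    if len(indicators) == 3:
        return "CDI review suggested: " + "; ".join(indicators)
    return None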
Prior authorisation letter drafting
Prior authorisation letter drafting is one of the highest-ROI use cases for AI in healthcare operations. The workflow is predictable: assemble clinical documentation from the EHR, match it to the payer's medical necessity criteria, and write a letter that presents the clinical justification in the format the payer expects.
ROLE: You are a prior authorisation specialist drafting a medical necessity letter
for submission to a health plan.
SOURCE DATA:
- Patient clinical summary (de-identified or via HIPAA-compliant system):
[Diagnosis, treatment history, current clinical status, relevant lab/imaging results]
- Requested service: [Procedure, medication, or treatment requiring authorisation]
- Payer: [Health plan name]
- Payer medical necessity criteria: [Paste the payer's published criteria for
this service, if available]
TASK: Draft a prior authorisation letter that includes:
1. HEADER: Date, payer information, member information (placeholder), provider
information, requested service with CPT/HCPCS code
2. CLINICAL SUMMARY: Concise presentation of the patient's condition, diagnosis
(with ICD-10 code), and relevant clinical history
3. MEDICAL NECESSITY JUSTIFICATION:
- Map the patient's clinical data directly to the payer's medical necessity criteria
- For each criterion, cite the specific clinical evidence that satisfies it
- If the patient has failed prior treatments (step therapy), document each
prior treatment, duration, and outcome
4. SUPPORTING EVIDENCE: Reference relevant clinical guidelines (ACR Appropriateness
Criteria, NCCN Guidelines, AHA/ACC Guidelines, etc.) that support the
requested treatment
5. CONCLUSION: Clear statement of medical necessity with request for authorisation
CONSTRAINTS:
- Use only clinical information provided in the source data — do not fabricate
or infer clinical details
- If the source data does not address a specific payer criterion, flag it as
"ADDITIONAL DOCUMENTATION NEEDED: [specific criterion]"
- Do not include any language that could be interpreted as a clinical
recommendation — this letter documents and presents existing clinical decisions
- Format for the specific payer's submission requirements if known

This template generates a submission-ready first draft in under a minute. The physician or PA specialist reviews, confirms the clinical accuracy, and submits. Compare that to the current process of 20-40 minutes of manual assembly per letter.
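The ADDITIONAL DOCUMENTATION NEEDED constraint can also be enforced mechanically once the draft comes back, before anyone reads it. A minimal sketch that checks the drafted letter against a keyword list per payer criterion; keyword matching is a crude stand-in for whatever structured criteria the payer actually publishes:

def find_documentation_gaps(letter_text: str,
                            payer_criteria: dict[str, list[str]]) -> list[str]:
    # payer_criteria maps a criterion name to keywords that indicate it was
    # addressed, e.g. {"failed conservative therapy": ["physical therapy",
    # "NSAID"]}. Any criterion with no matching language in the draft is
    # flagged using the same wording the prompt's constraints call for.
    text = letter_text.lower()
    gaps = []
    for criterion, keywords in payer_criteria.items():
        if not any(kw.lower() in text for kw in keywords):
            gaps.append(f"ADDITIONAL DOCUMENTATION NEEDED: {criterion}")
    return gaps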
Appeal letter generation for denied claims
When a prior auth request or a claim is denied, the appeal process begins. Appeals are even more document-intensive than the initial submission because you need to address the specific reason for denial, provide additional clinical evidence, and often cite peer-reviewed literature or clinical guidelines that support your case.
Approximately 50-60% of appealed denials are overturned — which means the initial denial was often incorrect. The barrier is not the clinical merit of the case but the time and effort required to assemble a compelling appeal.
ROLE: You are a denials management specialist drafting a first-level appeal letter
for a denied claim or prior authorisation.
SOURCE DATA:
- Original prior auth letter or claim submission
- Denial letter with specific reason for denial
- Additional clinical documentation obtained since the original submission
- Patient clinical record (relevant sections)
- Applicable clinical guidelines and payer policy
TASK: Draft an appeal letter that includes:
1. REFERENCE: Original submission details, denial reference number, denial date
2. DENIAL REASON: Restate the payer's specific reason for denial
3. POINT-BY-POINT REBUTTAL:
- For each reason cited in the denial, provide a specific counter-argument
- Cite clinical documentation that directly addresses the denial reason
- If the denial cited "not medically necessary," map the clinical evidence
to the payer's own published medical necessity criteria
- If the denial cited "insufficient documentation," identify and present
the additional documentation
4. CLINICAL GUIDELINE SUPPORT: Reference current clinical practice guidelines
that support the medical necessity of the requested service:
- Guideline name, publishing organisation, year
- Specific recommendation relevant to this case
- Strength of evidence/recommendation grade if available
5. PEER-TO-PEER REQUEST: If applicable, request a peer-to-peer review between
the treating physician and the payer's medical director
6. REGULATORY CITATIONS: If the denial appears to violate state prompt-pay laws,
CMS coverage determination requirements, or parity laws, note the
applicable regulation
CONSTRAINTS:
- Address only the specific denial reasons stated in the denial letter
- Do not introduce new arguments unrelated to the stated denial reason
- Cite only real, verifiable clinical guidelines — do not fabricate references
- Flag any denial reason where the available clinical documentation is
genuinely insufficient — the honest assessment is more valuable than a
  weak argument

A strong appeal letter drafted by AI and reviewed by the clinical team can be submitted within days rather than weeks. Given that the average denied claim represents $5,000-$15,000 in revenue, and overturn rates on appeal average 50-60%, the financial case for faster appeals is straightforward.
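Those two figures make the expected value of a faster appeal easy to estimate. A back-of-the-envelope calculation using this section's midpoints; the staffing numbers are assumptions for illustration:

# Midpoints from this section: $5,000-$15,000 per denied claim, 50-60% overturn.
avg_denied_claim = 10_000
overturn_rate = 0.55
# Assumptions: one hour of combined draft review and clinical sign-off per
# appeal, at a loaded staff cost of $75/hour.
hours_per_appeal = 1.0
loaded_hourly_cost = 75

expected_recovery = avg_denied_claim * overturn_rate     # $5,500 per appeal
cost_per_appeal = hours_per_appeal * loaded_hourly_cost  # $75 per appeal
print(f"Expected net recovery per appeal: ${expected_recovery - cost_per_appeal:,.0f}")
# -> Expected net recovery per appeal: $5,425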
What is your organisation's current appeal rate for denied claims?
Medical record review for audits and quality measures
Healthcare organisations face a steady stream of audits and quality reporting requirements: CMS Recovery Audit Contractors (RACs), Medicare Administrative Contractors (MACs), Office of Inspector General (OIG) audits, commercial payer audits, HEDIS measure reporting, and internal compliance reviews.
Each of these requires reviewing medical records — sometimes hundreds or thousands of them — to verify that documentation supports the billed services, assigned codes, and reported quality measures.
AI accelerates this review process:
ROLE: You are a medical records auditor conducting a retrospective review for
[audit type: RAC / internal compliance / HEDIS / payer audit].
SOURCE DATA: The following is the complete medical record for encounter
[encounter type] on [date].
[Paste the clinical documentation]
TASK:
1. Verify that the documented diagnoses support the assigned ICD-10-CM codes:
[List the billed codes]
2. Verify that the documented procedures support the assigned CPT codes:
[List the billed procedure codes]
3. For each code, assess:
- Is the diagnosis/procedure clearly documented by the treating physician?
- Does the documentation meet the specificity requirements of the assigned code?
- Is the principal diagnosis sequencing correct per ICD-10-CM Official Guidelines?
- Are any CC/MCC codes supported by the documentation, or are they unsupported?
4. For HEDIS measure review (if applicable):
- Does the documentation demonstrate [specific HEDIS measure] compliance?
- Is the qualifying encounter, diagnosis, or intervention documented within
the measurement period?
- Are exclusion criteria documented if the measure appears non-compliant?
OUTPUT: Audit finding for each code:
- SUPPORTED: Documentation clearly supports the assigned code
- QUERY NEEDED: Documentation exists but lacks specificity — identify what is missing
- UNSUPPORTED: Documentation does not support the assigned code — cite the gap
- ADDITIONAL CODE OPPORTUNITY: Documentation supports a code that was not assigned
CONSTRAINTS:
- Apply ICD-10-CM Official Guidelines for Coding and Reporting consistently
- Do not apply clinical judgment — assess only whether documentation supports
the assigned codes
- Flag any code where the documentation is borderline — these are the charts
  that need human review

This workflow converts a medical record audit from a manual chart-by-chart review into a focused review of AI-flagged findings. A coder or auditor who might review 15-20 charts per day manually can review AI-pre-screened findings for 40-60 charts in the same time.
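The four finding categories map directly onto a triage rule for the human review queue. A minimal sketch; the category names come from the output spec above, while the routing rule itself is an assumption:

from enum import Enum

class AuditFinding(Enum):
    # The four finding categories from the audit prompt's output spec.
    SUPPORTED = "supported"
    QUERY_NEEDED = "query_needed"
    UNSUPPORTED = "unsupported"
    ADDITIONAL_CODE_OPPORTUNITY = "additional_code_opportunity"

def needs_human_review(finding: AuditFinding, borderline: bool) -> bool:
    # Routing rule (an assumption, not from the prompt): anything except a
    # clean SUPPORTED finding goes to the coder/auditor queue, and any
    # borderline finding goes there regardless of category.
    return borderline or finding is not AuditFinding.SUPPORTED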
Module 3 — Final Assessment
What is the critical principle governing all AI-assisted clinical documentation?
Why does the ICD-10 coding prompt distinguish between inpatient and outpatient settings?
What makes prior authorisation letter drafting a high-ROI use case for AI?
Why should AI-generated CDI queries follow ACDIS/AHIMA compliant query guidelines?