The regulatory landscape for AI in healthcare
Healthcare is the most heavily regulated industry in which AI is being deployed. Unlike other industries where AI adoption is largely a business decision, healthcare AI operates within a web of overlapping regulations: HIPAA governs data privacy, the FDA regulates clinical AI tools, CMS sets conditions for Medicare participation, the Joint Commission accredits hospitals, state medical boards govern clinical practice, and state attorneys general enforce consumer protection laws.
The good news: the administrative and operational AI covered in this course — documentation assistance, coding support, literature review, scheduling, compliance workflows — occupies a more clearly defined regulatory space than clinical AI. You are not building a diagnostic algorithm or a treatment recommendation engine. You are using AI to draft letters, summarise documents, extract data, and process paperwork.
The bad news: even administrative AI in healthcare must comply with HIPAA if it touches patient data, must align with your organisation's CMS conditions of participation, must not cross the line into clinical decision-making territory, and must be governed with the same rigour as any other tool that touches the patient record or the revenue cycle.
This module provides the compliance framework for deploying AI in healthcare operations. It is not legal advice — your compliance officer and legal counsel must be involved in every AI deployment decision. It is a structured guide to the questions you need to ask and the requirements you need to meet.
Who in your organisation would be responsible for approving AI tool deployment?
HIPAA compliance for AI tools — the complete framework
HIPAA compliance for AI tools is not optional, it is not a checklist you complete once, and it is not something your AI vendor handles for you. You — the covered entity or business associate — are responsible for ensuring HIPAA compliance in how you use AI tools with patient data.
Step 1: Determine if PHI is involved.
Not every AI workflow involves PHI. Literature review, policy drafting, communication templates, protocol analysis, and regulatory research do not require patient data. Start with these workflows — they require no HIPAA analysis.
For workflows that involve patient data, determine whether the data is PHI (contains any of the 18 HIPAA identifiers) or de-identified data (all 18 identifiers removed under Safe Harbor or Expert Determination methods).
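For a technical backstop on step 1, a lightweight screen can flag obviously identifier-bearing text before it ever reaches an AI tool. The sketch below is a hypothetical Python pre-flight check covering a few regex-detectable categories (SSNs, phone numbers, emails, dates, MRN-style strings); it cannot catch names or addresses, and it is no substitute for Safe Harbor de-identification or Expert Determination.

```python
import re

# Hypothetical pre-flight screen for a handful of the 18 Safe Harbor
# identifier categories. A regex scan only flags obvious patterns;
# it is NOT a de-identification method.
IDENTIFIER_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\(?\b\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d+", re.IGNORECASE),
}

def screen_for_identifiers(text: str) -> list[str]:
    """Return the identifier categories that appear to be present."""
    return [name for name, rx in IDENTIFIER_PATTERNS.items() if rx.search(text)]

sample = "Patient reachable at (555) 123-4567; follow-up on 03/14/2024."
hits = screen_for_identifiers(sample)
if hits:
    print(f"Possible PHI ({', '.join(hits)}): route to a BAA-covered workflow")
```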
Step 2: If PHI is involved, establish a BAA.
A Business Associate Agreement must be in place with the AI vendor before any PHI is processed. The BAA must address:
- Permitted uses and disclosures: What can the vendor do with the data? Processing only — no secondary use, no model training on your data, no aggregation with other clients' data.
- Safeguards: What technical, administrative, and physical safeguards does the vendor maintain? Encryption at rest and in transit, access controls, audit logging.
- Breach notification: What is the vendor's obligation if a breach occurs? HIPAA requires notification without unreasonable delay and no later than 60 days after discovery, but your BAA should require a shorter window (24-72 hours).
- Return or destruction of data: When the contract ends, what happens to the data? It should be returned or destroyed, with certification.
- Subcontractors: Does the AI vendor use subcontractors (cloud providers, infrastructure partners)? Each must also be bound by BAA terms.
Step 3: Implement the minimum necessary standard.
HIPAA's minimum necessary standard requires that you limit the PHI disclosed to the minimum necessary for the intended purpose. If you are using AI to draft a prior authorisation letter, you need the relevant clinical information — not the patient's entire lifetime medical record. Structure your workflows to provide only the data the AI needs for each specific task.
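One way to operationalise this, sketched below with hypothetical field names, is a per-task allowlist: each AI workflow gets an approved field set, and everything else is stripped before the data leaves your environment.

```python
# Illustrative "minimum necessary" filter: each AI task has an
# approved field list, and everything else is removed before the
# data is sent to the tool. Field names are hypothetical.
PRIOR_AUTH_FIELDS = {
    "diagnosis_codes", "requested_procedure",
    "relevant_history", "failed_therapies",
}

def minimum_necessary(record: dict, allowed: set[str]) -> dict:
    """Return only the fields approved for this specific task."""
    return {k: v for k, v in record.items() if k in allowed}

chart = {
    "diagnosis_codes": ["M54.16"],
    "requested_procedure": "MRI lumbar spine without contrast",
    "relevant_history": "6 weeks of conservative therapy without improvement",
    "failed_therapies": ["NSAIDs", "physical therapy"],
    "ssn": "xxx-xx-xxxx",           # never needed for a prior auth letter
    "full_visit_history": "...",    # outside minimum necessary
}

payload = minimum_necessary(chart, PRIOR_AUTH_FIELDS)  # 4 approved fields only
```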
Step 4: Audit and documentation.
Document every AI workflow that involves PHI: what data is used, what tool processes it, what the output is, who reviews it, and how the output is stored. This documentation is your evidence of compliance during an OCR (Office for Civil Rights) investigation or a breach review.
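One lightweight form this documentation can take is a structured, append-only log with one entry per AI run that touches PHI. A minimal sketch, with hypothetical workflow and tool names:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit entry for one AI workflow run involving PHI:
# data used, tool, output location, and human reviewer.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "workflow": "prior_auth_letter_draft",
    "tool": "vendor-llm-v2",                      # BAA-covered vendor
    "phi_fields_sent": ["diagnosis_codes", "requested_procedure"],
    "output_location": "ehr://drafts/PA-2024-0117",
    "reviewed_by": "jsmith",                      # authorised reviewer
    "review_action": "edited_and_approved",
}

# Append-only JSON Lines file: one line per run, easy to produce
# during an OCR investigation or breach review.
with open("ai_phi_audit.jsonl", "a") as log:
    log.write(json.dumps(entry) + "\n")
```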
Which HIPAA requirement do you find most challenging to implement for AI tools?
FDA guidance on AI/ML in healthcare — where the line is
The FDA's regulatory framework for AI in healthcare is the single most important boundary for anyone deploying AI in a health system or pharma company. Understanding where this boundary falls — and staying on the correct side of it — is non-negotiable.
What the FDA regulates: AI/ML tools that are intended to diagnose disease, treat or prevent disease, or affect the structure or function of the body are medical devices subject to FDA oversight. This includes:
- Clinical decision support software that provides specific diagnostic recommendations
- AI algorithms that interpret medical images (radiology, pathology, dermatology)
- AI tools that recommend specific treatments or medication adjustments for individual patients
- Predictive algorithms that generate patient-specific clinical risk scores used for treatment decisions
What this course covers — and what the FDA generally does not regulate as medical devices:
The FDA has articulated criteria for Clinical Decision Support (CDS) software that is NOT a medical device. Under Section 3060 of the 21st Century Cures Act, CDS software is excluded from the statutory device definition (and therefore from FDA device oversight) only if it meets ALL four criteria:
- It is not intended to acquire, process, or analyse a medical image or a signal from an in vitro diagnostic device or signal acquisition system
- It is intended for the purpose of displaying, analysing, or printing medical information
- It is intended for the purpose of supporting or providing recommendations to a healthcare professional
- The healthcare professional can independently review the basis for the recommendations
Administrative AI tools that draft documentation, suggest codes for coder review, process literature, and generate communication drafts generally fall outside FDA device regulation because they do not provide clinical diagnostic or treatment recommendations and the healthcare professional independently reviews all outputs.
However — the line can be blurry. An ICD-10 code suggestion tool is administrative. An AI tool that says "this patient likely has sepsis and you should start antibiotics" is clinical. A discharge summary drafter is administrative. An AI tool that recommends a specific post-discharge medication regimen is clinical.
The test: Is the AI output a clinical recommendation that could directly affect a specific patient's diagnosis or treatment? If yes, it is likely clinical AI subject to FDA oversight. If the AI is drafting, summarising, extracting, or processing documentation for human review, it is administrative.
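For governance committees that want the test in checklist form, the sketch below encodes the four Cures Act criteria as an explicit intake screen. It is an illustrative policy aid under stated assumptions (the field names are hypothetical), not a regulatory determination.

```python
# Illustrative intake checklist mirroring the Section 3060 criteria.
# All four must hold for the CDS exclusion; a "False" anywhere means
# the tool needs regulatory affairs review before deployment.
def meets_cds_exclusion(tool: dict) -> bool:
    return (not tool["analyses_image_or_signal"]
            and tool["displays_or_analyses_medical_info"]
            and tool["supports_hcp_recommendations"]
            and tool["hcp_can_review_basis"])

icd10_suggester = {
    "analyses_image_or_signal": False,        # text documentation only
    "displays_or_analyses_medical_info": True,
    "supports_hcp_recommendations": True,     # suggestions for coder review
    "hcp_can_review_basis": True,             # coder sees source documentation
}

print(meets_cds_exclusion(icd10_suggester))   # True: likely administrative
```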
When in doubt, consult your regulatory affairs team. The FDA is actively evolving its framework, and the line will continue to be refined through guidance documents and enforcement actions.
CMS conditions of participation and Joint Commission standards
If your organisation participates in Medicare (which covers virtually every US hospital and most healthcare providers), you must comply with CMS Conditions of Participation (CoPs). These CoPs establish baseline requirements for medical records, patient rights, quality assessment, and governance — all of which are affected by AI deployment.
Medical records requirements (42 CFR 482.24):
- Medical records must be "accurately written, promptly completed, properly filed and retained, and accessible"
- Only "authorised individuals" may make entries in the medical record
- If AI generates content that is incorporated into the medical record (such as a drafted clinical note or discharge summary), the authorised individual (physician, NP, PA) must review, edit as needed, and authenticate the entry
- The medical record must reflect who authored the content — if AI generated a first draft that a physician reviewed and signed, your documentation should make this attribution clear per your organisation's policy (see the sketch after this list)
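What that attribution might look like as note metadata is sketched below; the fields are hypothetical, and the specifics belong to your organisation's policy.

```python
# Hypothetical note metadata capturing AI-draft attribution. The
# authorised individual remains the authenticated author of record.
note_metadata = {
    "note_type": "discharge_summary",
    "drafted_by": "ai_assistant_v1",       # first draft only
    "reviewed_and_edited_by": "dr_patel",  # authorised individual
    "authenticated_by": "dr_patel",        # signature of record
    "ai_assistance_flag": True,            # per organisational policy
}
```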
Patient rights implications:
- CMS CoPs require that patients are informed about their care. If AI-generated content is used in patient communications, the organisation should have a policy on transparency about AI use
- Patient access to their medical records (and under HIPAA, their right to amend) applies to AI-assisted documentation just as it does to manually created documentation
Joint Commission standards add an accreditation layer:
- The Joint Commission requires a documented process for evaluating new technology before deployment — AI tools should go through your organisation's technology assessment process
- Quality assessment and performance improvement (QAPI) requirements apply: you should be measuring the quality of AI-assisted documentation and comparing it to your baseline
- Sentinel event policies apply: if an AI-related error contributes to a sentinel event, your root cause analysis must include the AI workflow
Quality reporting implications:
If you report HEDIS measures or CMS quality measures, or participate in MIPS/QPP, the accuracy of your clinical documentation directly affects your quality scores. AI-assisted documentation that improves specificity and completeness can improve quality measure performance — but only if the documentation is clinically accurate and reflects the patient's true condition.
How does your organisation currently evaluate new technology for clinical or operational use?
IRB documentation and clinical trial compliance
If your organisation conducts clinical research — whether as a sponsor, CRO, or investigative site — AI deployment in research workflows requires additional compliance considerations beyond HIPAA and FDA guidance.
Institutional Review Board (IRB) requirements:
The IRB protects human research subjects. Any use of AI in clinical trial workflows should be evaluated for IRB implications:
- Protocol documents: If AI assists in drafting or reviewing the protocol, this is a development tool — the IRB reviews the final protocol regardless of how it was created. No specific IRB notification is typically required for using AI as a drafting tool, but your organisation should have a policy.
- Informed consent forms: AI can draft informed consent language, but the content must be reviewed by the IRB and must meet all requirements of 21 CFR 50.25. If AI is used in any aspect of the trial that affects participants, this should be disclosed in the consent form.
- Adverse event processing: If AI is used to process or triage adverse event reports (as covered in Module 4), the IRB and sponsor should be aware that AI-assisted processing is part of the safety reporting workflow. The medical review remains human.
- Data handling: If AI tools process identifiable research subject data, this is a use of identifiable private information that may need to be addressed in the IRB protocol and the data use agreement.
Good Clinical Practice (GCP) considerations under ICH E6(R2):
- Source data verification: If AI generates or processes data that becomes part of the clinical trial record, source data verification processes must account for the AI step. Monitors and auditors need to be able to trace from the AI output back to the source data.
- Training documentation: GCP requires that all trial staff are qualified for their roles. If staff are using AI tools as part of their trial responsibilities, training on the AI tool should be documented.
- Audit trails: 21 CFR Part 11 (electronic records, electronic signatures) requirements apply to any AI-generated content that becomes part of the regulated record. The system must maintain a complete audit trail showing the original AI output, any modifications, and the final authenticated version (a minimal sketch follows this list).
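The sketch below shows one shape such a trail might take; the hash chain is an assumption, one common way to make each retained version tamper-evident from the original AI draft through to the authenticated entry.

```python
import hashlib
from dataclasses import dataclass, field

# Sketch of a tamper-evident audit trail for AI-drafted content:
# every version is retained with author and action, and each entry's
# digest covers the previous digest, so edits cannot be silently
# dropped between the AI draft and the signed version.
@dataclass
class TrailEntry:
    author: str
    action: str      # "ai_draft" | "edit" | "signature"
    content: str
    prev_hash: str
    digest: str = field(init=False)

    def __post_init__(self):
        payload = f"{self.prev_hash}|{self.author}|{self.action}|{self.content}"
        self.digest = hashlib.sha256(payload.encode()).hexdigest()

trail: list[TrailEntry] = []

def append_version(author: str, action: str, content: str) -> None:
    prev = trail[-1].digest if trail else "genesis"
    trail.append(TrailEntry(author, action, content, prev))

append_version("vendor-llm", "ai_draft", "Draft narrative ...")
append_version("dr_lee", "edit", "Corrected onset date ...")
append_version("dr_lee", "signature", "Final authenticated narrative ...")
```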
Adverse event reporting obligations:
Whether or not AI processes individual case safety reports (ICSRs), the regulatory reporting obligations remain unchanged:
- Serious unexpected adverse reactions: 7 calendar days for fatal or life-threatening cases, 15 calendar days for all others (ICH E2A)
- Annual safety reports (DSUR) under ICH E2F
- Post-marketing periodic safety update reports (PSURs) under ICH E2C
- MedWatch reporting to FDA for serious adverse events
AI accelerates the processing but does not change the timeline obligations or the requirement for medical review of every case.
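Because the clocks above are unforgiving, due dates are worth computing rather than estimating. A minimal sketch, assuming day 0 is first awareness of the case and calendar days throughout:

```python
from datetime import date, timedelta

# Expedited report due dates per ICH E2A: 7 calendar days for
# fatal/life-threatening unexpected reactions, 15 for other serious
# unexpected reactions. Day 0 is first awareness of the case.
def expedited_due_date(first_awareness: date, fatal_or_life_threatening: bool) -> date:
    days = 7 if fatal_or_life_threatening else 15
    return first_awareness + timedelta(days=days)

print(expedited_due_date(date(2024, 3, 1), True))   # 2024-03-08
print(expedited_due_date(date(2024, 3, 1), False))  # 2024-03-16
```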
State-specific regulations and emerging AI governance frameworks
Beyond federal regulations, state-level requirements add complexity — particularly for health systems operating across multiple states:
State health data privacy laws: Several states have enacted health data privacy laws that go beyond HIPAA. Washington's My Health My Data Act, California's CCPA/CPRA with health data provisions, and Connecticut's health data privacy law all impose additional requirements. If you operate in these states, your AI data handling must comply with both HIPAA and state requirements — and where state law is more restrictive, state law governs.
State medical practice acts: Some state medical boards are beginning to address AI in clinical practice. If AI-generated documentation is used in patient care, ensure your organisation's policy aligns with your state medical board's position on AI in clinical documentation.
State prior authorisation reform: Multiple states have enacted prior authorisation reform legislation that imposes timelines, transparency requirements, and automation standards on health plans. If your organisation uses AI for prior auth workflows, ensure you are tracking the requirements in every state where you operate.
The administrative vs clinical AI distinction — reinforced:
This distinction bears repeating because it is the single most important compliance boundary in healthcare AI:
Administrative AI (what this course covers): Assists with documentation, coding, scheduling, literature review, data extraction, communication drafting, compliance workflows. Outputs are reviewed by humans before any action is taken. Generally not FDA-regulated as a medical device. Must comply with HIPAA if PHI is involved.
Clinical AI (not covered in this course): Provides diagnostic support, treatment recommendations, risk predictions used for clinical decisions, medical image interpretation. Subject to FDA device regulation. Requires extensive clinical validation. May need to be integrated into clinical decision support governance under CMS CoPs.
If you are ever uncertain which category a proposed AI use case falls into, apply this test: Would a reasonable patient expect that a licensed healthcare professional, not an AI system, made this determination? If yes, it is clinical AI. If the AI is handling paperwork, processing, and drafting, it is administrative.
Has your organisation developed an AI governance policy that addresses the administrative vs clinical AI distinction?
Module 6 — Final Assessment
Under HIPAA, what is required before PHI can be processed through an AI tool?
Under the 21st Century Cures Act, when is Clinical Decision Support software NOT regulated as an FDA medical device?
What is the test for determining whether an AI use case is 'administrative' or 'clinical'?
How do CMS Conditions of Participation affect AI-assisted clinical documentation?