The governance gap
Here is a statistic that should concern every senior leader in financial services: 75% of firms plan to deploy AI agents, but only 21% report having mature AI governance in place.
That gap — between ambition and oversight — is where regulatory risk lives.
This module is not about slowing down AI adoption. It is about making sure your adoption is sustainable, defensible, and does not create the kind of exposure that keeps compliance officers awake at night.
What is the biggest risk created by the governance gap?
The regulatory landscape
Financial services regulators globally have converged on a consistent message: they are not banning AI, but they expect you to govern it.
Key regulatory developments:
| Regulation | Jurisdiction | Impact on Financial Services |
|---|---|---|
| EU AI Act | European Union | Classification of AI systems by risk level. High-risk applications (credit scoring, fraud detection) require conformity assessments, transparency, and human oversight. |
| SR 11-7 (Model Risk Management) | United States (Fed/OCC) | Existing model risk guidance is being applied to AI systems. Firms must validate, document, and monitor AI models used in decision-making. |
| FCA AI Update | United Kingdom | Principles-based approach. Firms must ensure AI outputs are explainable, fair, and auditable. Consumer Duty applies to AI-driven customer interactions. |
| MAS Guidelines | Singapore | Principles on Fairness, Ethics, Accountability, and Transparency (FEAT) for AI in financial services. |
You do not need to memorise these regulations. The common thread is this: regulators want to know that a human is accountable for every AI-influenced decision, and that you can explain how the AI contributed to that decision.
This is not fundamentally different from how you govern any other tool or process. The challenge is that AI adoption is moving faster than most governance frameworks can adapt.
Layer 1 — Use case classification
A practical governance framework for AI in financial services has four layers. The first is classification — not all AI use cases carry the same risk.
Low risk — Internal productivity (drafting emails, summarising meeting notes, formatting presentations). Minimal governance needed beyond basic data handling policies.
Medium risk — Analysis that informs decisions (earnings analysis, market research, comp tables). Requires human review before any output is acted upon.
High risk — Direct decision support (credit scoring, trade execution support, compliance monitoring). Requires formal validation, audit trails, and model risk management.
Prohibited — Fully automated decisions on client outcomes without human review (in most jurisdictions).
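To make the tiers concrete, here is a minimal sketch of how a use case register might encode them. The `RiskLevel` enum, the category names, and the default-to-high rule are illustrative assumptions, not a prescribed implementation.

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"                # internal productivity
    MEDIUM = "medium"          # analysis that informs decisions
    HIGH = "high"              # direct decision support
    PROHIBITED = "prohibited"  # fully automated client-outcome decisions

# Illustrative register mapping use-case categories to risk tiers.
# In practice this would be maintained by risk and compliance.
USE_CASE_RISK = {
    "email_drafting": RiskLevel.LOW,
    "meeting_summary": RiskLevel.LOW,
    "earnings_analysis": RiskLevel.MEDIUM,
    "credit_scoring_support": RiskLevel.HIGH,
    "automated_client_decision": RiskLevel.PROHIBITED,
}

def classify_use_case(category: str) -> RiskLevel:
    """Unknown use cases default to HIGH so they receive full review
    rather than slipping through ungoverned."""
    return USE_CASE_RISK.get(category, RiskLevel.HIGH)

print(classify_use_case("meeting_summary"))  # RiskLevel.LOW
```

Defaulting unknown categories to high risk is the safer design choice: a new use case earns its way down the ladder rather than starting out unclassified.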
An analyst uses AI to summarise an earnings transcript before sharing key takeaways with the team. What risk level is this?
Layer 2 — Approval workflow
Before a team deploys a new AI use case, it should go through a lightweight approval process:
- What is the use case? — One paragraph description
- What data is involved? — Client data? Proprietary data? Public data?
- What is the risk level? — Using the classification above
- Who reviews the output? — Named individual accountable for AI-assisted decisions
- What is the fallback? — If the AI tool is unavailable, how does the work get done?
This does not need to be bureaucratic. For low-risk use cases, a team lead sign-off is sufficient. For high-risk use cases, compliance and risk should be involved.
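One way to operationalise this checklist is to record each request as a small structured object, so nothing gets approved with a blank field. The field names and the sign-off rule in the sketch below are assumptions for illustration, not a mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseApproval:
    description: str       # one-paragraph use case description
    data_involved: str     # client, proprietary, or public data
    risk_level: str        # from the Layer 1 classification
    output_reviewer: str   # named individual accountable for outputs
    fallback: str          # how the work gets done if the tool is down
    sign_offs: list[str] = field(default_factory=list)

    def is_approved(self) -> bool:
        """Team lead sign-off suffices for low risk; high risk also
        needs compliance and risk (an illustrative rule)."""
        if self.risk_level == "high":
            return {"team_lead", "compliance", "risk"} <= set(self.sign_offs)
        return "team_lead" in self.sign_offs
```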
Layer 3 — Data handling rules
| Data Type | AI Usage | Controls Required |
|---|---|---|
| Public market data | Any AI tool | Standard |
| Internal analysis | Enterprise AI only | No consumer tools |
| Client PII | Enterprise AI with data classification | Encryption, access logging, retention limits |
| Material non-public information (MNPI) | Restricted | Information barrier controls apply to AI as they do to email |
| Regulated data (KYC, AML) | Enterprise AI with audit trail | Full logging, explainability requirements |
The golden rule: AI tools should be subject to the same data handling rules as email and shared drives. If you would not email a document to a personal Gmail account, you should not paste it into a personal AI account.
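The golden rule also lends itself to mechanical enforcement. The sketch below gates a send or upload action on data classification; the tool categories and labels mirror the table above but are otherwise assumed for illustration.

```python
# Which tool categories may process each data classification.
# A real deployment would wire this into DLP tooling, not a dict.
ALLOWED_TOOLS = {
    "public_market_data": {"consumer_ai", "enterprise_ai"},
    "internal_analysis": {"enterprise_ai"},
    "client_pii": {"enterprise_ai"},       # plus encryption and logging
    "mnpi": set(),                         # restricted: barriers apply
    "regulated_kyc_aml": {"enterprise_ai"},
}

def may_send(data_class: str, tool: str) -> bool:
    """Deny by default: unknown classifications go nowhere."""
    return tool in ALLOWED_TOOLS.get(data_class, set())

assert may_send("public_market_data", "consumer_ai")
assert not may_send("client_pii", "consumer_ai")  # the scenario below
```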
A colleague pastes client PII into a free consumer AI chatbot to speed up a task. What is the problem?
Layer 4 — Monitoring and audit
For medium and high-risk use cases, maintain:
- Usage logs — who used AI for what, when (most enterprise AI plans provide this automatically)
- Output sampling — periodic review of AI outputs for quality and accuracy
- Incident tracking — log cases where AI output was materially wrong or misleading
- Annual review — revisit governance framework against evolving regulations
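As a minimal sketch of what the first two items could look like in practice, the structure below logs usage and samples outputs for review. The field names and the 5% sampling rate are illustrative assumptions.

```python
import random
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIUsageRecord:
    user: str
    use_case: str
    risk_level: str
    timestamp: datetime
    output_flagged: bool = False  # True if output was materially wrong

def sample_for_review(records: list[AIUsageRecord],
                      rate: float = 0.05) -> list[AIUsageRecord]:
    """Always review flagged incidents, plus a random slice
    of the remainder for quality sampling."""
    flagged = [r for r in records if r.output_flagged]
    rest = [r for r in records if not r.output_flagged]
    k = max(1, int(len(rest) * rate)) if rest else 0
    return flagged + random.sample(rest, k)
```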
These four layers — classification, approval, data handling, and monitoring — give you a complete, practical governance framework. It is lightweight enough to not slow teams down, but robust enough to satisfy regulators.
Vendor evaluation
When evaluating AI platforms for your firm, here is what matters.
Non-negotiable requirements:
- SOC 2 Type II compliance — standard for enterprise SaaS
- Data not used for training — your conversations and documents must not be used to improve the model
- Encryption — in transit and at rest
- Access controls — SSO, role-based permissions, admin dashboard
- Audit logs — exportable records of all AI usage
Important but negotiable:
- Data residency — can you choose where data is processed? Important for EU-based firms under GDPR.
- Custom retention policies — can you set how long conversations are stored?
- API access — for integration into your own systems
- On-premises deployment — some firms require this; cloud is acceptable for most firms with proper controls.
Questions to ask vendors:
- "If we upload a confidential prospectus, where is that data stored and for how long?"
- "Can you provide a data processing agreement compliant with [your jurisdiction]?"
- "What happens to our data if we terminate the contract?"
- "How do you handle a data breach involving our content?"
- "Can we get an independent security audit report?"
Responsible AI in practice
Beyond compliance, responsible AI use is about building trust — with your clients, your regulators, and your own team.
The human-in-the-loop principle: Every AI output that informs a client-facing decision should be reviewed by a qualified human. This is not just a regulatory requirement — it is common sense. AI can produce a powerful first draft, but the judgment, context, and accountability belong to you.
Bias awareness: AI models can reflect biases present in their training data. In financial services, this matters most in credit decisions, client risk scoring, and hiring and talent assessment. For these use cases, actively test for bias and document your testing process.
Transparency with clients: If AI contributed materially to work product delivered to a client, consider whether disclosure is appropriate. Regulatory guidance on this is evolving, but the trend is toward transparency.
Key takeaways
- Governance is not the enemy of speed — a lightweight framework actually accelerates adoption by removing ambiguity and giving teams confidence to move.
- Classify use cases by risk — not all AI usage needs the same level of oversight.
- Apply existing data rules — AI should be subject to the same policies as email and document management.
- Evaluate vendors rigorously — enterprise controls are non-negotiable for regulated firms.
- Keep humans accountable — AI assists decisions; humans make them.
With governance in place, you are ready for the Practitioner tier. Next: Claude as your AI analyst — how to use AI for document analysis, research, and financial modelling.
Module 4 — Knowledge Check
What is the biggest gap in enterprise AI adoption according to governance data?
What is the golden rule for AI data handling in financial services?
Which of these is NOT a non-negotiable requirement when evaluating an AI vendor for a financial services firm?