You don't need to build the engine to drive the car
This module is not a computer science lecture. You do not need to understand neural networks or GPU architecture.
What you do need is a working mental model — an understanding of what AI is good at, what it is bad at, and why.
Which is the best analogy for your relationship with AI?
What an LLM actually does
An LLM is a pattern recognition system trained on enormous amounts of text. Imagine a quant analyst who has read every public filing, research report, news article, and textbook ever published — and developed an extraordinarily refined sense of how language works.
That is roughly what an LLM is. It does not "know" things the way a human does. It recognises patterns and generates responses that follow those patterns.
Excellent at:
- Synthesis — combining information from multiple sources
- Pattern recognition — identifying themes and anomalies in text
- First drafts — well-structured memos, analyses, presentations
- Structured extraction — pulling specific data from unstructured documents
Unreliable at:
- Current information (training data has a cutoff date)
- Precise calculation (it is a language model, not a calculator)
- Factual claims without source material ("hallucinations": invented details stated with full confidence)
The context window — your key advantage
The context window is how much information AI can hold in working memory at once. Think of it as the stack of documents you can hand to your AI analyst.
Current models handle ~200,000 tokens in a single conversation. A token is roughly three-quarters of a word, so that is about 150,000 words, or roughly 500 pages of dense prose.
| Size | What it holds | Use case |
|---|---|---|
| Small (~8K) | A few pages | Quick Q&A |
| Medium (~32K) | A short report | Single earnings call |
| Large (~200K) | ~500 pages | Full prospectus review |
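The table above is easy to sanity-check yourself. A minimal sketch, using the common heuristics of ~1.33 tokens per word and ~300 words per page (real tokenisers vary by document, so treat these as estimates, not guarantees):

```python
# Rough capacity check: will a document fit in a model's context window?
# Heuristic assumptions (not exact): 1 token ~ 0.75 words, ~300 words/page.

TOKENS_PER_WORD = 4 / 3   # ~1.33 tokens per English word
WORDS_PER_PAGE = 300      # a typical page of dense prose

def estimated_tokens(pages: int) -> int:
    """Estimate the token count of a document of the given page length."""
    return round(pages * WORDS_PER_PAGE * TOKENS_PER_WORD)

def fits_in_context(pages: int, window: int = 200_000) -> bool:
    """True if the document likely fits in a window of the given size."""
    return estimated_tokens(pages) <= window

print(estimated_tokens(500))   # ~200,000 tokens: a full prospectus
print(fits_in_context(500))    # True, right at the limit
print(fits_in_context(600))    # False: split the document or summarise first
```

If a document does not fit, split it by section or summarise the less relevant parts first rather than truncating blindly.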
Key insight: the quality of AI output tracks the quality of the context you provide. Give it the actual document; don't ask it to work from memory.
What to trust and what to verify
High confidence — Summarising long documents, comparing contracts, drafting routine communications, extracting structured data.
Medium confidence — Analysis and interpretation, market research synthesis, financial statement analysis. Useful but verify the reasoning.
Low confidence — Numerical calculations, predictions, legal advice. Use Excel for numbers. The judgment is yours.
Your team used AI to analyse earnings transcripts. The AI flagged 15% revenue growth guidance. Should you trust this?
Data privacy — the straightforward answer
Enterprise AI plans (Claude for Enterprise, ChatGPT Enterprise):
- Your data is not used to train the model. Full stop.
- Encrypted in transit and at rest
- Access controls and audit logs
- Data residency options for regulated environments
The real risk: Someone at your firm using a free personal AI account with client data. Free-tier services may use inputs for training and lack enterprise controls.
AI data privacy is a solved problem at the enterprise tier. The risk comes from uncontrolled shadow usage — which happens when firms are slow to provide sanctioned alternatives.
Module 2 — Final Assessment
What is the best analogy for an LLM's context window?
Which task should you NOT trust an LLM to do reliably?
What is the primary data privacy concern with AI in financial services?