A plan that starts Monday morning
You have a readiness report. It tells leadership where you stand, what to do, and what it costs. But even the best report is just a document until someone acts on it.
This module turns your report into a 30/60/90-day action plan — the operational backbone behind Section 5 of your readiness report. Whether leadership gives you a budget tomorrow or tells you to prove the concept on your own, you will know exactly what to do in the first week, the first month, and the first quarter.
The plan is structured around your readiness score from Module 2. Where your firm starts determines what the first 30 days look like.
What is the most common reason AI initiatives stall after the readiness assessment?
Days 1-30 — Foundation
The first month is about getting the prerequisites in place. No flashy AI demos. No firm-wide announcements. Just the groundwork that makes everything else possible.
Week 1-2: Procurement and governance
- Procure enterprise AI licences for the pilot team (5-10 people). Enterprise tier — not consumer accounts. This matters for data privacy, audit trails, and compliance.
- Draft a one-page AI acceptable use policy. Cover: what data can be used with AI, what requires human review, where outputs can and cannot go. Keep it simple. You can refine it later.
- Get sign-off from compliance on the use policy and the pilot scope.
Week 3-4: Team selection and baseline
- Identify the pilot team. Choose people who are motivated, not resistant. Forcing reluctant users into a pilot poisons the data.
- Select your #1 workflow (the highest VALUE score from Module 4).
- Measure the baseline: how long does this workflow take today? Track 5-10 instances. You need this data to prove improvement later.
- Set up the shared workspace: AI project with relevant documents, initial prompt templates, clear instructions.
Deliverable at Day 30: Licences active, governance in place, pilot team identified, baseline measured, workspace configured.
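The baseline measurement above amounts to a small log plus an average. A minimal sketch in Python, where the instance count and timings are illustrative placeholders, not figures from the course:

```python
# Baseline log for one workflow, times in minutes per instance.
# These seven timings are hypothetical examples — record your own.
from statistics import mean, stdev

baseline_minutes = [135, 150, 120, 165, 140, 130, 155]

avg = mean(baseline_minutes)
spread = stdev(baseline_minutes)  # how consistent the workflow is today

print(f"Instances tracked: {len(baseline_minutes)}")
print(f"Average time:      {avg:.0f} min ({avg / 60:.1f} h)")
print(f"Std deviation:     {spread:.0f} min")
```

Tracking the spread as well as the average is useful later: a workflow with wildly varying times needs more pilot instances before the before/after comparison is credible.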
Days 31-60 — Pilot
This is where you run AI on real work and measure what happens. The goal is not to prove that AI is amazing. The goal is to collect honest data on what works, what doesn't, and what the actual time savings are.
Week 5-6: Guided usage
- Run the pilot team through the #1 workflow using AI. Walk alongside them for the first two to three instances. Answer questions in real time.
- Track time per instance — both the AI-assisted time and the human review time. Be precise. "About two hours" is not data. "2 hours 15 minutes" is.
- Collect qualitative feedback: What worked well? What was frustrating? What did the AI get wrong?
Week 7-8: Independent usage and measurement
- Step back and let the team work independently. This is the real test — does it work when you are not standing over their shoulder?
- Continue tracking time per instance. Compare to baseline.
- Document at least one "save" — something the AI caught that a human might have missed — and one "miss" — something the AI got wrong that required correction.
Deliverable at Day 60: Before/after time data, qualitative feedback, documented saves and misses, preliminary ROI calculation using actual (not estimated) numbers.
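The preliminary ROI calculation is simple arithmetic. A sketch in Python, where every figure is an assumed placeholder to be replaced with your pilot's actual numbers:

```python
# Preliminary ROI from pilot data. All values below are hypothetical.
baseline_hours = 6.0        # average time per instance before AI (from Day-30 baseline)
assisted_hours = 2.25       # AI-assisted time, including human review
instances_per_month = 20    # how often this workflow runs
loaded_hourly_cost = 150    # fully loaded cost of the person doing the work, in $
monthly_licence_cost = 500  # enterprise AI licences for the pilot team, in $

hours_saved = (baseline_hours - assisted_hours) * instances_per_month
gross_saving = hours_saved * loaded_hourly_cost
net_saving = gross_saving - monthly_licence_cost
roi_pct = net_saving / monthly_licence_cost * 100

print(f"Hours saved per month: {hours_saved:.0f}")
print(f"Net monthly saving:    ${net_saving:,.0f}")
print(f"ROI on licence spend:  {roi_pct:.0f}%")
```

Note that `assisted_hours` must include human review time, per the Week 5-6 tracking guidance — counting only the AI's time inflates the result.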
Why is it important to document AI 'misses' alongside 'saves' during the pilot?
Days 61-90 — Scale
You have pilot data. Real numbers, real feedback, real examples of AI-assisted work product. Now you scale — carefully.
Week 9-10: Present results and get expansion approval
- Update your readiness report with actual pilot data replacing the estimates.
- Present to leadership: "We ran a 4-week pilot. Here are the results." Use the five-minute format from Module 7.
- Request approval to expand to 2-3 additional teams or workflows.
Week 11-12: Expand and systematise
- Onboard the next teams. Use the pilot team members as mentors — they have real experience to share.
- Build out the prompt library based on what actually worked during the pilot. Not theoretical best practices — tested templates.
- Establish a feedback channel: how do teams report issues, share wins, and request new templates?
- Begin tracking firm-wide metrics: number of active users, hours saved per week, workflows covered.
Deliverable at Day 90: Updated readiness report with real data, expanded deployment to 2-3 teams, shared prompt library, feedback process, firm-wide tracking dashboard.
The flight instructor model
Do not run a firm-wide training session. It does not work. People sit through it, nod, and go back to doing things the old way.
Instead, use the flight instructor model: train 2-3 champions intensively, then have them train everyone else through hands-on pairing.
How it works:
- Select 2-3 champions per team. These are the people who are already curious about AI, who volunteer for the pilot, who ask "what if we tried...?" They do not need to be the most senior — enthusiasm matters more than seniority.
- Train them deeply. Not a one-hour overview. Spend real time — a full day or two focused sessions — on your firm's specific workflows, the prompt library, common pitfalls, and when to trust vs verify AI output.
- Pair them with team members. Champions sit with colleagues and work through real tasks together. "Let me show you how I do this" is 10x more effective than a slide deck about how AI works.
- Give them a channel. Champions need a way to share what they learn, ask each other questions, and escalate issues. A dedicated Slack channel or Teams group works.
Why this works in financial services: Your teams already learn through apprenticeship. A first-year analyst learns by sitting next to a third-year analyst. The flight instructor model uses the same dynamic — someone who knows shows someone who doesn't, on real work.
If leadership says no
It happens. Maybe the budget isn't there. Maybe compliance has concerns. Maybe the timing is wrong. Here is what to do.
Don't argue. Start small.
You do not need a budget to prove AI value. You need one workflow, one AI tool (most have free or low-cost individual tiers), and your own time.
- Use AI on your own work for 2-4 weeks. Track the time savings yourself. Document before-and-after examples.
- Share results informally. Show a colleague: "This DD first-pass used to take me 6 hours. I did it in 2 with AI. Look at the output." Peer-to-peer evidence is more persuasive than top-down presentations.
- Build a small coalition. Get 2-3 colleagues doing the same. Now you have data from multiple people on multiple tasks.
- Come back with evidence. "I've been using AI on my own work for a month. Here's what happened." This is a much stronger pitch than a theoretical readiness report.
Address the real objections:
- "It's not secure." → Enterprise tiers don't train on your data. Show them the security documentation.
- "We don't have budget." → Individual licences are $20/month. Ask if you can expense it as a professional development tool.
- "Our data is too sensitive." → Start with workflows that use public data: earnings transcripts, regulatory filings, market research.
- "We need to study this more." → "I studied it. Here are my results. Can I have six weeks and $2,000 to run a proper pilot?"
If leadership says yes
Congratulations. Now resist the urge to boil the ocean.
The most common failure mode after getting AI budget approval is trying to do everything at once. Leadership said yes. You are excited. You want to deploy across every team, integrate with every system, and transform every workflow. This path leads to a stalled project, blown budget, and a "we tried AI and it didn't work" narrative that takes years to overcome.
Instead, follow the plan:
- Start with one workflow. Your #1 VALUE-scored workflow. Not three. Not five. One.
- Start with one team. The team that is most motivated. Not the team that "needs it most."
- Show results in 30 days. Early results create momentum and justify continued investment. Delayed results create doubt.
- Expand based on evidence, not ambition. "We proved X on workflow A with team B — now we want to do the same with workflows C and D" is how successful AI programmes grow.
The 90-day trap: Many firms get approval for a 90-day programme and spend 60 days on procurement, governance, and planning. By Day 90, they have no results to show. Front-load the action. Get people using AI in Week 2, not Month 2.
You have budget approval and executive sponsorship. What is the biggest risk to your AI initiative now?
Self-assessment — your path forward
You are almost done. Before the final quiz, take a moment to identify the one thing most likely to stop you from acting on this plan.
What is the biggest obstacle between you and executing this 30/60/90-day plan?
What you have built
Over eight modules, you have produced a complete AI readiness package for your firm:
- Readiness Scorecard (Module 2) — A quantified assessment of where your firm stands across five dimensions.
- Workflow Inventory (Module 3) — A comprehensive list of your team's recurring, document-heavy, time-consuming workflows.
- VALUE Scores (Module 4) — A prioritised ranking of which workflows to automate first, based on volume, automability, labour cost, urgency, and error impact.
- Tool Category Matches (Module 5) — For each priority workflow, which category of AI tool you need.
- ROI Calculations (Module 6) — Conservative and optimistic projections with real (or estimated) numbers from your firm.
- One-Page Readiness Report (Module 7) — A decision document for leadership with five sections: current state, opportunities, tools, ROI, and next step.
- 30/60/90-Day Action Plan (Module 8) — The operational detail behind your recommendation, with contingencies for every leadership response.
This is not a theoretical exercise. If you filled in the templates and ran the numbers, you are holding a credible, actionable AI strategy. Most firms pay consultants $50,000-$150,000 for this deliverable. You built it yourself, grounded in your firm's actual workflows and data.
The only thing left is to act on it.
Want SupaMakers to run this audit for your firm?
You have done the self-assessment. If you want an expert-facilitated version — with interviews across your teams, benchmarking against peer firms, hands-on tool evaluation, and a presentation-ready report — SupaMakers runs AI Readiness Audits for financial services firms.
We work with hedge funds, investment banks, PE firms, and VC firms to:
- Conduct structured interviews with your teams to surface workflows you might miss
- Benchmark your readiness against firms of similar size and strategy
- Evaluate specific tools against your compliance and infrastructure requirements
- Produce a board-ready readiness report with implementation roadmap
- Stand up your first AI pilot and measure results
Get in touch to discuss a facilitated audit for your firm.
Final Course Quiz
What is the primary goal of the first 30 days in the action plan?
What is the 'flight instructor' model for AI training?
If leadership says no to your AI proposal, what should you do?
What is the biggest risk after getting budget approval for AI?