The pilot-to-production chasm
The single biggest failure mode in enterprise AI is the pilot-to-production chasm. Only 25% of firms have moved more than 40% of their AI pilots into production use. The rest are stuck in an endless cycle of experimentation that never scales.
This module gives you the playbook for crossing that gap — systematically, without the false starts that characterise most enterprise AI programmes.
Has your firm successfully scaled an AI pilot into production use?
Why pilots fail to scale
Reason 1: No workflow redesign. The pilot proved AI could summarise documents faster. But nobody redesigned the due-diligence (DD) workflow to take advantage of it, so the AI sits as an optional tool that some people use occasionally.
Fix: Every pilot should include a proposed workflow change, not just a tool demonstration.
Reason 2: No champion after the pilot. The pilot was led by an enthusiastic VP. They got promoted, moved to a new team, or got pulled onto a deal. The pilot results sit in a deck that nobody reads.
Fix: Identify a permanent owner for AI adoption. This does not need to be a full-time role, but it needs to be someone's explicit responsibility.
More reasons pilots stall
Reason 3: IT gatekeeping. The pilot ran on personal or team-level AI accounts. Scaling requires enterprise deployment, which means IT procurement, security review, and integration. This takes months if not started early.
Fix: Involve IT and compliance from day one of the pilot, not after it succeeds.
Reason 4: No measurement. "Everyone says it's great" is not evidence. Without before/after data on specific metrics, there is no business case for scaling.
Fix: Define metrics before the pilot starts. Measure them during. Present them after.
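"Define metrics before, measure during, present after" can be as lightweight as a before/after comparison on one metric. A minimal sketch, assuming hours-per-task is the metric you chose; all figures here are placeholder data, not pilot results:

```python
from statistics import mean

# Placeholder timings (hours per DD document review).
# In a real pilot these come from your own time tracking.
baseline_hours = [6.0, 5.5, 7.0, 6.5]   # measured before the pilot
pilot_hours = [2.5, 3.0, 2.0, 2.5]      # measured during the pilot

def time_saving(before: list[float], after: list[float]) -> dict:
    """Summarise a before/after metric the way the business case will need it."""
    b, a = mean(before), mean(after)
    return {
        "baseline_avg": round(b, 2),
        "pilot_avg": round(a, 2),
        "hours_saved_per_task": round(b - a, 2),
        "reduction_pct": round(100 * (b - a) / b, 1),
    }

print(time_saving(baseline_hours, pilot_hours))
# {'baseline_avg': 6.25, 'pilot_avg': 2.5, 'hours_saved_per_task': 3.75, 'reduction_pct': 60.0}
```

The point is not the arithmetic; it is that the baseline list exists before the pilot starts, so "everyone says it's great" never has to stand in for evidence.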
Which of these is the most common reason AI pilots fail to scale?
Data strategy for AI
Your AI is only as good as the data it can access. Most firms have data problems that limit AI effectiveness:
| Problem | Impact on AI | Solution |
|---|---|---|
| Data in silos | AI cannot access information across systems | MCP integration, data warehouse |
| Poor data quality | AI produces inaccurate analysis from inaccurate inputs | Data cleaning and validation processes |
| No data catalogue | Nobody knows what data exists or where | Inventory your data assets |
| Inconsistent formats | AI spends tokens on parsing instead of analysing | Standardise data formats |
| Access restrictions | AI cannot reach data it needs for analysis | Review and update access policies |
The good news: you do not need to solve all of these before deploying AI.
Connect, don't centralise
You do not need to build a massive data lake before deploying AI. The modern approach — enabled by MCP and similar integration standards — is to connect AI to data where it already lives.
This means your CRM stays in Salesforce, your documents stay in iManage or SharePoint, your financial data stays in your portfolio management system — and AI connects to each system as needed, with appropriate permissions.
This is faster to implement, lower risk, and does not require a multi-year data migration project.
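The pattern is easy to sketch in plain code: the AI-facing layer fans one question out to thin, read-only connectors, and the data never moves. The class names below are hypothetical illustrations, not a real MCP SDK; in production, MCP servers with proper permissions play the connector role:

```python
from typing import Protocol

class DataConnector(Protocol):
    """A thin, read-only view onto a system of record — the 'connect' idea, sketched."""
    def search(self, query: str) -> list[str]: ...

class CrmConnector:
    """Stands in for a live CRM query (e.g. Salesforce). Hypothetical."""
    def __init__(self, records: dict[str, str]):
        self.records = records
    def search(self, query: str) -> list[str]:
        return [v for k, v in self.records.items() if query.lower() in k.lower()]

class DocumentConnector:
    """Stands in for document-store search (e.g. iManage, SharePoint). Hypothetical."""
    def __init__(self, docs: dict[str, str]):
        self.docs = docs
    def search(self, query: str) -> list[str]:
        return [text for title, text in self.docs.items() if query.lower() in title.lower()]

def gather_context(connectors: list[DataConnector], query: str) -> list[str]:
    """Fan one question out to every connected system; nothing is migrated."""
    results: list[str] = []
    for connector in connectors:
        results.extend(connector.search(query))
    return results
```

Adding a new source is one more connector, not a data-migration project, which is exactly why this approach scales faster than a data lake.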
| Data Source | AI Use Case | Priority |
|---|---|---|
| Document management | DD review, research synthesis | High |
| CRM | Deal pipeline, client context | High |
| Financial data feeds | Analysis, monitoring | High |
| Email/messaging | Context retrieval, communication drafting | Medium |
| Portfolio management | Monitoring, reporting | Medium |
| HR/talent systems | Resource planning | Lower |
What is the recommended approach to data strategy for AI?
Building an AI Centre of Excellence
For firms serious about scaling AI, a centre of excellence (CoE) provides structure without bureaucracy. A CoE handles:
- Governance: Maintains the AI use case classification, vendor relationships, and compliance framework
- Enablement: Runs training, maintains prompt libraries, supports teams in designing AI workflows
- Measurement: Tracks adoption metrics, time savings, and business impact across the firm
- Innovation: Evaluates new AI capabilities, runs experiments, and recommends new use cases
- Knowledge sharing: Facilitates cross-team learning — what worked for the DD team might apply to compliance
You do not need a large team. A small firm (under 100 people) needs one person at roughly 25% allocation; a mid-size firm (100-500) needs 1-2 dedicated people; a large firm (500+) needs 3-5 people.
CoE boundaries
The CoE is an enablement function, not a control function. It does NOT:
- Build AI tools — that is IT/engineering
- Make investment decisions — that is the investment team
- Own compliance — that is the compliance function
- Replace team-level ownership — each team owns their own AI workflows
The CoE gives teams the training, tools, and governance they need to adopt AI effectively. It does not gatekeep or centralise AI usage.
What is the primary role of an AI Centre of Excellence?
Phase 1: Foundation (Days 1-30)
Here is a concrete, week-by-week plan for moving from "we should do something with AI" to "AI is integrated into our operations."
Week 1-2: Assessment
- Audit current AI usage (who is using what, where)
- Identify top 5 time-consuming, document-heavy workflows
- Interview team leads about pain points and opportunities
Week 3-4: Infrastructure
- Select and procure enterprise AI platform
- Engage IT for security review and deployment
- Establish basic governance framework (use case classification, data handling rules)
- Identify pilot team and workflow
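The "use case classification" in the governance framework above can start as a simple risk-tiered rule set rather than a policy document. A minimal sketch; the tiers and rules below are illustrative assumptions, and your compliance team sets the real ones:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    GREEN = "approved for general use"
    AMBER = "allowed with human review of outputs"
    RED = "prohibited pending compliance sign-off"

@dataclass
class UseCase:
    name: str
    handles_client_data: bool    # does the workflow touch confidential client data?
    output_goes_external: bool   # does AI output leave the firm unedited?

def classify(uc: UseCase) -> RiskTier:
    """Illustrative classification rules, not a compliance standard."""
    if uc.handles_client_data and uc.output_goes_external:
        return RiskTier.RED
    if uc.handles_client_data or uc.output_goes_external:
        return RiskTier.AMBER
    return RiskTier.GREEN
```

Even two boolean questions per use case gives IT and compliance something concrete to review in week 3-4, instead of after the pilot succeeds.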
Phase 2: Pilot (Days 31-60)
Week 5-6: Launch
- Train pilot team (half-day workshop)
- Set up shared AI project with relevant documents and instructions
- Begin using AI in the selected workflow
- Establish measurement baseline
Week 7-8: Iterate
- Weekly check-ins with pilot team
- Refine prompts and workflows based on what works
- Document issues and workarounds
- Collect quantitative data (time savings, output quality)
Phase 3: Scale (Days 61-90)
Week 9-10: Evaluate and decide
- Compile pilot results (before/after metrics, team feedback, example outputs)
- Present business case to leadership
- Get approval for expanded deployment
Week 11-12: Expand
- Onboard 2-3 additional teams
- Train new users using pilot team as mentors ("flight instructor" model)
- Establish prompt library from pilot learnings
- Set up ongoing measurement and reporting cadence
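The prompt library in week 11-12 does not need to be a product; a structured store with owner and workflow metadata is enough to start. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class PromptEntry:
    name: str
    workflow: str     # e.g. "DD review"
    template: str     # prompt text with {placeholders}
    owner: str        # team responsible for keeping it current
    notes: str = ""   # what it is good at, known failure modes

class PromptLibrary:
    def __init__(self) -> None:
        self._entries: dict[str, PromptEntry] = {}

    def add(self, entry: PromptEntry) -> None:
        self._entries[entry.name] = entry

    def for_workflow(self, workflow: str) -> list[PromptEntry]:
        """Everything the library holds for one workflow."""
        return [e for e in self._entries.values() if e.workflow == workflow]

    def render(self, name: str, **values: str) -> str:
        """Fill a template's placeholders for a specific task."""
        return self._entries[name].template.format(**values)
```

Whatever form it takes (a shared document works too), the essentials are the same: each prompt has an owner, a target workflow, and notes on where it breaks.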
After 90 days, you should have:
- An enterprise AI platform deployed with proper governance
- 3-4 teams actively using AI in defined workflows
- Measurable time-savings data
- A prompt library and workflow documentation
- A clear plan for further expansion
Module 11 — Knowledge Check
Most firms that run AI pilots fail to scale them to production. What is the primary reason?
What is the recommended approach to data strategy for AI?
What is the role of an AI Centre of Excellence?