Not all workflows are equal
You documented your top 3 workflows in the last module. Now the question is: which one do you tackle first?
Gut instinct is unreliable here. The workflow that frustrates you most may not be the one with the best return on investment. The one your MD cares about may not be the one that's easiest to automate. You need a scoring system.
The VALUE framework gives you one. Five dimensions, each scored 1-5, producing a total that lets you compare workflows on a level playing field.
The VALUE framework
V = Volume — How often does this workflow run?
A = Accuracy impact — What happens when this workflow produces errors?
L = Labour intensity — How many person-hours does it consume per cycle?
U = Urgency — How time-sensitive is the output?
E = Ease of automation — How document-heavy and structured is the workflow?
Each dimension is scored 1-5. A workflow scoring 25/25 is high-frequency, high-stakes, labour-intensive, time-sensitive, and highly structured — the ideal candidate for AI integration. A workflow scoring 8/25 might still benefit from AI, but it is not where you start.
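The scoring logic can be sketched as a small helper. This is an illustrative Python snippet, not part of the framework itself; the dimension keys and function name are my own:

```python
# The five VALUE dimensions, each scored 1-5 (names are illustrative).
DIMENSIONS = ("volume", "accuracy", "labour", "urgency", "ease")

def value_score(scores: dict) -> int:
    """Sum the five 1-5 VALUE dimension scores into a total out of 25."""
    for dim in DIMENSIONS:
        if not 1 <= scores[dim] <= 5:
            raise ValueError(f"{dim} must be scored 1-5, got {scores[dim]}")
    return sum(scores[dim] for dim in DIMENSIONS)

# A high-frequency, high-stakes, labour-intensive, urgent, structured workflow:
total = value_score({"volume": 5, "accuracy": 4, "labour": 5, "urgency": 4, "ease": 5})
print(f"{total}/25")  # 23/25
```

The validation step matters in practice: a score outside 1-5 on any dimension silently distorts comparisons between workflows.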
V — Volume
How frequently does this workflow execute? A process that runs 50 times per month delivers 50x the value of one that runs once a month. High volume means that even modest per-cycle improvements compound into significant returns.
| Score | Frequency |
|---|---|
| 1 | Annually or less |
| 2 | Quarterly |
| 3 | Monthly |
| 4 | Weekly |
| 5 | Daily or multiple times per day |
Think about your top candidate workflow. How often does it run?
A — Accuracy impact
What happens when this workflow produces an error? A formatting mistake in an internal memo is an annoyance. A data error in a regulatory filing is a material risk. The higher the consequences of error, the more valuable it is to have AI provide a second layer of review — or to produce the output with greater consistency than manual work.
| Score | Impact of error |
|---|---|
| 1 | Cosmetic — minor inconvenience, easily caught |
| 2 | Operational — creates rework but no external impact |
| 3 | Reputational — visible to clients or counterparties |
| 4 | Financial — direct monetary impact or incorrect decisions |
| 5 | Regulatory — compliance breach, legal exposure, or audit findings |
What is the accuracy impact of your top candidate workflow?
L — Labour intensity
How many person-hours does this workflow consume per cycle? This is the most tangible dimension because it translates directly into recoverable time. A workflow consuming 40 person-hours per cycle that AI compresses by 60% returns 24 hours to higher-value work — per cycle.
| Score | Person-hours per cycle |
|---|---|
| 1 | Under 1 hour |
| 2 | 1-4 hours |
| 3 | 4-10 hours |
| 4 | 10-40 hours |
| 5 | 40+ hours |
How labour-intensive is your top candidate workflow per cycle?
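The recoverable-time arithmetic from the example above (40 person-hours compressed by 60% returns 24 hours per cycle) generalises to a one-line formula. A minimal sketch, with the function name and figures being illustrative:

```python
def hours_recovered(hours_per_cycle: float, compression: float,
                    cycles_per_year: int = 1) -> float:
    """Hours returned to higher-value work:
    per-cycle hours x fraction AI removes x number of cycles."""
    return hours_per_cycle * compression * cycles_per_year

# The example above: 40 person-hours per cycle, compressed by 60%.
print(hours_recovered(40, 0.60))      # 24.0 hours per cycle
print(hours_recovered(40, 0.60, 12))  # 288.0 hours per year if it runs monthly
```

Multiplying by cycle count is where Volume and Labour intensity interact: the same 60% compression is worth far more on a monthly workflow than an annual one.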
U — Urgency
How time-sensitive is the output? A pitch book needed for tomorrow's client meeting is not the same as an internal research note with no hard deadline. Urgency matters because AI's speed advantage is most valuable when speed itself is valuable.
| Score | Time sensitivity |
|---|---|
| 1 | No deadline pressure — delivered when convenient |
| 2 | Soft deadlines — days to weeks of flexibility |
| 3 | Defined deadlines — specific due dates that matter |
| 4 | Tight turnaround — hours to respond, not days |
| 5 | Real-time or same-day — competitive advantage depends on speed |
How urgent is the output of your top candidate workflow?
E — Ease of automation
How document-heavy and structured is this workflow? AI excels at processing documents, extracting information, and producing structured outputs. It struggles with workflows that depend on tacit knowledge, relationship dynamics, or unstructured creative work.
| Score | Workflow characteristics |
|---|---|
| 1 | Highly unstructured — depends on tacit knowledge, relationships, or creative judgment |
| 2 | Semi-structured — some documented steps but heavy reliance on individual expertise |
| 3 | Moderately structured — clear process with defined inputs/outputs, some judgment required |
| 4 | Well-structured — document-heavy, rule-based steps, clear quality criteria |
| 5 | Highly structured — standardised inputs, clear rules, consistent output format |
How structured and document-heavy is your top candidate workflow?
Scoring your workflows
Now score all three of your candidate workflows. For each one, assign a 1-5 score on each VALUE dimension and calculate the total.
| Dimension | Workflow 1 | Workflow 2 | Workflow 3 |
|---|---|---|---|
| V — Volume | ___ | ___ | ___ |
| A — Accuracy impact | ___ | ___ | ___ |
| L — Labour intensity | ___ | ___ | ___ |
| U — Urgency | ___ | ___ | ___ |
| E — Ease of automation | ___ | ___ | ___ |
| Total | ___/25 | ___/25 | ___/25 |
Your highest-scoring workflow is your priority candidate. But look at the pattern too — a workflow scoring high on Volume and Labour but low on Ease may need more preparation (better documentation, more structured inputs) before AI integration is practical.
The prioritisation matrix
For a sharper view, plot your workflows on a 2x2 matrix:
- X-axis: Ease of automation (the E score)
- Y-axis: Impact (the sum of V + A + L + U)
This gives you four quadrants:
| | Low Ease (1-2) | High Ease (3-5) |
|---|---|---|
| High Impact (12-20) | Strategic bets — high payoff but requires investment in workflow restructuring | Quick wins — start here, highest ROI with lowest effort |
| Low Impact (4-11) | Deprioritise — low return and hard to implement | Efficiency gains — easy to implement but modest returns; good for building momentum |
Start in the top-right quadrant. High impact, high ease. This is where you prove the value of AI at your firm and build the credibility to tackle the strategic bets later.
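The quadrant placement can be expressed as a small classifier using the thresholds from the matrix above (the function and label strings are my own, for illustration):

```python
def quadrant(e_score: int, impact: int) -> str:
    """Place a workflow on the 2x2 prioritisation matrix.
    Ease: 1-2 low, 3-5 high. Impact = V + A + L + U: 4-11 low, 12-20 high."""
    high_ease = e_score >= 3
    high_impact = impact >= 12
    if high_impact and high_ease:
        return "Quick win"
    if high_impact:
        return "Strategic bet"
    if high_ease:
        return "Efficiency gain"
    return "Deprioritise"

print(quadrant(e_score=4, impact=16))  # Quick win
print(quadrant(e_score=1, impact=16))  # Strategic bet
```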
Example — three common workflows scored
Here is how three common financial services workflows might score. Your numbers will differ, but this shows the framework in action.
DD Document Review (reviewing a 200-page CIM and extracting key terms, risks, and financials):
- V: 3 (monthly, deal-dependent) | A: 4 (errors affect investment decisions) | L: 5 (40+ hours) | U: 4 (deal timelines are tight) | E: 4 (document-heavy, structured output)
- Total: 20/25 — High impact, high ease. Strong first candidate.
Portfolio Monitoring Report (weekly summary of portfolio company performance):
- V: 4 (weekly) | A: 3 (visible to partners and LPs) | L: 3 (6-8 hours) | U: 3 (defined weekly deadline) | E: 4 (template-driven, data-heavy)
- Total: 17/25 — Solid candidate with high frequency compounding the value.
Pitch Book Generation (creating bespoke client presentations):
- V: 3 (monthly) | A: 3 (client-facing) | L: 4 (15-25 hours) | U: 4 (often urgent) | E: 2 (requires creative judgment, client-specific narrative)
- Total: 16/25 — Good score but low Ease means higher implementation complexity.
In this example, DD document review is the clear first priority — highest total score and high Ease. The pitch book scores well overall but sits in the "strategic bets" quadrant because of the creative judgment required.
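As a sanity check, the three example scores can be totalled and ranked programmatically (the workflow names and dict layout are illustrative):

```python
# Scores from the worked example above; total = V + A + L + U + E.
workflows = {
    "DD document review":          {"V": 3, "A": 4, "L": 5, "U": 4, "E": 4},
    "Portfolio monitoring report": {"V": 4, "A": 3, "L": 3, "U": 3, "E": 4},
    "Pitch book generation":       {"V": 3, "A": 3, "L": 4, "U": 4, "E": 2},
}

ranked = sorted(workflows.items(), key=lambda kv: sum(kv[1].values()), reverse=True)
for name, s in ranked:
    total = sum(s.values())
    impact = total - s["E"]  # Impact axis of the matrix = V + A + L + U
    ease = "high" if s["E"] >= 3 else "low"
    print(f"{name}: {total}/25 (impact {impact}, {ease} ease)")
```

Running this reproduces the ranking in the text: DD document review first at 20/25, then the portfolio report at 17/25, then the pitch book at 16/25 with low ease despite high impact.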
If a firm has capacity for only one AI pilot, which workflow should it prioritise based on the VALUE scores above?
Module 4 — Final Assessment
1. What does the 'V' in the VALUE framework stand for?
2. A workflow scores high on Impact (V+A+L+U = 16) but low on Ease (E = 1). Where does it fall on the prioritisation matrix?
3. Why should firms generally start with 'quick wins' rather than 'strategic bets'?
4. In the VALUE framework, which dimension measures what happens when a workflow produces errors?