You do not need to become a data scientist
This module is not about turning you into a machine learning engineer. It is about giving you enough understanding of how AI works to make good decisions about where and how to use it in your ESG workflows.
Think of it this way: you do not need to understand how a car engine works to drive well. But you do need to know that a car needs fuel, cannot fly, and will not stop instantly on a wet road. The same applies to AI — you need to understand its fuel (data), its capabilities, and its limitations.
The good news is that every concept in this module maps directly to something you already deal with in ESG work.
How would you describe your current understanding of AI?
The context window — how much can AI see at once?
When you give an AI tool a document to analyse, it does not read the way you do, sequentially, building understanding page by page. It processes the entire input at once within what is called a context window. Think of it as the AI's working desk: everything it can see and reason about at one time.
Here is why this matters for ESG teams:
A typical supplier sustainability questionnaire is 3-5 pages. A context window that can handle 100,000 tokens (roughly 75,000 words) could process 15 to 25 supplier questionnaires simultaneously. That means AI can read a batch of supplier responses, compare them against each other, identify outliers, and flag missing data — all in a single pass.
But context windows have limits. Your full CSRD report might be 200 pages. Your complete supplier database might be thousands of questionnaires. You cannot dump everything in at once. You need to structure your work into batches that fit within the window — just like you would organise a physical desk.
Understanding context window size helps you design workflows: Can you process all your Scope 3 supplier data in one pass, or do you need to batch it? Can you cross-reference your full CSRD and TCFD disclosures simultaneously, or do you need to compare section by section?
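The batching question above comes down to simple arithmetic. A minimal sketch, assuming the common rule of thumb of roughly 0.75 words per token (actual tokenisation varies by model) and reserving part of the window for instructions and the model's response:

```python
# Rough context-window budgeting sketch. The 0.75 words-per-token ratio is a
# common rule of thumb, not a guarantee; verify with your model's tokeniser.

WORDS_PER_TOKEN = 0.75

def batches_needed(doc_word_counts, context_tokens=100_000, reserve_tokens=20_000):
    """Greedily pack documents into batches that fit one context window.

    reserve_tokens leaves headroom for your instructions and the response.
    """
    budget = (context_tokens - reserve_tokens) * WORDS_PER_TOKEN  # usable words
    batches, current, used = [], [], 0
    for words in doc_word_counts:
        if used + words > budget and current:
            batches.append(current)       # start a new batch when full
            current, used = [], 0
        current.append(words)
        used += words
    if current:
        batches.append(current)
    return batches

# e.g. 40 supplier questionnaires of ~2,500 words each
print(len(batches_needed([2_500] * 40)))  # → 2
```

The same logic answers the CSRD/TCFD question: if two full disclosures together exceed the budget, you compare section by section instead.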
Capabilities — where AI excels in ESG workflows
AI is not magic, but it is genuinely good at several things that happen to be exactly what ESG teams need. Here are the core capabilities, mapped to ESG applications:
Data extraction from unstructured documents. AI can read a PDF supplier questionnaire and extract the specific metrics you need — energy consumption, waste volumes, water usage, workforce diversity figures — regardless of how the supplier formatted their response. This is the single most time-saving capability for most ESG teams.
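In practice, extraction works best when you ask the AI for a fixed schema and validate its response before any figure enters your pipeline. A hedged sketch, with illustrative field names and no real API call (the prompt text and schema are assumptions, not a standard):

```python
import json

# Illustrative extraction prompt: ask for JSON with named fields, then
# validate the response against that schema before trusting any value.

EXTRACTION_PROMPT = """Extract these fields from the attached questionnaire
and return JSON only: energy_kwh (number), waste_tonnes (number),
water_m3 (number), page_refs (list of page numbers, one per value)."""

REQUIRED_FIELDS = ("energy_kwh", "waste_tonnes", "water_m3", "page_refs")

def parse_extraction(raw: str) -> dict:
    data = json.loads(raw)  # reject anything that is not valid JSON
    for key in REQUIRED_FIELDS:
        if key not in data:
            raise ValueError(f"missing field: {key}")  # flag, never guess
    return data

sample = ('{"energy_kwh": 125000, "waste_tonnes": 38.5, '
          '"water_m3": 9200, "page_refs": [2, 3, 3]}')
print(parse_extraction(sample)["energy_kwh"])  # → 125000
```

The schema check is the point: a malformed or incomplete response fails loudly instead of silently producing a gap in your data.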
Cross-referencing across frameworks. Give AI your emissions data and it can map it to CSRD/ESRS data points, TCFD recommended disclosures, SEC Climate Disclosure line items, and ISSB metrics simultaneously. It can tell you where the same data point needs to appear in different forms and flag where your disclosures are inconsistent.
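Conceptually, a cross-framework mapping is just a lookup from one data point to its home in each framework. The references below are indicative only; always verify the exact data-point codes against the current text of each standard:

```python
# Illustrative mapping for a single data point (gross Scope 1 GHG emissions).
# Framework references are indicative, not authoritative; verify before use.

SCOPE1_MAPPING = {
    "CSRD/ESRS": "ESRS E1-6 (gross Scope 1 GHG emissions)",
    "TCFD": "Metrics & Targets, recommended disclosure (b)",
    "ISSB": "IFRS S2 GHG emissions metrics (para 29(a))",
    "SEC Climate Disclosure": "Scope 1 emissions disclosure, if material",
}

for framework, datapoint in SCOPE1_MAPPING.items():
    print(f"{framework}: {datapoint}")
```

Multiply this by hundreds of data points and the value of automated cross-referencing becomes obvious: the same figure must appear consistently everywhere it is required.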
Pattern recognition and anomaly detection. AI can scan three years of emissions data and flag that one facility's energy consumption dropped by 40% year-over-year — which might indicate a real improvement or a data collection error. It spots patterns that humans miss when working through thousands of rows.
Report generation and narrative drafting. Given structured data and guidelines, AI can produce first drafts of disclosure narratives. Not final drafts — but starting points that are 60-70% of the way there, saving your team hours of blank-page writing.
Unit conversion and standardisation. Your suppliers report energy in kWh, MJ, therms, and BTUs. AI handles these conversions automatically and consistently, eliminating one of the most common sources of error in ESG data.
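A minimal sketch of what consistent standardisation looks like, converting everything to kWh. The conversion factors are standard physical constants, but you should still verify them against your reporting protocol:

```python
# Energy-unit standardisation to kWh. Factors are standard constants;
# confirm against your reporting protocol before production use.

TO_KWH = {
    "kWh": 1.0,
    "MJ": 1 / 3.6,          # 1 kWh = 3.6 MJ
    "therm": 29.3071,       # 1 therm ≈ 29.3071 kWh
    "BTU": 0.000293071,     # 1 BTU ≈ 0.000293071 kWh
}

def to_kwh(value: float, unit: str) -> float:
    if unit not in TO_KWH:
        raise ValueError(f"Unknown unit: {unit!r}")  # flag, do not guess
    return value * TO_KWH[unit]

print(round(to_kwh(3_600, "MJ")))  # → 1000
```

Note the design choice: an unrecognised unit raises an error rather than passing the value through, so a supplier's typo cannot silently corrupt your totals.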
Limitations — where AI falls short in ESG
Understanding limitations is more important than understanding capabilities. Misplaced trust in AI creates risk. Here is what AI cannot do for your ESG work:
It cannot verify physical emissions. AI can process the numbers your suppliers report, but it cannot independently confirm that those numbers reflect reality. If a supplier under-reports their Scope 1 emissions, AI will not catch it unless the numbers are implausible compared to industry benchmarks or the supplier's own historical data.
It cannot replace professional judgment on materiality. AI can help you identify which topics might be material, but the double materiality assessment required by CSRD ultimately requires human judgment about your specific business context, stakeholder expectations, and impact thresholds.
It needs source data to work with. AI cannot generate emissions data from nothing. If a supplier has not responded to your questionnaire, AI cannot fill in the gap. It can estimate using industry averages or proxy data — but you need to clearly flag those estimates as such, and they may not meet assurance requirements.
It can hallucinate — confidently generating plausible but incorrect information. AI might cite a regulation that does not exist, produce an emission factor that is wrong, or generate a disclosure narrative that misrepresents your actual performance. Every AI output in ESG reporting must be verified by a qualified human.
It does not inherently create an audit trail. Unless you deliberately design your workflow to capture inputs, prompts, and outputs, AI-assisted reporting can create a "black box" that auditors and assurance providers cannot inspect.
Which AI limitation do you think poses the greatest risk for your ESG team?
Data quality and garbage in, garbage out
In traditional ESG reporting, bad data is slow. A team member manually entering supplier data might notice that a reported emissions figure looks wrong and flag it. The process is slow, but the human in the loop catches some errors.
With AI, bad data is fast. AI will process a supplier questionnaire with obviously wrong units, internally inconsistent figures, or missing decimal points — and produce a clean-looking output that hides the underlying problems. The speed that makes AI valuable also makes it dangerous if your data quality controls are not robust.
This is not a reason to avoid AI. It is a reason to pair AI with explicit data quality checks:
- Validation rules: Set thresholds for plausibility. If a supplier's reported emissions are 10x higher or lower than their industry average, flag it for human review.
- Consistency checks: Compare current-year data against prior-year data for the same supplier. Significant changes should trigger review.
- Completeness checks: AI should report what percentage of required fields were present in each supplier response and flag gaps.
- Source traceability: Every AI-extracted data point should link back to the specific page and section of the source document.
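The four checks above can be sketched as a single review function. Field names and thresholds here are illustrative assumptions, not a standard schema; tune both to your own data:

```python
# Hedged sketch of the four data quality checks on one AI-extracted record.
# Field names ("scope1_tco2e", "source_ref") and thresholds are illustrative.

def review_flags(record, prior_year, industry_avg,
                 required=("energy_kwh", "scope1_tco2e", "water_m3")):
    flags = []
    e = record.get("scope1_tco2e")
    # Validation: flag values more than 10x off the industry average
    if e is not None and industry_avg and not (industry_avg / 10 <= e <= industry_avg * 10):
        flags.append("implausible vs industry average")
    # Consistency: flag large year-over-year swings
    prev = prior_year.get("scope1_tco2e")
    if e is not None and prev and abs(e - prev) / prev > 0.4:
        flags.append("40%+ change vs prior year")
    # Completeness: report which required fields are missing
    missing = [f for f in required if record.get(f) is None]
    if missing:
        flags.append(f"missing fields: {missing}")
    # Source traceability: every record should cite its source location
    if not record.get("source_ref"):
        flags.append("no source reference")
    return flags
```

A record that comes back with an empty flag list can flow straight through; anything flagged goes to a human, which is exactly the division of labour described above.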
The goal is not to eliminate human review — it is to focus human review where it matters most, while AI handles the high-volume, low-judgment processing.
Auditability — the non-negotiable requirement
CSRD requires limited assurance on sustainability disclosures, moving to reasonable assurance over time. The SEC's Climate Disclosure rules require disclosures that meet the same standards as financial reporting. This means your ESG data pipeline — including any AI-assisted steps — must be auditable.
Auditability means an external assurance provider can trace any disclosed number from the final report back through every transformation to the original source document. If AI was involved in extracting, converting, or calculating that number, the auditor needs to see:
- The source document — the original supplier questionnaire, utility bill, or data file
- The prompt or instruction given to the AI — what was it asked to extract?
- The raw AI output — exactly what the AI returned, before any human editing
- The human review decision — who reviewed it, when, and what changes were made
- The final disclosed figure — and how it connects to all of the above
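One way to make those five elements concrete is a single record type captured at every AI-assisted step. This is an illustrative sketch; the field names are assumptions, and the exact shape should be agreed with your assurance provider:

```python
# Illustrative audit-trail record covering the five elements listed above.
# Field names are assumptions; adapt the schema to your assurance provider.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable: audit records should not be edited
class AuditRecord:
    source_document: str   # the original questionnaire, bill, or data file
    prompt: str            # the instruction given to the AI
    raw_output: str        # the AI's response, before any human editing
    reviewer: str          # who reviewed it
    review_notes: str      # what was changed, and why
    final_value: str       # the figure as disclosed
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = AuditRecord(
    source_document="supplier_questionnaire_2024.pdf",
    prompt="Extract total energy consumption in kWh",
    raw_output='{"energy_kwh": 125000}',
    reviewer="J. Smith",
    review_notes="No changes; value matches p.3 of source",
    final_value="125,000 kWh",
)
print(record.final_value)
```

Because each record links the source document, the prompt, the raw output, the review, and the disclosed figure, an auditor can walk the chain in either direction.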
If this sounds like a lot of documentation, it is. But it is the same documentation discipline that finance teams have maintained for decades around financial reporting. ESG reporting is simply catching up to the same standard.
The practical implication: when you build AI-assisted ESG workflows, build the audit trail first. Do not bolt it on later. Every module in this course will address auditability as a design requirement, not an afterthought.
Module 2 — Final Assessment
What is the best ESG analogy for an AI context window?
Why is AI hallucination particularly dangerous in ESG reporting?
What is the correct approach to data quality when using AI for ESG data processing?
What does auditability require in an AI-assisted ESG reporting workflow?