This article teaches the three externality categories — carbon, water, and social — and the measurement methods that produce defensible estimates. The estimates enter the Sustainability-Adjusted Value (SAV) framework from Article 34 and the board-grade reporting of Article 35. Externality accounting is technically demanding; the article is honest about where the methods are mature and where they remain contested.
Carbon accounting for AI
AI workloads consume electricity, which in most grids produces greenhouse gas emissions. Two primary components require accounting: training and inference.
Training carbon
Training a large model consumes significant compute over weeks or months. The carbon footprint depends on the compute hours, the hardware’s energy efficiency, the data center’s Power Usage Effectiveness (PUE), and the grid’s carbon intensity at the training location.
The canonical peer-reviewed reference is Strubell, Ganesh, and McCallum’s 2019 ACL paper “Energy and Policy Considerations for Deep Learning in NLP,” which estimated carbon emissions for several NLP training runs and demonstrated that neural architecture search in particular could emit several hundred tonnes of CO2.1 Patterson et al.’s 2022 IEEE Computer paper provided a more balanced follow-on, arguing that training emissions will plateau and then decline as efficiency gains outpace scale growth.2
For an AI value practitioner, training-carbon accounting typically uses published estimates rather than first-principles computation:
Training CO2e ≈ GPU-hours × average GPU power draw × PUE × grid carbon intensity
Mature cloud providers now report region-specific carbon intensity for their data centers; open-source libraries like CodeCarbon, MLCO2 Impact Calculator, and the Green Software Foundation’s measurement tools wrap the computation. Stanford HAI’s AI Index Report includes training-carbon estimates for frontier models in its annual compute-cost trendline.3
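The formula above can be sketched as a small helper. All figures in the example (GPU-hours, power draw, PUE, grid intensity) are hypothetical placeholders, not measurements for any real model; in practice they come from provider disclosures or tools like CodeCarbon.

```python
# Illustrative sketch of the training-carbon formula above.
# All input figures are hypothetical, not measurements for any real model.

def training_co2e_kg(gpu_hours: float,
                     avg_gpu_power_kw: float,
                     pue: float,
                     grid_kg_co2e_per_kwh: float) -> float:
    """Estimate training emissions in kg CO2e.

    energy (kWh) = GPU-hours x average power per GPU (kW) x PUE
    emissions    = energy x grid carbon intensity (kg CO2e per kWh)
    """
    energy_kwh = gpu_hours * avg_gpu_power_kw * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# Example: 500,000 GPU-hours at 0.4 kW average draw, PUE 1.2,
# grid intensity 0.35 kg CO2e/kWh -> roughly 84 tonnes CO2e.
estimate_kg = training_co2e_kg(500_000, 0.4, 1.2, 0.35)
print(f"{estimate_kg / 1000:.0f} tonnes CO2e")
```

The same structure works whether the inputs are first-principles measurements or published estimates; only the provenance of the four inputs changes.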
Inference carbon
For most enterprise AI features, inference carbon exceeds training carbon over the feature’s lifetime — training happens once, inference happens millions of times. Inference carbon per request depends on the model size, the hardware tier, the context window, and the grid carbon intensity at the inference location.
For managed provider inference (OpenAI, Anthropic, Google, etc.), per-request emissions must typically be estimated from public figures — providers publish partial data. For self-hosted inference, the computation is direct: measured GPU-hours per request × power × PUE × grid intensity.
An accounting discipline: report inference carbon at the workload level (total CO2e per month for the feature) and at the unit level (grams CO2e per request). Both figures matter — workload figures drive organizational totals; unit figures drive comparisons across architectural choices.
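The two-level discipline above can be sketched as follows. The per-request GPU-time, power, PUE, and grid figures are hypothetical; for managed providers they would come from published estimates, for self-hosted inference from measured GPU-time per request.

```python
# Sketch of two-level inference-carbon reporting: unit figures (grams
# CO2e per request) and workload figures (kg CO2e per month).
# All input figures are hypothetical examples.

def inference_carbon(requests_per_month: int,
                     gpu_seconds_per_request: float,
                     gpu_power_kw: float,
                     pue: float,
                     grid_kg_co2e_per_kwh: float) -> dict:
    kwh_per_request = (gpu_seconds_per_request / 3600) * gpu_power_kw * pue
    grams_per_request = kwh_per_request * grid_kg_co2e_per_kwh * 1000
    return {
        "unit_g_co2e_per_request": grams_per_request,
        "workload_kg_co2e_per_month": grams_per_request * requests_per_month / 1000,
    }

# Example: 2M requests/month, 0.5 GPU-seconds each, 0.4 kW, PUE 1.2,
# grid intensity 0.35 kg CO2e/kWh.
report = inference_carbon(2_000_000, 0.5, 0.4, 1.2, 0.35)
```

The unit figure is the one to watch when comparing architectures (a smaller model, a shorter context window); the workload figure is the one that rolls up into organizational totals.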
Water accounting
Data centers use water for cooling; AI workloads intensify water use. The OECD AI Policy Observatory publishes measurement guidance for AI water consumption.4 The accounting is typically per data center and scaled to the workload’s share of data-center compute.
Water consumption is measured two ways. Water Usage Effectiveness (WUE) measures water used per kWh of IT energy. Water Consumption Effectiveness (WCE) measures total water consumed (withdrawn plus evaporated) per kWh. Data-center operators publish WUE; WCE is less commonly reported but more policy-relevant.
For AI value reporting, water accounting typically uses cloud-provider disclosures where available (Google publishes per-region water figures; AWS and Azure provide partial disclosure) and defaults to industry averages where specific figures are unavailable. Water accounting is less mature than carbon accounting; uncertainty bands are wider.
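Because water uncertainty bands are wider, a minimal sketch should carry a range rather than a point estimate. The WUE band below is a hypothetical placeholder; where the provider publishes a per-region figure, use that figure with a narrower band.

```python
# Minimal sketch of workload water accounting via WUE (litres per kWh of
# IT energy), reported as a low/high range to reflect the wider
# uncertainty noted above. The WUE band is a hypothetical placeholder.

def workload_water_litres(it_energy_kwh: float,
                          wue_low_l_per_kwh: float,
                          wue_high_l_per_kwh: float) -> tuple:
    """Water attributed to a workload: its IT energy share x site WUE band."""
    return (it_energy_kwh * wue_low_l_per_kwh,
            it_energy_kwh * wue_high_l_per_kwh)

# Example: a feature consuming 240,000 kWh/month at an assumed
# WUE band of 1.1-2.2 L/kWh.
low_l, high_l = workload_water_litres(240_000, 1.1, 2.2)
```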
Social externalities
Social externalities are the human impacts of AI deployment that fall outside the direct buyer-seller relationship. Three categories most commonly require accounting.
Category 1 — Employment displacement
An AI feature that reduces headcount creates employment effects. Net displacement is often smaller than gross (some workers are redeployed rather than laid off), but it is rarely zero. Honest accounting tracks both gross and net, along with the quality of redeployment (comparable role? reduced compensation? involuntary separation?).
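The gross/net distinction and the redeployment-quality split can be sketched as a small record. The category names and the choice to count downgraded redeployments toward net impact are illustrative conventions, not a standard; an organization would define its own.

```python
# Sketch of gross-vs-net displacement accounting with the
# redeployment-quality split described above. Category names and the
# net-impact convention are illustrative, not a standard.

from dataclasses import dataclass

@dataclass
class DisplacementReport:
    gross_reduced: int          # roles eliminated by the AI feature
    redeployed_comparable: int  # moved to a comparable role
    redeployed_reduced: int     # moved to a role with reduced compensation
    separated: int              # involuntary separations

    @property
    def net_displaced(self) -> int:
        # Illustrative convention: separations plus downgraded
        # redeployments count toward net impact.
        return self.separated + self.redeployed_reduced

# Example: 120 roles reduced; 70 redeployed comparably, 20 redeployed
# with reduced compensation, 30 separated.
report = DisplacementReport(gross_reduced=120,
                            redeployed_comparable=70,
                            redeployed_reduced=20,
                            separated=30)
```

Reporting both `gross_reduced` and `net_displaced`, with the split visible, is what distinguishes honest accounting from a single headline number.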
Stanford HAI and MIT Sloan research have documented patterns of AI-driven workforce change over the past three years; the patterns vary substantially by industry and role.5 Accounting for displacement in a single organization requires distinguishing AI-driven changes from changes driven by other factors (market conditions, reorganization, automation of non-AI processes).
Category 2 — Equity and fairness impact
When AI affects individuals differently across protected classes, equity externalities accrue. A credit-scoring system that approves at different rates across demographic groups imposes a distributional cost on the disadvantaged group; a hiring system that screens out certain resume patterns imposes a cost on candidates whose resumes fit those patterns.
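A first screening step for approval-rate disparity can be sketched with the widely used four-fifths (80%) rule of thumb. The rates below are hypothetical, and a ratio below the threshold is a flag for deeper analysis, not a legal finding.

```python
# Illustrative screening for approval-rate disparity across groups,
# using the four-fifths (80%) rule of thumb as a threshold.
# The rates are hypothetical examples.

def disparate_impact_ratio(rate_disadvantaged: float,
                           rate_advantaged: float) -> float:
    """Ratio of selection rates; values below ~0.8 flag potential adverse impact."""
    return rate_disadvantaged / rate_advantaged

# Example: 48% approval in one group vs 72% in another.
ratio = disparate_impact_ratio(0.48, 0.72)
flagged = ratio < 0.8  # True here: the ratio is about 0.67
```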
The Dutch Toeslagenaffaire remains the most-cited case of systemic equity failure from algorithmic decision-making. Parliamentary inquiry documentation quantified the distributional impact — disproportionate wrongful benefit claims against specific ethnic groups — and the corrective actions required.6 For AI value practitioners, the case illustrates that equity externalities are not abstract: they reach monetary quantification through regulatory penalty, compensation liabilities, and reputational damage.
Category 3 — Privacy and surveillance externalities
Data-intensive AI creates privacy externalities. Even when individual consent is secured, the aggregate effect of collecting and processing data at scale changes the privacy landscape in ways individuals cannot negotiate individually. Organizations under GDPR, CCPA, and analogous regimes now face explicit cost pathways (fines, remediation) for privacy externalities; broader ethical accounting assigns costs even where regulatory exposure is absent.
Integration with the value equation
Externality accounting changes the value equation in three ways. Each is demonstrated in the Sustainability-Adjusted Value framework of Article 34.
Change 1 — Cost side. Externalities are costs borne by society; responsible accounting adds them to the feature’s TCO. A feature whose carbon cost is quantified is a feature whose TCO includes that carbon.
Change 2 — Risk side. Regulatory externalities (fines, remediation) and reputational externalities (brand damage, customer attrition) are quantifiable risks that enter rNPV sensitivity analysis.
Change 3 — Benefit side. AI features that produce positive externalities — reducing emissions, improving equity, enhancing privacy — get credit for them, just as they should bear cost for negative ones.
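The three changes can be sketched together in a single adjusted-value computation. The structure below is a simplification of the SAV framework in Article 34, and every figure is hypothetical.

```python
# Sketch of the three-way integration: externality cost on the cost side,
# probability-weighted regulatory exposure on the risk side, and credit
# for positive externalities on the benefit side. A simplification of the
# SAV framework; all figures are hypothetical.

def sustainability_adjusted_value(gross_benefit: float,
                                  tco: float,
                                  externality_cost: float,     # monetized carbon + water
                                  regulatory_exposure: float,  # potential fine/remediation
                                  exposure_probability: float,
                                  positive_externality: float) -> float:
    adjusted_tco = tco + externality_cost                       # Change 1: cost side
    expected_risk = regulatory_exposure * exposure_probability  # Change 2: risk side
    adjusted_benefit = gross_benefit + positive_externality     # Change 3: benefit side
    return adjusted_benefit - adjusted_tco - expected_risk

# Example figures (all hypothetical, in currency units).
value = sustainability_adjusted_value(
    gross_benefit=1_000_000, tco=400_000, externality_cost=25_000,
    regulatory_exposure=200_000, exposure_probability=0.1,
    positive_externality=15_000)
```

Keeping the three adjustments in one computation is what prevents the split-narrative pattern: the same numbers feed the financial page and the ESG page.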
The three-way integration prevents the pattern where externalities are acknowledged in an ESG report but ignored in the value conversation. Boards that read one narrative on the financial page and a different narrative on the ESG page lose confidence in both.
Regulatory anchors
Three regulatory regimes shape externality reporting requirements, each with implementation dates that practitioners should track.
EU CSRD (Corporate Sustainability Reporting Directive) and ESRS (European Sustainability Reporting Standards). Phased-in from 2024 through 2028 depending on company size, the CSRD requires disclosure across environmental, social, and governance dimensions. AI-specific disclosure is not separately mandated, but AI’s material contribution to the enterprise carbon footprint is covered under general reporting.
SEC Climate Disclosure Rules. The SEC’s 2024 rules (currently the subject of ongoing legal proceedings) would require US-registered public companies to disclose climate-related risks including those tied to significant AI workloads.
OECD AI Principles and Environmental Compute Work. The OECD’s AI Policy Observatory publishes environmental-compute research that increasingly anchors national AI policy frameworks. Non-binding but influential across OECD member states.4
The practitioner’s job is not to be the compliance expert on any of these — that is the compliance team’s job — but to be the measurement expert who produces the numbers those teams need. Close collaboration with compliance, legal, and sustainability teams is essential; externalities sit at exactly the intersection of all three.
The greenwashing risk
Externality accounting is a domain where sloppy practice invites greenwashing accusations. Three practices protect the program.
Cite sources. Every externality estimate should footnote its source and its computation method. Estimates with no source are indistinguishable from invention.
Report uncertainty. Externality estimates are rarely precise. Reporting point estimates without uncertainty bands is a greenwashing signal; reporting ranges with confidence intervals is credible.
Disclose limits. Methods used today will be improved tomorrow; current estimates may under- or over-state actual impact. Disclosure of method limits, expected future refinements, and ongoing research is a discipline marker.
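The first two practices can be made mechanical: an estimate never leaves the reporting layer without its range and its source. The field layout and figures below are illustrative.

```python
# Sketch of range-based externality reporting: every estimate carries an
# explicit low/high band and a cited source, never a bare point figure.
# Field layout and figures are illustrative.

def format_estimate(name: str, low: float, point: float, high: float,
                    unit: str, source: str) -> str:
    return (f"{name}: {point:,} {unit} "
            f"(range {low:,}-{high:,}), source: {source}")

# Example line for a monthly inference-carbon figure.
line = format_estimate("Inference carbon", 35, 47, 62,
                       "kg CO2e/month",
                       "provider disclosure + CodeCarbon run")
```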
Cross-reference to Core Stream
- EATF-Level-1/M1.5-Art02-The-Global-AI-Regulatory-Landscape.md — regulatory landscape context.
- Core Stream treatment of sustainability in AI transformation.
Self-check
- A feature’s inference carbon has been estimated using a published per-request figure for a similar model on a similar provider. What disclosure should accompany the estimate?
- An AI-driven workforce-reduction program reduced headcount by 120; of those, 90 were redeployed and 30 were separated. How is this reported?
- A sustainability report states “our AI emissions are minimal” with no underlying data. What is the likely audit finding?
- An EU-based organization subject to CSRD must disclose AI-related climate risks. What measurement outputs from the AI value practitioner are likely required?
Further reading
- Strubell, Ganesh, and McCallum, Energy and Policy Considerations for Deep Learning in NLP, ACL 2019.
- Patterson et al., The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink, IEEE Computer 2022.
- OECD AI Policy Observatory, Measuring the Environmental Impacts of AI Compute and Applications (2022, updated 2024).
Footnotes
1. Emma Strubell, Ananya Ganesh, and Andrew McCallum, Energy and Policy Considerations for Deep Learning in NLP, Proceedings of the 57th Annual Meeting of the ACL (2019). https://aclanthology.org/P19-1355/
2. David Patterson, Joseph Gonzalez, Urs Hölzle, et al., The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink, IEEE Computer 55, no. 7 (2022). https://doi.org/10.1109/MC.2022.3148714
3. Stanford Institute for Human-Centered AI, AI Index Report (2024, 2025 editions). https://aiindex.stanford.edu/report/
4. Organisation for Economic Co-operation and Development, Measuring the Environmental Impacts of AI Compute and Applications, AI Policy Observatory (2022, updated 2024). https://oecd.ai/en/env-aicompute
5. MIT Sloan Management Review and Boston Consulting Group, State of AI at Work series (2020–2025). https://sloanreview.mit.edu/
6. Parlementaire ondervragingscommissie Kinderopvangtoeslag, Ongekend Onrecht (Dutch parliamentary inquiry final report, December 2020). https://www.tweedekamer.nl/kamerstukken/detail?id=2020D51917