COMPEL Specialization — AITE-VDT: AI Value & Analytics Expert (Article 11 of 35)
A CFO reads an AI investment business case that projects US$18.2M of three-year value. The case is neatly built, the TCO is decomposed, the unit economics tie to the run cost. The CFO asks a closing question: if three of your assumptions are wrong by 20%, what does this number become? The value lead does not have the answer in the document. The conversation ends with a promise to re-circulate an updated analysis. The value lead spends the next three days running sensitivity cases and discovers that two of the eight assumptions can move the outcome by more than US$10M each, and that a reasonable adverse combination produces a negative rNPV. The feature is still a good investment, but not for the reasons the original case argued, and the conditions under which it remains a good investment are narrower than the single-point figure implied.
Why a single-point estimate is structurally misleading
An AI business case has multiple compounding uncertainties. Adoption rate, unit cost, benefit per successful action, drift rate, sustained-engagement rate — each is a genuine estimate, not a known quantity. A single-point NPV calculation implies a precision that does not exist. Worse, the single point tends to be the optimistic case (even before accounting for optimism bias from Article 6), because the team building the case is typically the team that wants the feature approved and has calibrated each assumption toward the favourable end of its plausible range.
Sensitivity analysis corrects this structural distortion in two ways. First, it surfaces which assumptions matter most — a case whose outcome is highly sensitive to adoption rate is a case where adoption-rate management becomes the first-order risk. Second, it produces a range rather than a point, allowing the decision-maker to decide under the uncertainty that actually exists rather than under a constructed false certainty.
Gartner’s AI Hype Cycle methodology, applied across its 2023–2025 updates, explicitly emphasises scenario-planning discipline as a maturity marker for AI investment governance.1 Organisations whose AI portfolios pass Gartner’s “execution maturity” threshold invariably practise sensitivity analysis at business-case time. BCG’s scenario-planning method, applied across AI at Scale client work, has the same discipline core: never approve an AI investment without a range, and never commit to the base case without understanding the worst case.2
One-way sensitivity — the tornado chart
One-way sensitivity varies a single input while holding all others at their base-case value, and plots the resulting range of the outcome. The visual is a tornado chart: a horizontal bar chart with the input variables on the vertical axis (ordered by magnitude of impact), the outcome variable on the horizontal, and each bar extending from the low-assumption outcome to the high-assumption outcome for that variable.
The tornado chart is the most informative single-view sensitivity artifact. It answers the CFO’s implicit question — “which of my assumptions matters” — at a glance. The variables at the top of the chart are the ones that move the outcome most; they are where measurement attention, risk-mitigation attention, and post-launch monitoring attention should concentrate.
The AITE-VDT standard tornado chart includes every input with a plausible range and cuts off at the point where a variable’s range moves the outcome by less than 5% of the base case. Variables below that threshold are parameters, not uncertainties, and cluttering the chart with them dilutes the signal.
A practitioner building the chart selects the variable ranges carefully. The ranges are not arbitrary; they reflect realistic plausible variation. An adoption rate plausibly ranging 40–75% (rather than an arbitrary ±20%) is the responsible input; a range that runs from the most pessimistic realistic case to the most optimistic realistic case produces a chart the CFO can act on rather than one that imposes a symmetry the uncertainty does not actually have.
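A minimal sketch of the one-way pass behind such a chart follows; the value model, variable names, and ranges are hypothetical stand-ins chosen for illustration, not the article's actual business case.

```python
# One-way sensitivity sweep behind a tornado chart. The value model and
# every range below are illustrative assumptions, not the article's case.

BASE = {
    "adoption_rate": 0.60,        # fraction of eligible users who adopt
    "benefit_per_action": 14.0,   # US$ benefit per successful action
    "unit_cost": 3.2,             # US$ run cost per action
    "actions_per_user": 900,      # actions per adopter over the horizon
    "eligible_users": 4_000,
    "fixed_cost": 6_000_000,      # build + integration + governance, US$
}

# Realistic low/high bounds per variable -- deliberately asymmetric,
# e.g. adoption 0.40-0.75 rather than an arbitrary +/-20%.
RANGES = {
    "adoption_rate": (0.40, 0.75),
    "benefit_per_action": (10.0, 16.0),
    "unit_cost": (2.4, 4.5),
    "actions_per_user": (650, 1_050),
    "fixed_cost": (5_200_000, 8_400_000),
}

def npv(p):
    """Toy model: net benefit per action x volume, minus fixed cost."""
    volume = p["adoption_rate"] * p["eligible_users"] * p["actions_per_user"]
    return (p["benefit_per_action"] - p["unit_cost"]) * volume - p["fixed_cost"]

base_npv = npv(BASE)
bars = []
for var, (lo, hi) in RANGES.items():
    outcomes = sorted([npv({**BASE, var: lo}), npv({**BASE, var: hi})])
    bars.append((var, outcomes[0], outcomes[1]))

bars.sort(key=lambda b: b[2] - b[1], reverse=True)  # widest bar on top
print(f"base NPV: US${base_npv:,.0f}")
for var, low, high in bars:
    if high - low < 0.05 * abs(base_npv):  # the 5%-of-base cut-off
        break                              # below this: parameter, not uncertainty
    print(f"{var:>20}: US${low:>12,.0f} .. US${high:>12,.0f}")
```

Printed in descending span order, the output is the tornado chart in tabular form: the top rows are the monitoring priorities.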
[DIAGRAM: MatrixDiagram — tornado-chart-sensitivity — horizontal bar layout with variables ranked by impact magnitude (adoption rate, token-price trend, unit-economics ratio, drift rate, refresh frequency, integration cost, governance cost, retirement reserve); each bar showing the NPV range under low/high assumption for that variable while others are at base; primitive teaches the tornado chart as the executive-grade sensitivity artifact.]
Multi-way sensitivity — because variables move together
One-way sensitivity is useful but incomplete. Most real-world risks materialise as several assumptions moving adversely at once. An economic downturn depresses adoption, compresses unit economics, and accelerates drift simultaneously. A regulatory change increases governance cost, accelerates refresh cycles, and constrains feature design in one quarter. Multi-way sensitivity — sometimes called scenario analysis — captures these correlated movements.
The standard practice is to define three to five scenarios, each a named combination of assumption values: a base scenario, a pessimistic scenario (a recession-plus-regulatory-headwind combination), an optimistic scenario (faster adoption plus price reduction), and one or two specifically motivated scenarios (an EU AI Act high-risk classification scenario, a model-provider deprecation scenario). Each scenario produces a full rNPV outcome; the set of outcomes is presented as a scenario table.
The practitioner discipline is to name each scenario with a narrative, not just a parameter list. A scenario labelled “recession + regulatory tightening + talent shortage” with the specific assumption movements disclosed is more memorable, more debatable, and more decision-useful than a row of numbers. Narrative framing also forces the practitioner to consider whether the scenario is internally consistent — a recession scenario with unchanged token prices is implausible; a scenario that combines internally inconsistent assumptions is a weaker foundation than one that has been stress-tested for consistency.
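One way to keep the narrative attached to the parameters is to carry both in the scenario object itself and guard against inconsistent combinations. A minimal sketch, with hypothetical variable names and multipliers:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str        # the narrative label, not just a parameter list
    narrative: str   # why these assumption movements belong together
    moves: dict = field(default_factory=dict)  # variable -> multiplier vs plan

SCENARIOS = [
    Scenario(
        name="recession + regulatory tightening",
        narrative="Downturn depresses adoption and worsens unit economics; "
                  "new rules raise governance cost and force faster refreshes.",
        moves={"adoption_rate": 0.70, "unit_cost": 1.30,
               "governance_cost": 1.40, "refresh_frequency": 1.50},
    ),
    Scenario(
        name="faster adoption + provider price cuts",
        narrative="Strong user pull; token prices fall ahead of plan.",
        moves={"adoption_rate": 1.25, "unit_cost": 0.80},
    ),
]

# Internal-consistency guard: a recession scenario that leaves unit cost
# at or below plan is implausible, so flag it rather than silently accept it.
for s in SCENARIOS:
    if "recession" in s.name and s.moves.get("unit_cost", 1.0) <= 1.0:
        raise ValueError(f"inconsistent scenario: {s.name}")
```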
Scenario construction — the tri-scenario standard
The AITE-VDT convention for executive scenarios is three standard scenarios — pessimistic, base, optimistic — each defined by named conditions. The pessimistic case is not the worst-imaginable case; it is the worst-plausible case, typically around the 10th-to-20th percentile of the outcome distribution. The optimistic case sits around the 80th-to-90th percentile, and the base case at the 50th.
Each scenario specifies assumption movements across four or five variables. For the AI contract-review copilot from Article 6, the tri-scenario table might specify: adoption at 45% / 65% / 80%, token-price-trend at +15% / 0% / −20% over three years, unit-economics ratio at 1.3x / 1.0x / 0.85x of plan, drift rate at 1.8x / 1.0x / 0.6x of plan, and integration cost at 1.4x / 1.0x / 0.9x of plan.
The resulting rNPVs under each scenario produce the decision framing: pessimistic negative US$1.2M, base positive US$6.8M, optimistic positive US$14.1M. The CFO now knows that the investment is approved under base and optimistic conditions and marginal-to-negative under pessimistic conditions, and can decide whether the investment’s risk profile is acceptable for the project’s strategic weight.
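A sketch of the scenario loop behind such a table follows. The assumption movements are the ones specified above; the rnpv() model is a hypothetical stand-in and will not reproduce the article's exact figures, though it produces the same negative / positive / more-positive shape.

```python
# Tri-scenario evaluation. Assumption movements follow the table above;
# the rnpv() model is a toy stand-in, not the article's cash-flow model.

SCENARIOS = {
    #             adoption  token_trend  unit_econ  drift  integration
    "pessimistic": (0.45,    +0.15,      1.30,      1.8,   1.4),
    "base":        (0.65,     0.00,      1.00,      1.0,   1.0),
    "optimistic":  (0.80,    -0.20,      0.85,      0.6,   0.9),
}

def rnpv(adoption, token_trend, unit_econ, drift, integration):
    """Toy 3-year model: benefits scale with adoption and erode with drift;
    run cost scales with the unit-economics ratio and the token-price trend."""
    benefits = 24e6 * (adoption / 0.65) * (1 - 0.08 * drift)
    costs = 8e6 * unit_econ * (1 + token_trend) + 5e6 * integration
    return benefits - costs

for name, params in SCENARIOS.items():
    print(f"{name:>11}: rNPV = US${rnpv(*params) / 1e6:+.1f}M")
```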
[DIAGRAM: Timeline — scenario-evolution-over-36-months — three-line time-series plot showing cumulative cash flow under pessimistic, base, and optimistic scenarios over a 36-month horizon; breakeven crossings annotated for each scenario; primitive teaches the scenario comparison in time-series form rather than only as endpoint values.]
Probabilistic sensitivity — the Monte Carlo extension
Monte Carlo simulation (introduced in Article 7 for rNPV) is the probabilistic generalisation of scenario analysis. Rather than specifying three discrete scenarios, the practitioner specifies a distribution for each uncertain input and samples the joint distribution to produce a full outcome distribution.
The Monte Carlo output is reported as percentiles (p10, p50, p90) and as a probability statement (“the rNPV is positive in 87% of simulated scenarios”). The probability statement is the bridge between sensitivity analysis and the decision-theoretic treatment the CFO can use.
The correlation structure of the input distributions matters, as Article 7 noted. In scenario analysis the correlation is implicit in the scenario definitions; in Monte Carlo it becomes explicit. A practitioner running Monte Carlo specifies which inputs are positively correlated (in recession scenarios, adoption and unit economics deteriorate together), which are negatively correlated (heavy adoption improves unit economics via rising cache-hit rates), and which are independent. Sampling the inputs independently where correlations exist thins the joint tails and produces an over-optimistic distribution.
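A minimal correlated-sampling sketch follows, using normally distributed input shocks. The correlation values, means, and spreads are illustrative assumptions, and the toy rNPV model is the same stand-in as in the tri-scenario sketch above.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Illustrative correlation matrix for three uncertain inputs:
# adoption rate, unit-economics multiplier, drift multiplier.
# The negative adoption/unit-cost entry encodes "heavy adoption improves
# unit economics"; the positive unit-cost/drift entry encodes the
# downturn cluster in which both deteriorate together.
corr = np.array([
    [ 1.0, -0.5, -0.3],
    [-0.5,  1.0,  0.4],
    [-0.3,  0.4,  1.0],
])
means = np.array([0.62, 1.00, 1.00])
sds   = np.array([0.10, 0.15, 0.30])
cov = corr * np.outer(sds, sds)

adoption, unit_econ, drift = rng.multivariate_normal(means, cov, size=N).T
adoption = np.clip(adoption, 0.05, 1.0)   # keep samples physically plausible
unit_econ = np.clip(unit_econ, 0.5, None)
drift = np.clip(drift, 0.2, None)

# Same toy rNPV model as the tri-scenario sketch above.
benefits = 24e6 * (adoption / 0.65) * (1 - 0.08 * drift)
costs = 8e6 * unit_econ + 5e6
rnpv = benefits - costs

p10, p50, p90 = np.percentile(rnpv, [10, 50, 90])
print(f"p10 US${p10 / 1e6:+.1f}M  p50 US${p50 / 1e6:+.1f}M  p90 US${p90 / 1e6:+.1f}M")
print(f"P(rNPV > 0) = {(rnpv > 0).mean():.0%}")
```

Re-running with corr set to the identity matrix shows the effect the paragraph warns about: the percentile band narrows and P(rNPV > 0) rises, flattering the case.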
Communicating sensitivity to executives — three rules
Three rules improve executive comprehension of sensitivity analysis.
Lead with the range, not the point. The executive summary’s financial headline should be “rNPV of −US$1.2M to US$14.1M under plausible conditions, with a base case of US$6.8M”, not “rNPV of US$6.8M”. Leading with the range primes the reader to expect the variability and prevents the false-certainty effect.
Name the drivers, not just the numbers. The one-line summary of sensitivity is “the outcome is most sensitive to adoption rate and token-price trend; these two variables account for 62% of the range”, a figure that falls straight out of the tornado spans (see the sketch after these rules). This one-liner gives the executive an actionable insight — where to focus monitoring attention — that a table of numbers does not.
Show the scenarios’ conditions as narrative. The pessimistic-scenario assumptions become “if a recession reduces adoption to 45% and token costs trend upward, the feature’s rNPV falls to −US$1.2M.” The narrative form is memorable; the parameter table is not.
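The driver-share figure in rule two is a direct computation over the one-way spans; a minimal sketch, with hypothetical span values chosen only to illustrate the arithmetic:

```python
# Share of the total one-way range attributable to the top drivers,
# computed from hypothetical tornado spans (US$, illustrative only).
spans = {
    "adoption_rate": 9.6e6,
    "token_price_trend": 4.1e6,
    "unit_econ_ratio": 3.6e6,
    "drift_rate": 2.6e6,
    "integration_cost": 2.2e6,
}
total = sum(spans.values())
top_two = sorted(spans.values(), reverse=True)[:2]
print(f"top-two driver share: {sum(top_two) / total:.0%}")  # ~62%
```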
Worked real-world example — Google LaMDA / Bard launch
The February 2023 Google LaMDA / Bard launch event produced a widely reported factual error: the promotional demo showed Bard giving an incorrect answer about the James Webb Space Telescope, and in the aftermath Google’s parent Alphabet lost approximately US$100B in market capitalisation in a single trading day, with the Financial Times, Wall Street Journal, and multiple other reputable outlets documenting the incident and its market impact.3
The AITE-VDT teaching point is not that Google’s sensitivity analysis would have prevented the demo error — it would not have. The teaching point is that the scale of the market-cap impact revealed how acutely market pricing was responding to generative-AI capability claims, and that treating “what if a public demonstration produces an adverse response” as a named scenario was the responsible discipline for any AI-visible enterprise in that period. Any business case that assumed stable reputational conditions without a reputational-event scenario fell demonstrably short of this discipline.
The sensitivity discipline’s relationship to the measurement plan
A sensitivity analysis produces actionable monitoring priorities. The variables to which the outcome is most sensitive are the variables the measurement plan (Article 4) should instrument most thoroughly and the variables the leading-indicator dashboard (Article 5) should surface most prominently. A sensitivity analysis disconnected from the measurement plan is an academic exercise; a sensitivity analysis that drives instrumentation decisions is the discipline’s operational form.
Summary
Sensitivity analysis corrects the structural distortion of single-point estimates by revealing which assumptions matter most and producing outcome ranges rather than points. One-way sensitivity via the tornado chart answers “which variable matters”; multi-way scenario analysis captures correlated risk movements; probabilistic Monte Carlo generalises both into a full outcome distribution. Executive communication leads with the range, names the drivers, and frames scenarios as narratives. The sensitivity analysis drives the measurement plan’s instrumentation priorities. Unit 2 closes with this article; Unit 3 (Articles 12–17) opens the measurement-framework discipline that executes on the priorities sensitivity analysis identifies.
Cross-references to the COMPEL Core Stream:
- EATP-Level-2/M2.5-Art04-Business-Value-and-ROI-Quantification.md — core ROI methodology that sensitivity analysis refines
- EATP-Level-2/M2.5-Art14-Building-the-AI-Business-Case-Beyond-Simple-ROI.md — business case discipline where sensitivity analysis lives as Part 5 supporting detail
- EATE-Level-3/M3.5-Art15-Strategic-Value-Realization-Risk-Adjusted-Value-Frameworks.md — strategic risk-adjusted framework at governance-professional depth
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. Gartner, “2024 CIO and Technology Executive Survey” and AI Hype Cycle methodology (2023–2025), https://www.gartner.com/en/publications/cio-agenda (accessed 2026-04-19).
2. Boston Consulting Group, “AI at Scale” research series, https://www.bcg.com/capabilities/artificial-intelligence/ai-at-scale (accessed 2026-04-19).
3. Richard Waters, “Google shares lose $100bn after AI chatbot error”, Financial Times (February 8, 2023), https://www.ft.com/content/b1c4a21b-f94b-435d-9eb9-fd6150b8b03d; Miles Kruppa and Jessica Toonkel, “Alphabet shares fall after ChatGPT rival Bard demo flops”, Wall Street Journal (February 8, 2023), https://www.wsj.com/articles/google-parent-alphabets-shares-drop-more-than-7-amid-ai-worries-11675887334 (accessed 2026-04-19).