AITE M1.3-Art74 v1.0 Reviewed 2026-04-06 Open Access

Template 4: Value Realization Report (VRR)



COMPEL Specialization — AITE-VDT: AI Value & Analytics Expert — Template 4 of 5


This template produces a VRR conforming to Article 16’s six-section specification. The output is typically 8–15 pages for a single feature: executive-summary consumers read only Section 1, audit-committee consumers read the full document, and regulators consume the document plus appendices.


Value Realization Report: [Feature Name]

Feature: [Short name]
Reporting period: [Q1 2026 / Month YYYY]
Report version: 1.0
Date of this report: [Date]
Prepared by: [AI value lead, name]
Reviewed by: [List of reviewers: FinOps lead, compliance, communications]
Aligned to: ISO 42001 Clause 9.3; NIST AI RMF MEASURE function


1. Executive summary

Feature: [One sentence: what the feature does, who uses it, what outcome it targets.]

Reporting period realized value: [$ amount — one number, with counterfactual method named in parens]

Status: [Green / Yellow / Red]

Primary finding: [One sentence — the headline of the quarter]

Decision requested (if applicable): [One sentence — the specific decision the reader is being asked to make, with date.]

Portfolio context

  • Feature’s position on quarterly portfolio scorecard: [Reference]
  • Cumulative realized value to date: [$]
  • Cumulative investment to date: [$]
  • Current cumulative payback ratio: [X.Y]
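The cumulative payback ratio above is a simple quotient of the two figures preceding it; a minimal sketch (the dollar figures are illustrative, not drawn from any real report):

```python
def cumulative_payback_ratio(realized_to_date: float, invested_to_date: float) -> float:
    """Cumulative payback ratio: realized value divided by investment to date.

    A ratio above 1.0 means the feature has recouped its cumulative cost.
    """
    if invested_to_date <= 0:
        raise ValueError("cumulative investment must be positive")
    return realized_to_date / invested_to_date

# Illustrative figures only:
ratio = cumulative_payback_ratio(realized_to_date=4_200_000, invested_to_date=3_000_000)
print(f"{ratio:.1f}")  # 1.4
```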

What changed since last report

[Two to three sentences on significant changes: launches, incidents, decisions, environment shifts.]


2. KPI tree and realized values

KPI tree reference: [Link or embedded visualization of the tree]

Outcome-level (Level 1): [Name]

| Metric | Period value | Prior period | Target | Status |
| [Outcome metric] | [Value] | [Value] | [Target] | [🟢🟡🔴] |

Driver-level (Level 2):

| Driver | Period value | Prior period | Target | Status |
| [Driver 1] | [Value] | [Value] | [Target] | [🟢🟡🔴] |
| [Driver 2] | [Value] | [Value] | [Target] | [🟢🟡🔴] |
| [Driver 3] | [Value] | [Value] | [Target] | [🟢🟡🔴] |

Leaf metrics (Level 3): [Table or embedded dashboard link]
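The three-level structure above (outcome → drivers → leaf metrics) can be sketched as a small tree with a worst-of-children status roll-up. The green/yellow/red thresholds and the roll-up rule here are illustrative assumptions, not a COMPEL-mandated scale:

```python
from dataclasses import dataclass, field

RANK = {"green": 0, "yellow": 1, "red": 2}

@dataclass
class KpiNode:
    # One node in a KPI tree: outcome (L1) -> drivers (L2) -> leaf metrics (L3).
    name: str
    value: float = 0.0
    target: float = 1.0
    children: list["KpiNode"] = field(default_factory=list)

    def own_status(self) -> str:
        # Assumed thresholds: green at/above target, yellow within 10%, red below.
        if self.value >= self.target:
            return "green"
        if self.value >= 0.9 * self.target:
            return "yellow"
        return "red"

    def status(self) -> str:
        # Roll-up rule: a node is no healthier than its worst descendant.
        statuses = [self.own_status()] + [c.status() for c in self.children]
        return max(statuses, key=RANK.__getitem__)

# Hypothetical tree: the outcome metric is on target, but a driver is red,
# so the rolled-up status is red.
tree = KpiNode("Outcome", 105, 100, [KpiNode("Driver 1", 80, 100)])
print(tree.status())  # red
```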

Evidence notes

  • Reproducibility: [Statement that every metric is reproducible from source data, with any gaps disclosed.]
  • Source-system status: [Any source-system issues during the period that affected metric quality.]
  • Methodology changes: [Any changes to metric computation this period and their impact.]

3. Counterfactual narrative

Counterfactual method: [A/B test / DiD / RDD / Synthetic control / PSM / Pre/post]

Rationale for method choice: [One paragraph — why this design is appropriate for this feature’s rollout shape and data availability.]

Specification: [Brief statement of the model specification — the equation, the estimator, the clustering.]

Point estimate: [X% or $X]

Uncertainty band: [95% CI: [low] — [high]; or p10/p50/p90 from Monte Carlo]

Robustness checks performed:

  • [Parallel-trends / placebo / bandwidth / leave-one-out / Rosenbaum bounds]
  • [Result of each check]

Known limitations:

  • [Limitation 1 and its disclosure]
  • [Limitation 2 and its disclosure]

Narrative interpretation: [One paragraph that translates the quantitative result into business language, with uncertainty preserved. Example: “Under the DiD specification, the feature contributed a 6% improvement in the primary metric, with 95% CI 3.5%–8.5%. At current scale, this translates to approximately $14M in annualized realized value. The estimate is robust to excluding the non-randomized pilot office and to alternative estimators; confidence is lower in practice areas with below-average volume, where separate analyses are ongoing.”]
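For the two-period DiD case, the point estimate reduces to a difference of differences: the treated group's pre/post change minus the control group's. A minimal sketch with hypothetical unit-level values (unrelated to the 6% figure in the example narrative):

```python
from statistics import mean

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-period difference-in-differences point estimate.

    Under the parallel-trends assumption, the control group's pre/post change
    proxies for what the treated group would have done without the feature.
    """
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical unit-level metric values:
effect = diff_in_diff(
    treat_pre=[10.0, 11.0, 9.0],  treat_post=[13.0, 14.0, 12.0],
    ctrl_pre=[10.0, 10.5, 9.5],   ctrl_post=[11.0, 11.5, 10.5],
)
print(effect)  # 2.0
```

A full specification would add covariates, clustering, and the robustness checks listed above; this sketch only shows the core contrast.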


4. Financial summary

Realized value

| Period | Realized value | Attribution model |
| [This period] | [$] | [Shapley / Linear / etc.] |
| [Prior period] | [$] | [Same] |
| [Cumulative to date] | [$] | [Same] |

Attribution sensitivity: [Same figures under one or two alternative attribution models, if material.]
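Where Shapley is the named attribution model, realized value is split by averaging each feature's marginal contribution over all orderings of the contributing features. A sketch with a hypothetical two-feature coalition value function (exact enumeration like this is only feasible for small feature sets):

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley attribution: each player's average marginal contribution
    across every ordering of the players.

    `value` maps a frozenset of players to the realized value of that
    coalition; illustrative exact computation only.
    """
    out = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            out[p] += value(with_p) - value(coalition)
            coalition = with_p
    return {p: total / len(orders) for p, total in out.items()}

# Hypothetical: features A and B jointly drive $10M, but only $2M and $4M alone.
v = {frozenset(): 0, frozenset("A"): 2, frozenset("B"): 4, frozenset("AB"): 10}
print(shapley_values("AB", v.get))  # {'A': 4.0, 'B': 6.0}
```

The synergy ($4M beyond the standalone contributions) is split evenly, which is the property that makes Shapley a common choice over naive linear splits.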

Total cost of ownership (TCO)

| Component | This period | Prior period | Cumulative |
| Build (amortized) | [$] | [$] | [$] |
| Run (inference + platform) | [$] | [$] | [$] |
| Model refresh | [$] | [$] | [$] |
| Governance | [$] | [$] | [$] |
| Total TCO | [$] | [$] | [$] |

rNPV trajectory

  • Original business-case rNPV: [$]
  • Current estimated rNPV based on realized-data update: [$]
  • Variance: [$ or %]
  • Driver of variance: [Higher/lower benefit than projected / cost overrun / timing shift]
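The rNPV update can be sketched as probability-weighted cash flows discounted to present value, recomputed with realized data and compared to the original case. The single-probability form and all figures below are illustrative assumptions, not the business case's actual model:

```python
def rnpv(cashflows, probability, discount_rate):
    """Risk-adjusted NPV: probability-weighted cash flows discounted to present.

    `cashflows[t]` is the net cash flow at the end of year t+1; `probability`
    is a single success probability applied to the whole stream (a deliberate
    simplification for illustration).
    """
    return sum(
        probability * cf / (1 + discount_rate) ** (t + 1)
        for t, cf in enumerate(cashflows)
    )

# Hypothetical: realized data raised both the year-1 benefit and the
# success probability, so the updated rNPV exceeds the original case.
original = rnpv([1_000_000, 1_000_000], probability=0.8, discount_rate=0.10)
updated  = rnpv([1_200_000, 1_000_000], probability=0.9, discount_rate=0.10)
variance = updated - original  # the "Variance" line above, in dollars
```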

Budget-vs-actual (compute)

  • Budget this period: [$]
  • Actual this period: [$]
  • Variance: [$ or %]
  • Enforcement actions during period: [Alert / Throttle / Reject / Review]
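The Alert / Throttle / Reject / Review ladder implies a tiered mapping from the spend-to-budget ratio to an action. A sketch with assumed thresholds (real policies set their own tiers, typically in the compute budget document referenced in Appendix C):

```python
def enforcement_action(actual: float, budget: float) -> str:
    """Map period-to-date compute spend to a tiered FinOps action.

    Thresholds here are illustrative assumptions, not COMPEL-specified values.
    """
    ratio = actual / budget
    if ratio < 0.80:
        return "none"
    if ratio < 1.00:
        return "alert"      # approaching budget
    if ratio < 1.20:
        return "throttle"   # over budget: rate-limit non-critical workloads
    return "reject"         # hard stop pending review

print(enforcement_action(95_000, 100_000))   # alert
print(enforcement_action(130_000, 100_000))  # reject
```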

Sustainability-Adjusted Value (SAV)

  • Core realized value: [$]
  • Externality adjustment: [$ — with breakdown if material]
  • SAV: [$]
  • Trajectory vs. prior period: [+/-%]
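SAV as laid out above is the core realized value plus signed externality adjustments; a minimal sketch with a hypothetical breakdown (the component names and amounts are invented for illustration):

```python
def sustainability_adjusted_value(core_value: float, externalities: dict) -> float:
    """SAV: core realized value plus signed externality adjustments.

    Externality entries are negative for costs (e.g. priced carbon from
    inference compute) and positive for co-benefits.
    """
    return core_value + sum(externalities.values())

# Hypothetical breakdown:
sav = sustainability_adjusted_value(
    core_value=14_000_000,
    externalities={"carbon_cost_of_inference": -250_000, "paper_reduction": 40_000},
)
print(sav)  # 13790000
```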

5. Risk flags

Value risks

| Risk | Probability | Impact | Mitigation status | Owner |
| [Drift signal X] | [L/M/H] | [L/M/H] | [Monitoring active] | [Role] |
| [Adoption plateau] | [L/M/H] | [L/M/H] | [Change-mgmt plan] | [Role] |

Operational risks

| Risk | Probability | Impact | Mitigation status | Owner |
| [Cost overrun projected] | [L/M/H] | [L/M/H] | [FinOps actions] | [Role] |
| [Evaluation-harness gap] | [L/M/H] | [L/M/H] | [Remediation plan] | [Role] |

Governance risks

| Risk | Probability | Impact | Mitigation status | Owner |
| [Regulatory change] | [L/M/H] | [L/M/H] | [Legal review] | [Role] |
| [Control gap flagged] | [L/M/H] | [L/M/H] | [Remediation underway] | [Role] |
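The L/M/H probability-and-impact pairs in the tables above can be collapsed into a review priority with a simple multiplicative matrix; the scoring and thresholds here are illustrative, not a COMPEL-prescribed scale:

```python
SCORE = {"L": 1, "M": 2, "H": 3}

def risk_priority(probability: str, impact: str) -> str:
    """Collapse an L/M/H probability x impact pair into a review priority.

    Assumed tiers: score >= 6 escalates, 3-5 is monitored, below 3 is accepted.
    """
    s = SCORE[probability] * SCORE[impact]
    if s >= 6:
        return "escalate"
    if s >= 3:
        return "monitor"
    return "accept"

print(risk_priority("H", "H"))  # escalate
print(risk_priority("L", "M"))  # accept
```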

Risks materialized this period

[List any risks that were flagged as hypothetical in the last report and have since materialized.]

Risks resolved this period

[List any risks that have been closed.]


6. Recommendation

Recommendation: [Continue / change / stop]

Detail:

  • [Continue:] [One paragraph — why, with reference to the analysis]
  • [Change:] [One paragraph — what scope, design, or operational change is proposed and why]
  • [Stop:] [One paragraph — sunset case reference; see Template 5 equivalent for sunset case structure]

Supporting evidence: [Reference sections 2, 3, 4, 5 of this report]

Dissent (if any): [Any reviewer whose disagreement should be recorded in the decision log]

Decision log entry

| Who | Decision | Date | Supporting evidence | Dissent |
| [Decision-maker] | [Continue/change/stop + details] | [Date] | [Reference] | [If any] |

Appendix A — Detailed tables

[Full metric table with all leaf metrics, all periods, and full computation trail. Multi-page if needed.]


Appendix B — Analysis methodology detail

[Full mathematical specification of the counterfactual analysis, diagnostic-test results, robustness-check outputs, sensitivity analyses.]


Appendix C — Source documents

  • Measurement plan: [Reference]
  • Business case (original): [Reference]
  • Business case (most recent update): [Reference]
  • KPI tree: [Reference]
  • Compute budget: [Reference]
  • Prior VRRs: [References]

Appendix D — Regulatory and standards alignment

| Standard | Clause | How this VRR addresses it |
| ISO/IEC 42001:2023 | Clause 9.3 (management review outputs) | Sections 1, 2, 3, 5, 6 |
| NIST AI RMF 1.0 | MEASURE 2.1 (standardized methods over time) | Section 3 |
| NIST AI RMF 1.0 | MEASURE 4.1 (communication) | Entire document |
| EU AI Act (if applicable) | Article 15 (accuracy reporting) | Sections 2, 3 |
| CSRD (if applicable) | ESRS E1 / S1 | Section 4 (SAV) |