AITE M1.3-Art71 v1.0 Reviewed 2026-04-06 Open Access
M1.3 The 20-Domain Maturity Model
AITF · Foundations

Template 1: Measurement Plan (11 Sections)



COMPEL Specialization — AITE-VDT: AI Value & Analytics Expert Template 1 of 5


This is a reusable measurement-plan template built to Article 4's specification. Use it as a Word/Google Doc or in Markdown. Replace bracketed placeholders [like this] with feature-specific content. Keep the section headings and numbering; they align to ISO 42001 Clause 9.1 audit requirements.


Measurement Plan: [Feature Name]

Feature: [Short name and description]
Document owner: [Role, name]
Version: 1.0
Pre-registration date: [Date locked; changes after this date require change-control]
Approvers: [Feature lead, AI value lead, FinOps lead, Sponsor]
Aligned to: ISO 42001 Clause 9.1; NIST AI RMF MEASURE 1.1


1. Hypothesis

Primary hypothesis (falsifiable): [Deploying [feature] to [population] will change [primary metric] by [expected magnitude], holding [key conditions] constant.]

Business theory of change: [Why this feature should produce the hypothesized effect — the causal chain from feature action to business outcome.]

Scope boundaries: [What population/segment/geography this hypothesis applies to, and what it does not.]


2. Primary metric

Metric name: [Name]
Definition: [Precise computation: what is counted, from which source system, over what period.]
Aggregation rule: [Sum, mean, median, rate, percentile, etc.]
Source system: [Name of data source]
Refresh cadence: [Real-time, hourly, daily, weekly]
Known limitations: [Data-quality issues, coverage gaps, definitional ambiguities.]
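To make the fields above machine-readable (for example, to feed the pre-registration record in Section 8), the specification can be captured as a small structure. A sketch; the class name, field names, and the sample metric are illustrative, not part of the template:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricSpec:
    """Illustrative container for one metric's pre-registered definition."""
    name: str
    definition: str        # precise computation: what is counted, from where, over what period
    aggregation: str       # sum, mean, median, rate, percentile, ...
    source_system: str
    refresh_cadence: str   # real-time, hourly, daily, weekly
    known_limitations: str

# Hypothetical primary metric for a support-automation feature.
primary = MetricSpec(
    name="ticket_deflection_rate",
    definition="Deflected tickets / total tickets, from CRM events, trailing 28 days",
    aggregation="rate",
    source_system="CRM event stream",
    refresh_cadence="daily",
    known_limitations="Deflection attribution lags 24h; excludes phone channel",
)
print(primary.name)
```

Freezing the dataclass (`frozen=True`) matches the pre-registration intent: the definition cannot be mutated in place after it is created.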


3. Secondary metrics

List 3–5 secondary metrics. Each with the same specification depth as the primary.

Secondary metric 1 — [Name]

[…]

Secondary metric 2 — [Name]

[…]

Secondary metric 3 — [Name]

[…]

(Add more entries if needed; do not exceed 5 secondary metrics.)


4. Data sources

| # | Source | Owner | Refresh cadence | Feeds into | Known limitations |
|---|--------|-------|-----------------|------------|-------------------|
| 1 | [Source system] | [Team/role] | [Daily/etc] | [Primary/secondary] | […] |
| 2 | […] | […] | […] | […] | […] |

5. Collection cadence

Measurement cadence: [How often are metrics computed?]
Reporting cadence: [How often are metrics reviewed by each audience?]

| Audience | Cadence | Format |
|----------|---------|--------|
| Feature team | [Weekly] | [Dashboard drill-down] |
| AI value lead | [Weekly] | [Dashboard summary] |
| FinOps lead | [Monthly] | [Cost-aware report] |
| Steering committee | [Quarterly] | [VRR] |
| Board / audit committee | [Quarterly] | [Board-grade summary] |

6. Analysis method

Selected causal-inference design: [A/B test / DiD / RDD / Synthetic control / PSM / Pre/post]

Rationale (Article 18 six-question tree):

  • Q1 randomize possible? [Yes/No; why]
  • Q2 staged rollout with timing variation? [Yes/No]
  • Q3 threshold-driven assignment? [Yes/No]
  • Q4 unique treated unit? [Yes/No]
  • Q5 rich observational data on plausible confounders? [Yes/No]
  • Final choice: [Named design]
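The question tree above collapses to a first-"yes"-wins lookup over the designs listed in this section. A minimal sketch; the function name and boolean parameters are illustrative, and the Q-to-design mapping assumes the questions are checked in the order shown:

```python
def choose_design(randomize_possible, staged_rollout, threshold_assignment,
                  unique_treated_unit, rich_observational_data):
    """Sketch of the Article 18 question tree: the first 'yes' selects a design;
    when every answer is 'no', fall back to a pre/post comparison."""
    if randomize_possible:
        return "A/B test"
    if staged_rollout:
        return "Difference-in-differences"
    if threshold_assignment:
        return "Regression discontinuity"
    if unique_treated_unit:
        return "Synthetic control"
    if rich_observational_data:
        return "Propensity-score matching"
    return "Pre/post comparison"

# Example: randomization impossible, but a staged rollout exists.
print(choose_design(False, True, False, False, True))  # Difference-in-differences
```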

Specification: [Mathematical specification or verbal description of the analysis model.]

Known limitations of the chosen design: [Parallel-trends assumption / manipulation risk / matching-on-observables limit / etc. Explicit disclosure.]

Robustness checks: [Placebo tests, bandwidth sensitivity, leave-one-out, Rosenbaum bounds, etc. As applicable.]

Estimator: [Two-way fixed effects / Callaway-Sant’Anna / local-linear with CCT / etc.]

Minimum detectable effect (MDE): [At 80% power and α = 0.05, the smallest effect the design can detect, computed from sample size and variance.]
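For the common case of a two-arm comparison of means with equal arm sizes, the MDE at 80% power and two-sided α = 0.05 has a standard closed form. A minimal stdlib-only sketch; σ and the per-arm sample size are hypothetical inputs to be replaced with the feature's own numbers:

```python
from math import sqrt
from statistics import NormalDist

def mde_two_sample(sigma, n_per_arm, alpha=0.05, power=0.80):
    """Minimum detectable absolute difference in means for a two-arm test
    with equal arm sizes and a two-sided alpha:

        MDE = (z_{1-alpha/2} + z_{power}) * sigma * sqrt(2 / n_per_arm)
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)           # 0.84 for 80% power
    return (z_alpha + z_power) * sigma * sqrt(2 / n_per_arm)

# Hypothetical: outcome std dev 1.0, 1,000 units per arm.
print(round(mde_two_sample(sigma=1.0, n_per_arm=1000), 4))  # 0.1253
```

Effects smaller than this value are unlikely to be distinguished from noise at the stated power, so a pre-registered continue threshold below the MDE signals an underpowered design.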


7. Decision rule

Continue if: [Primary metric meets or exceeds threshold; secondary metrics within tolerance; risk flags none or yellow.]

Modify if: [Primary metric below threshold but above floor; one secondary metric flags concern; one or more risk flags red.]

Retire if: [Primary metric below floor; causal effect insignificant; multiple risk flags red; TCO exceeds realized value for two consecutive review periods.]

Thresholds (numeric):

  • Continue threshold: [≥ X% effect on primary metric]
  • Modify threshold: [Between X% and Y% effect]
  • Retire threshold: [< Y% effect or negative]
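The three-way rule above can be expressed as a small gate function so the same thresholds are applied at every review. A sketch under simplifying assumptions (risk flags reduced to a count of reds; TCO and secondary-metric checks omitted); the thresholds shown are placeholders for the pre-registered numbers:

```python
def decision_gate(effect_pct, continue_thr, retire_thr, red_flags):
    """Map a measured primary-metric effect and red-flag count to a
    Continue / Modify / Retire decision, checking retire conditions first."""
    if effect_pct < retire_thr or red_flags >= 2:
        return "Retire"       # below floor, or multiple red flags
    if effect_pct >= continue_thr and red_flags == 0:
        return "Continue"     # meets threshold, no red flags
    return "Modify"           # above floor but below threshold, or one red flag

# Hypothetical thresholds: continue at >= 5% effect, retire below 1%.
print(decision_gate(effect_pct=6.0, continue_thr=5.0, retire_thr=1.0, red_flags=0))
```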

8. Pre-registration

Pre-registration record location: [URL or document reference]
Date locked: [Date]
Change-control process: [Who can authorize changes; what record is kept; how changes propagate to reporting.]

Pre-registered items:

  • Primary hypothesis
  • Primary metric definition
  • Analysis method and specification
  • MDE
  • Decision rule thresholds
  • Stopping rules (if applicable)
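One lightweight way to make the lock verifiable is to fingerprint the pre-registered items: any later edit changes the digest, which the change-control record can detect. A sketch, assuming the plan is serializable to JSON; the field names and values are illustrative:

```python
import hashlib
import json

def lock_hash(plan: dict) -> str:
    """SHA-256 fingerprint of the pre-registered items. Canonical JSON
    (sorted keys, no whitespace) keeps the digest stable across key order."""
    canonical = json.dumps(plan, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical plan mirroring the pre-registered item list above.
plan = {
    "primary_hypothesis": "...",
    "primary_metric": "...",
    "analysis_method": "DiD, two-way fixed effects",
    "mde": 0.1253,
    "decision_thresholds": {"continue": 5.0, "retire": 1.0},
}
print(lock_hash(plan)[:12])  # record the digest alongside the lock date
```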

9. Review owners

| Review | Owner (role) | Frequency |
|--------|--------------|-----------|
| Measurement plan maintenance | [AI value lead] | Quarterly |
| Operational review | [Feature lead] | Weekly |
| Value review | [AI value lead + FinOps lead] | Monthly |
| Stage-gate review | [Program sponsor] | At each COMPEL gate |
| Board-grade review | [AI value lead + CFO] | Quarterly |

10. Risk flags

List the 3 most significant measurement risks. For each:

Risk 1 — [Name]

Description: [What could go wrong with measurement or interpretation]
Probability: [Low / Medium / High]
Impact: [Low / Medium / High]
Mitigation: [Specific actions]
Escalation trigger: [What signal escalates this risk]
Owner: [Role]

Risk 2 — [Name]

[…]

Risk 3 — [Name]

[…]


11. Escalation path

Triggering conditions: [Specific signals that require escalation beyond the standard review cadence.]

Escalation chain:

  1. [Feature lead]
  2. [AI value lead]
  3. [CFO / Program sponsor]
  4. [Steering committee]
  5. [Board / audit committee]

Timeline: [How quickly escalation must occur after trigger.]

Supporting evidence required: [Minimum evidence pack for escalation: data, counterfactual, recommendation.]


Sign-offs

| Role | Name | Signature / Date |
|------|------|------------------|
| Feature lead | [Name] | |
| AI value lead | [Name] | |
| FinOps lead | [Name] | |
| Sponsor | [Name] | |
| AI Governance reviewer | [Name] | |

Appendix A — Regulatory and standards alignment

| Standard | Clause / Subcategory | How this plan addresses it |
|----------|----------------------|----------------------------|
| ISO/IEC 42001:2023 | Clause 9.1 | Sections 2, 3, 4, 5 |
| NIST AI RMF 1.0 | MEASURE 1.1 | Sections 2, 3, 6 |
| NIST AI RMF 1.0 | MEASURE 2.1 | Sections 5, 6 |
| GAO AI Accountability | Performance monitoring | Sections 5, 7, 9 |
| EU AI Act (if applicable) | Article 15 (accuracy) | Section 6 |

Appendix B — Change log

| Version | Date | Changed by | Change | Reason |
|---------|------|------------|--------|--------|
| 1.0 | [Date] | [Name] | Initial version | Pre-registration |
| 1.1 | [Date] | [Name] | [Change description] | [Rationale] |