AITF M1.2-Art29 v1.0 Reviewed 2026-04-06 Open Access
M1.2 The COMPEL Six-Stage Lifecycle

Calibrate: Strategic Inputs You Must Gather Before You Begin



COMPEL Certification Body of Knowledge — Module 1.2: The COMPEL Six-Stage Lifecycle Definition: Article 29 — Strategic Inputs to Calibrate


Calibrate is the entry stage of the COMPEL lifecycle, but it is not where the organizational work begins. Every mature transformation discipline — PMBOK®, SAFe®, TOGAF®, COBIT® — recognizes that the act of measurement is preceded by the act of framing. You cannot calibrate against nothing. You calibrate against an explicit set of strategic intentions, financial guardrails, regulatory obligations, and capability baselines that the enterprise has already committed to. When those inputs are missing, vague, or contradictory, the maturity assessment becomes an exercise in producing numbers that no one will trust or act on.

This article walks through the eight strategic inputs that COMPEL Calibrate consumes, why each one matters, who in the organization provides it, what an acceptable artifact looks like, and which framework it traces back to. It closes with a practical sequence for gathering these inputs in the two to four weeks before the formal Calibrate engagement begins.

Why Calibrate Consumes Upstream Inputs

Industry precedent is clear. PMBOK 7 lists Organizational Process Assets and Enterprise Environmental Factors as required inputs to every initiating process. SAFe defines Strategic Themes, Portfolio Vision, and Lean Budget guardrails as upstream artifacts that flow into any portfolio-level planning event. TOGAF Phase A — Architecture Vision — explicitly requires the Baseline Architecture and the strategic direction of the enterprise as preconditions before scoping work begins. COBIT 2019 frames its design factors as inputs that govern how the governance system itself is configured.

COMPEL adopts the same discipline. The Calibrate stage refuses to be treated as an open-ended exploration. It is a structured diagnostic that compares an honestly observed current state against an explicitly declared strategic frame. Both halves must exist. Strategic inputs supply the frame.

The Eight Strategic Inputs

1. Corporate / Business Strategy

What it is. The enterprise-level strategic direction that AI transformation must serve — typically a three-year strategic plan, mission and vision statement, or balanced scorecard that articulates where the business intends to compete and how it intends to win.

Why it matters. Calibrate uses this to ensure the maturity assessment and the use case prioritization are tied to real business outcomes rather than to technology novelty. Without it, the COMPEL backlog becomes a collection of interesting experiments instead of a portfolio of strategic bets.

Who provides it. The CEO’s office, head of strategy, or corporate development. In smaller organizations, the founding team.

Example artifact. A signed 3-year strategic plan with measurable strategic objectives.

Framework citation. PMBOK 7 (Organizational Process Assets), SAFe Enterprise Strategy, TOGAF Phase A (Architecture Vision).

2. Portfolio Strategic Themes

What it is. The thematic investment areas that connect enterprise strategy to portfolio-level execution. In SAFe terms, strategic themes are the differentiated business objectives that anchor every portfolio decision.

Why it matters. Strategic themes determine which AI capabilities the organization should build, buy, or defer during Calibrate prioritization. They are the bridge between abstract strategy and concrete capability investment.

Who provides it. The portfolio management office, lean portfolio management function, or — where these do not exist — the executive sponsor and the head of strategy jointly.

Example artifact. SAFe Portfolio Canvas, OKR cascade, or strategic theme statements approved by the executive committee.

Framework citation. SAFe Portfolio (Strategic Themes).

3. Portfolio Vision

What it is. A three- to five-year directional view of the AI portfolio target state — what the enterprise’s AI footprint should look like at the end of the planning horizon, expressed in capability and outcome terms.

Why it matters. Calibrate uses portfolio vision to set aspirational maturity targets and to define the gap between current state and intended future state. Without a target, the maturity score is just a number; with a target, it is a distance to travel.

Who provides it. The CIO, chief AI officer, or transformation sponsor in collaboration with business unit leaders.

Example artifact. A 3-5 year AI portfolio vision deck or future-state capability map.

Framework citation. SAFe Portfolio Vision, TOGAF Architecture Vision.
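The "distance to travel" framing can be sketched in a few lines. This is a hypothetical illustration only — the capability names and the 1–5 scale are assumptions, not a scoring formula COMPEL prescribes:

```python
# Hypothetical sketch: maturity gap as the per-capability distance between
# the assessed current state and the portfolio-vision target state.
# Capability names and the 1-5 scale are illustrative assumptions.

current = {"data_governance": 2, "mlops": 1, "talent": 3}
target = {"data_governance": 4, "mlops": 3, "talent": 4}

def maturity_gaps(current, target):
    """Return per-capability gaps (target minus current), largest first."""
    gaps = {cap: target[cap] - current.get(cap, 0) for cap in target}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

print(maturity_gaps(current, target))
# → [('data_governance', 2), ('mlops', 2), ('talent', 1)]
```

Without the target half of this pair, only `current` exists and there is nothing to subtract — which is the article's point about a score being "just a number."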

4. Funding Guidelines and Investment Guardrails

What it is. The financial envelope and lean budget guardrails that constrain AI investment decisions over the planning horizon — including capital allocation policy and any fixed investment horizons.

Why it matters. Calibrate consumes these to keep the use case backlog inside realistic funding horizons. A backlog of 200 use cases against a budget that can fund six is not a portfolio — it is a wishlist.

Who provides it. The CFO, finance business partner for technology, or the lean portfolio management function.

Example artifact. Lean budget guardrails document, AI investment horizons matrix, or capital allocation policy statement.

Framework citation. SAFe Lean Budgets, PMBOK Funding Limit Reconciliation.
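The wishlist-versus-portfolio distinction amounts to a budget-constrained cut of the backlog. A minimal sketch, assuming illustrative use case names, costs, and priority scores (none of these come from COMPEL itself):

```python
# Hypothetical sketch: cut the use case backlog to what the funding
# envelope can actually carry, in priority order. All figures, field
# names, and the greedy selection rule are illustrative assumptions.

backlog = [
    {"name": "invoice-triage", "cost": 1.2, "score": 9},
    {"name": "churn-model", "cost": 0.8, "score": 7},
    {"name": "chatbot", "cost": 2.5, "score": 6},
    {"name": "doc-search", "cost": 0.5, "score": 8},
]

def fundable(backlog, budget):
    """Greedy cut: fund the highest-scoring use cases that fit the budget."""
    funded, spent = [], 0.0
    for uc in sorted(backlog, key=lambda u: u["score"], reverse=True):
        if spent + uc["cost"] <= budget:
            funded.append(uc["name"])
            spent += uc["cost"]
    return funded

print(fundable(backlog, budget=2.5))
# → ['invoice-triage', 'doc-search', 'churn-model']
```

The point is not the selection heuristic — any prioritization scheme works — but that the budget guardrail is an explicit input to the cut, not an afterthought.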

5. Risk Appetite and Tolerance Statement

What it is. Board-approved boundaries that articulate how much AI risk the organization is willing to accept — typically expressed as tolerance bands across categories such as model failure, bias, privacy, security, and reputational exposure.

Why it matters. Calibrate uses this statement to set risk thresholds in the maturity model and to flag use cases that exceed tolerance early, before they consume design effort. It is also the anchor for the Regulatory Exposure Register.

Who provides it. The board risk committee, chief risk officer, or — in regulated industries — the compliance and risk function jointly.

Example artifact. A board-approved AI risk appetite statement with tolerance bands and an enterprise risk register entry.

Framework citation. NIST AI RMF (Govern function), ISO 42001 Clause 6, COSO ERM.
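Flagging use cases that exceed tolerance "before they consume design effort" can be reduced to a simple band comparison. A sketch under assumed risk categories and an assumed low/medium/high ordering — real appetite statements are richer than this:

```python
# Hypothetical sketch: tolerance bands as per-category ceilings. Any use
# case whose assessed risk sits above a band is flagged early. The
# categories and three-level scale are illustrative assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}
tolerance = {"bias": "medium", "privacy": "low", "security": "medium"}

def exceeds_tolerance(use_case_risks, tolerance):
    """Return the categories where assessed risk is above the approved band.
    Categories absent from the statement default to a 'high' ceiling here."""
    return [cat for cat, level in use_case_risks.items()
            if LEVELS[level] > LEVELS[tolerance.get(cat, "high")]]

print(exceeds_tolerance({"bias": "high", "privacy": "low"}, tolerance))
# → ['bias']
```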

6. Regulatory and Compliance Landscape

What it is. A current view of the laws, regulations, and standards applicable to the organization’s AI footprint across every jurisdiction in scope. This includes the EU AI Act, sector regulations such as HIPAA or PCI DSS, and emerging national frameworks.

Why it matters. Calibrate uses this to ensure the regulatory exposure mapping covers every jurisdiction and obligation. Missing a jurisdiction in Calibrate produces a maturity baseline that is structurally blind to a class of compliance risk.

Who provides it. The general counsel, head of compliance, or regulatory affairs function.

Example artifact. Regulatory applicability matrix, compliance obligations register, jurisdictional exposure map.

Framework citation. ISO 42001 Clause 4.2, NIST AI RMF Govern 1.1, EU AI Act readiness assessment.
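The structural-blindness risk is exactly what a coverage check on the applicability matrix catches. A minimal sketch, with illustrative jurisdictions and instruments standing in for a real register:

```python
# Hypothetical sketch: the regulatory applicability matrix as a mapping of
# jurisdiction -> applicable obligations, plus a check that no in-scope
# jurisdiction is left unmapped. Entries are illustrative assumptions.

matrix = {
    "EU": ["EU AI Act", "GDPR"],
    "US": ["HIPAA"],
}

def missing_jurisdictions(matrix, in_scope):
    """Jurisdictions in scope with no mapped obligations — the structural
    blind spots the article warns about."""
    return sorted(j for j in in_scope if not matrix.get(j))

print(missing_jurisdictions(matrix, in_scope={"EU", "US", "UK"}))
# → ['UK']
```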

7. Existing Capability Baseline

What it is. A grounded inventory of the AI, data, and talent capabilities already in place — what the organization can actually do today, as opposed to what it claims it can do.

Why it matters. Calibrate consumes this baseline to compute maturity scores against an honest current state rather than against aspiration. Without it, the maturity scorecard tends to drift toward whatever the most senior person in the room wants to believe.

Who provides it. Enterprise architecture, data office, and HR or talent management — typically as a joint exercise.

Example artifact. AI asset inventory, data estate map, talent skills matrix.

Framework citation. TOGAF Baseline Architecture, COBIT 2019 Design Factors.

8. Stakeholder Mandate and Sponsor Commitment

What it is. A signed commitment from executive sponsors authorizing the transformation, allocating human and financial capital, and chartering the work. In change management terms, this is the air cover that lets the program survive its first contact with operating reality.

Why it matters. Calibrate uses this to validate that the program has genuine sponsorship before doing assessment work. Running a maturity assessment without a real mandate produces a report that nobody owns and nothing acts on.

Who provides it. The executive sponsor, supported by a steering committee or change coalition.

Example artifact. Signed sponsor charter, executive change coalition roster, steering committee mandate.

Framework citation. Prosci® ADKAR® (Phase 1 Awareness), Kotter Step 1 (Establish Urgency), PMBOK Initiating Process Group.

How to Gather These Inputs

Most organizations do not have all eight inputs sitting in a single folder. The practical sequence is a two- to four-week pre-Calibrate intake.

  1. Convene the intake group. The transformation lead, executive sponsor, head of strategy, CFO delegate, general counsel delegate, enterprise architect, and HR or talent lead. Seven people, two hours, one room.
  2. Walk the eight inputs. For each input, identify the artifact that already exists, the artifact that needs to be drafted, and the owner.
  3. Draft what is missing. Templates for each input live in the COMPEL artifact library: the strategic-themes worksheet, the portfolio-vision canvas, the AI risk appetite statement, the regulatory applicability matrix, the capability baseline template, and the sponsor charter template.
  4. Approve at the steering committee. The eight inputs together form the Calibrate intake pack. The steering committee approves the pack as a single object before the formal maturity assessment begins.
  5. Lock the baseline. Once approved, the intake pack is versioned and frozen. Subsequent changes to corporate strategy or risk appetite become formal change requests that may trigger a Calibrate re-run.
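The steps above gate Calibrate on a complete, approved intake pack. That gate can be sketched as a checklist over the eight inputs — the slot names below are shorthand invented for this illustration, not COMPEL artifact identifiers:

```python
# Hypothetical sketch: the Calibrate intake pack as eight named slots,
# with the formal assessment gated on every slot being filled and
# steering-committee approved. Slot names are illustrative shorthand.

EIGHT_INPUTS = [
    "corporate_strategy", "strategic_themes", "portfolio_vision",
    "funding_guardrails", "risk_appetite", "regulatory_landscape",
    "capability_baseline", "sponsor_mandate",
]

def ready_for_calibrate(pack):
    """True only when all eight inputs are present and approved."""
    return all(pack.get(i, {}).get("approved") for i in EIGHT_INPUTS)

pack = {i: {"approved": True} for i in EIGHT_INPUTS}
print(ready_for_calibrate(pack))            # True
pack["risk_appetite"]["approved"] = False   # one withheld approval...
print(ready_for_calibrate(pack))            # ...blocks the whole gate: False
```

Treating the pack as "a single object," as step 4 puts it, is what makes the gate binary: one withheld approval blocks the whole assessment.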

When the eight inputs are complete and approved, Calibrate becomes a structured diagnostic with a defined frame. When they are not, Calibrate becomes a months-long discovery exercise that produces opinions instead of evidence.

Cross-References

  • NIST AI RMF — the Govern function defines risk appetite, organizational context, and accountability inputs that map directly to inputs 5 and 8.
  • ISO 42001 — Clause 4.2 (understanding the needs and expectations of interested parties) and Clause 6 (planning) require artifacts that overlap with inputs 1, 5, and 6.
  • SAFe — Strategic themes, portfolio vision, and lean budgets are the SAFe-native expressions of inputs 2, 3, and 4. COMPEL Calibrate maps explicitly to the SAFe portfolio level.
  • TOGAF — Phase A Architecture Vision and the Baseline Architecture map to inputs 3 and 7. Calibrate functions as a domain-specific Architecture Vision for AI capability.
  • PMBOK 7 — Organizational Process Assets and the Initiating Process Group define the procedural expectations that COMPEL inputs 1 and 8 satisfy.
  • COBIT 2019 — Design factors and the governance system design workflow underpin input 7 and constrain how the maturity model is configured.

The discipline COMPEL adds is not the existence of these inputs — every mature framework requires them. The discipline is making the dependency explicit, naming the eight inputs by name, and refusing to start Calibrate until they are in hand.