COMPEL Specialization — AITM-OMR: AI Operating Model Associate — Article 10 of 10
The Blueprint’s purpose
Three audiences consume the Blueprint, and the specialist’s design must work for all three. The executive sponsor reads the Blueprint to validate that the operating-model design is defensible — that the archetype is right, the accountability is clear, the funding is sustainable. The executive typically reads the executive summary, the risks section, and the cadence section. The operational leader — CoE director, business-unit AI lead, programme manager — reads the Blueprint to execute against it. The operational leader consumes the dimension-by-dimension detail, the role descriptions, and the transition plan. The practitioner consumes the Blueprint selectively as reference when specific questions arise — which framework applies to this decision, how is the platform funded, what is the escalation path.
The multi-audience construction principle means the Blueprint has three modes of reading. The executive mode is a fifteen-minute read of the summary and headlines. The operational mode is a half-day read of the full document. The reference mode is a ten-minute lookup of a specific question. A Blueprint that serves only one of the three modes has failed the others.
Blueprint sections
Nine sections compose the Blueprint.
Executive summary (two pages)
The executive summary is the single most-read section. It names the archetype choice, the CoE charter in one paragraph, the decision-rights design in one paragraph, the funding model in one paragraph, the top three risks, and the top three first-year priorities. The summary is standalone; an executive who reads only these two pages understands the operating model’s shape and can defend it to the board. The specialist who buries the archetype choice in section four while leading with process diagrams has misjudged the audience.
Current state (three to five pages)
The current-state section describes the operating model as it exists today. It uses the ten-dimension framework and scores each dimension at its current maturity level. The current-state section is honest rather than aspirational — a dimension that is actually at nascent is reported at nascent, not at emerging or scaling to spare anyone’s feelings. The honest assessment is what gives the target state credibility; an unrealistic current state produces a target state that the organization cannot reach.
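A dimension-by-dimension scorecard can be captured as a simple structured record. The sketch below is illustrative only: the maturity scale is assumed from the levels the article names (nascent, emerging, scaling, plus an assumed top level here called "embedded"), and the dimension names, scores, and evidence pointers are hypothetical placeholders, not COMPEL's canonical definitions.

```python
from dataclasses import dataclass

# Assumed maturity ladder; "embedded" is a placeholder top level, not
# a level named in the article.
MATURITY_LEVELS = ["nascent", "emerging", "scaling", "embedded"]

@dataclass
class DimensionScore:
    dimension: str   # hypothetical dimension name
    current: str     # honest current maturity
    target: str      # target-state maturity
    evidence: str    # pointer to the evidence base

    def gap(self) -> int:
        """Number of maturity steps between current and target state."""
        return MATURITY_LEVELS.index(self.target) - MATURITY_LEVELS.index(self.current)

# Hypothetical example scores for two of the ten dimensions.
scorecard = [
    DimensionScore("Decision rights", "nascent", "scaling", "Q1 decision-register audit"),
    DimensionScore("Funding model", "emerging", "scaling", "FY cost-to-serve analysis"),
]

for d in scorecard:
    print(f"{d.dimension}: {d.current} -> {d.target} ({d.gap()} step(s))")
```

The `gap()` helper makes the honesty discipline operational: an inflated current-state score silently shrinks the reported gap, which is exactly the distortion the section warns against.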
Target state — ten dimensions (fifteen to twenty pages)
The core of the Blueprint is the dimension-by-dimension target-state design. Each of the ten dimensions gets a two-page section covering the design decisions, the rationale, the evidence base, and the interfaces with adjacent dimensions. The sections are the working content that operational leaders consume; they carry enough detail to execute and enough rationale to defend. Articles 2 through 9 have walked through the content of each dimension-section. The Blueprint’s contribution is to present them together, cross-referenced, so that the operational leader executing on decision rights can see how decision rights interact with funding and how funding interacts with talent.
Transition plan (five to ten pages)
The transition plan names how the organization moves from current state to target state. It inherits from Article 9 the evolution-sequence discipline and the change-capacity test. The plan describes the first twelve months in detail, the following twenty-four months in broad terms, and the beyond-thirty-six-month horizon as a target posture rather than a specific plan. The transition-plan section is often the second-most-read after the executive summary, because every executive reading the Blueprint needs to know what happens next.
Risks and mitigations (two to three pages)
The risks section names the three to seven risks the specialist has identified and, for each, the mitigation strategy the Blueprint builds in. Risks commonly include: sponsor continuity (what happens if the sponsor leaves), talent retention (what happens if key roles turn over), technology shift (what happens if the AI landscape shifts materially), regulatory change (what happens if the EU AI Act interpretation evolves), and organizational change (what happens if the enterprise reorganizes). Naming risks is not pessimism; it is discipline. A Blueprint without a risks section is either dishonest or incomplete.
Governance cadence (one to two pages)
The governance cadence section names the recurring meetings, reviews, and decision gates that keep the operating model alive. A typical cadence includes a monthly operations review (CoE operational metrics, incidents, platform performance), a quarterly steering meeting (strategic direction, major investment decisions, cross-dimension issues), an annual Blueprint review (full re-examination of every dimension against current-state evidence), and ad-hoc executive escalation paths for incidents and regulatory events. The cadence is what turns the Blueprint from a one-time deliverable into a living governance structure.
Roles and accountabilities (two to three pages)
The roles and accountabilities section lists the named roles the operating model depends on — CoE director, head of AI governance, accountable executive, business-unit AI lead, model risk owner — with a one-paragraph description of each. The section makes explicit what the accountability matrix from Article 5 captures in structured form. The role descriptions survive the specific people; when a role-holder departs, the role description describes what the replacement must do.
Measurement and reporting (one to two pages)
The measurement section names the metrics the operating model will track and the reporting rhythm. It covers CoE value metrics (from Article 4), decision-rights auditability metrics (from Article 5), funding and cost-to-serve metrics (from Article 6), talent retention and pipeline metrics (from Article 7), and maturity-progression metrics (from Article 9). The section also names how the measurement feeds the cadence — which metrics go to the monthly operations review, which to the quarterly steering meeting, which to the annual Blueprint review.
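The routing of metrics to cadence venues can be sketched as a small lookup table. The venue names follow the cadence section of the article; the metric-group identifiers are paraphrased assumptions, not COMPEL's defined metric names.

```python
# Illustrative routing of metric groups to governance venues.
# Metric-group names are paraphrases of the article's categories.
METRIC_ROUTING = {
    "monthly_operations_review": [
        "coe_operational_metrics", "open_incidents", "platform_performance",
    ],
    "quarterly_steering_review": [
        "coe_value_metrics", "funding_cost_to_serve", "talent_retention",
    ],
    "annual_blueprint_review": [
        "decision_rights_auditability", "maturity_progression",
    ],
}

def venues_for(metric: str) -> list[str]:
    """Return every venue a given metric group reports into."""
    return [venue for venue, metrics in METRIC_ROUTING.items() if metric in metrics]

print(venues_for("maturity_progression"))
```

Writing the routing down in one place makes the gap visible: a metric that routes to no venue is measured but never governed, and a venue with no metrics is a meeting without evidence.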
Appendices (variable)
Appendices contain the working artifacts: the full capability map, the detailed accountability matrix, the service catalogue for the CoE, the funding model calculations, the talent plan, the integration-mapping document, the maturity scorecard with evidence. Appendices are reference content rather than reading content; they let the specialist produce the Blueprint without losing the supporting detail, and let the operational leader dig into specifics when needed.
[DIAGRAM: HubSpoke — blueprint-section-map — central hub labelled “Operating Model Blueprint” with nine radiating spokes labelled Executive Summary, Current State, Target State (Ten Dimensions), Transition Plan, Risks and Mitigations, Governance Cadence, Roles and Accountabilities, Measurement and Reporting, Appendices; each spoke shows page-count expectation; primitive shows the full Blueprint structure in one view]
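The nine-section structure and its page-count expectations can also be expressed as a small table, which makes the overall page budget easy to check. The section names and page ranges below come directly from the article; the totalling helper is an illustrative convenience, not part of the COMPEL method.

```python
# The nine Blueprint sections with the page-count expectations stated
# in the article. Appendices are open-ended, marked None.
BLUEPRINT_SECTIONS = [
    ("Executive summary",               2,  2),
    ("Current state",                   3,  5),
    ("Target state (ten dimensions)",  15, 20),
    ("Transition plan",                 5, 10),
    ("Risks and mitigations",           2,  3),
    ("Governance cadence",              1,  2),
    ("Roles and accountabilities",      2,  3),
    ("Measurement and reporting",       1,  2),
    ("Appendices",                   None, None),
]

def page_budget() -> tuple[int, int]:
    """Total (min, max) page count, excluding the open-ended appendices."""
    bounded = [(lo, hi) for _, lo, hi in BLUEPRINT_SECTIONS if lo is not None]
    return (sum(lo for lo, _ in bounded), sum(hi for _, hi in bounded))

print(page_budget())  # roughly a 31-to-47-page core document
```

The budget is a useful sanity check in both directions: a fifteen-page Blueprint has almost certainly skipped the dimension detail, and an eighty-page one has almost certainly buried the executive summary.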
The BBVA operating-model precedent
Few enterprise AI operating-model transformations have been documented as thoroughly as BBVA’s digital-and-AI transformation between 2019 and 2023, described across multiple Harvard Business Review case studies. The bank’s published approach emphasized several Blueprint disciplines: executive-level engagement (the board and CEO participated in the multi-year direction-setting rather than delegating to a technology function), dimension-by-dimension design (the bank explicitly advanced talent, platform, governance, and commercial dimensions as linked but distinct strands), and named transition stages (each year had identified deliverables rather than an aspirational end-state).
The precedent matters because it shows the Blueprint discipline at scale. BBVA’s transformation produced durable change — the structures put in place during the transformation period were still operating years later, which is the durability test the Blueprint aims for. Specialists who want a public example of the Blueprint pattern applied in practice can study the BBVA case and the MIT Sloan Management Review’s ongoing “Building the AI-Powered Organization” research series for parallel documentation of other enterprises’ equivalent work.[1]
Governance cadence in detail
The governance cadence deserves elaboration because it is the section most often shortchanged. A Blueprint with a thorough current-state, target-state, and transition plan but a thin cadence produces a structure that works for the first year and fades in the second. The cadence is the mechanism that keeps the Blueprint alive.
[DIAGRAM: OrganizationalMappingBridge — governance-cadence — three horizontal lanes labelled “Monthly”, “Quarterly”, “Annual”; within each lane, named meeting types (Monthly Operations Review, Quarterly Steering Review, Annual Blueprint Review) with participants (operations director, CoE director, accountable executive, business-unit leads), inputs (metrics dashboards, incident reports, pipeline status), and outputs (decisions recorded in decision register); a fourth lane labelled “Ad-hoc” shows escalation paths for incidents and regulatory events; primitive makes the rhythm and governance visible]
The monthly operations review is the cadence’s heartbeat: an hour with the CoE director, the head of AI operations, and the head of AI governance, reviewing the prior month’s operational metrics, open incidents, platform performance, and upcoming decisions. The review is operational; it keeps the operating model functioning.

The quarterly steering meeting is the strategic cadence: two to three hours with the accountable executive, the CoE director, the head of AI governance, and the head of business-unit AI engagement, reviewing the quarter’s outcomes against targets, making strategic decisions within the Blueprint’s envelope, and naming any direction changes the Blueprint needs to accommodate.

The annual Blueprint review is the full re-examination: a half-day or more, involving the full operating-model leadership plus representative business-unit leadership, re-examining each dimension against the year’s evidence, adjusting target-state maturity levels where appropriate, and producing the next year’s transition-plan refresh.
The cadence is the governance discipline that survives leadership transitions. When the accountable executive changes, the cadence continues and the incoming executive inherits a functioning rhythm rather than starting over. When the CoE director changes, the cadence continues and the incoming director has a clear set of meetings, inputs, and outputs to step into. A Blueprint without a cadence is a one-person document; a Blueprint with a cadence is institutional infrastructure.
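The cadence can be written down as a small registry that an incoming executive or director inherits along with the meetings themselves. Participant titles, durations, inputs, and outputs below paraphrase the article; the annual review's four-hour duration and the field layout are illustrative assumptions.

```python
# Illustrative cadence registry, paraphrasing the meetings described
# in the governance-cadence section. Field values are assumptions
# where the article gives a range or is silent.
CADENCE = {
    "monthly_operations_review": {
        "frequency": "monthly",
        "duration_hours": 1,
        "participants": ["CoE director", "head of AI operations",
                         "head of AI governance"],
        "outputs": ["decisions recorded in decision register"],
    },
    "quarterly_steering_review": {
        "frequency": "quarterly",
        "duration_hours": 3,  # article says two to three hours
        "participants": ["accountable executive", "CoE director",
                         "head of AI governance",
                         "head of business-unit AI engagement"],
        "outputs": ["strategic decisions", "direction changes for the Blueprint"],
    },
    "annual_blueprint_review": {
        "frequency": "annual",
        "duration_hours": 4,  # assumed; article says a half-day or more
        "participants": ["operating-model leadership", "business-unit leadership"],
        "outputs": ["adjusted target-state maturity levels",
                    "next year's transition-plan refresh"],
    },
}

def meetings_per_year() -> int:
    """Standing governance meetings per year implied by the cadence."""
    per_year = {"monthly": 12, "quarterly": 4, "annual": 1}
    return sum(per_year[m["frequency"]] for m in CADENCE.values())

print(meetings_per_year())
```

Seventeen standing meetings a year is the whole institutional footprint: small enough to sustain, explicit enough that a leadership transition inherits a schedule rather than a blank calendar.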
Delivering the Blueprint
The Blueprint’s delivery is itself a design choice. A document that is emailed to the sponsor with a summary cover note will be read by the sponsor but may not reach the operational leaders who need it. A document that is presented in a single executive meeting will produce strong immediate uptake and weak durability as the meeting’s memory fades. A document that sits on a shared drive with no active introduction will be ignored.
Three delivery disciplines produce lasting uptake. The first is the executive presentation — a sixty-to-ninety-minute session with the accountable executive and the top two or three leaders from each function affected. The presentation walks through the executive summary, the ten-dimension target-state highlights, the top risks, and the first-year priorities. The executive and the function leaders leave the meeting with shared understanding of the design.

The second is the operational roll-out — a series of meetings with the roles named in Section 7, walking each through the sections of the Blueprint that affect them. The CoE director walks through Sections 3 and 4 in the CoE context, the head of AI governance walks through Sections 5 and 7 in their context, the finance leader walks through Section 8. Each operational leader leaves with a working document they will execute against.

The third is reference accessibility — the Blueprint is published in a location the organization’s knowledge management already uses, with the executive summary, dimension sections, and appendices accessible to anyone with legitimate need. Reference accessibility is what turns the Blueprint from a delivery event into an ongoing institutional asset.
Blueprint lifecycle
Over the operating model’s multi-year life, the Blueprint itself has a lifecycle. Version 1.0 is the initial Blueprint produced at the end of the specialist’s engagement. Versions 1.x (1.1, 1.2) capture the natural evolution within the first year — corrections to misunderstandings surfaced during deployment, small design refinements produced by the first quarter’s operational evidence, typographical and structural improvements. Version 2.0 is the first annual review’s output — a substantial update to the target-state, transition-plan, and measurement sections based on the year’s evidence. Versions 2.x and beyond continue the pattern.
The lifecycle discipline matters because it determines whether the Blueprint stays current. Organizations that treat the Blueprint as a one-time artifact produce a document that goes stale within eighteen months. Organizations that commit to version updates — publishing them, reviewing them in the annual cadence, using them as the basis for the following year’s operational plan — produce a living document that carries the operating model’s institutional memory across leadership transitions.
The specialist who produces the first Blueprint is not always the one who produces subsequent versions. The version-1.0 Blueprint should be structured and documented well enough that a successor can pick up the document, understand its design logic, and produce version 2.0 without losing the original design’s coherence. Specialists who build for this succession produce better first Blueprints than specialists who build only for their own immediate delivery. The discipline is another instance of the same principle that runs through the whole credential: design for durability.
Summary
The Operating Model Blueprint assembles all ten dimensions into a single executive-grade document designed for three audiences: executive sponsors, operational leaders, and reference-using practitioners. Nine sections — executive summary, current state, target state, transition plan, risks, governance cadence, roles, measurement, appendices — cover the Blueprint’s scope. The governance cadence in particular is the discipline that turns the Blueprint from a one-time deliverable into a living structure that survives leadership transitions. The AITM-OMR credential has walked the learner through the specialist’s full craft: the archetype choice, the capability map, the CoE design, the decision-rights architecture, the funding model, the talent model, the integration plan, the maturity and evolution discipline, and the Blueprint that assembles them all. The specialist’s work is not the document but the durable capability the document describes.
Cross-references to the COMPEL Core Stream:
- EATF-Level-1/M1.2-Art17-AI-Operating-Model-Blueprint.md — Core Stream primary article on the AI Operating Model Blueprint artifact
- EATP-Level-2/M2.4-Art02-Multi-Workstream-Coordination.md — Practitioner-depth treatment of multi-workstream coordination, which the Blueprint governance cadence operationalizes
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. MIT Sloan Management Review, “Building the AI-Powered Organization” research series, https://sloanreview.mit.edu/ (accessed 2026-04-19).