COMPEL Specialization — AITB-TRA: AI Transformation Readiness Specialist Article 2 of 6
A readiness assessment without a rubric is an opinion. Opinions travel badly between sponsors, consulting teams, and audit review. They also travel badly across the twelve-week COMPEL cycle, because the next cycle’s specialist cannot reproduce the prior one’s reasoning. The cure is a published rubric — a twenty-dimension instrument organized across the four pillars, with defined levels, evidence requirements, and scoring discipline. This article walks through the rubric, grounds each pillar’s dimensions in external standards, and teaches the learner to distinguish a well-formed rubric row from a self-certification checklist. The rubric published here is the working instrument for the AITB-TRA specialist and the basis for every downstream exercise in this module.
Anchoring to standards
The readiness rubric is not invented from scratch. It anchors to three primary standards that together cover the three dimensions that matter — governance intent, organizational context, and control implementation.
The NIST AI Risk Management Framework (AI RMF 1.0) supplies the risk grammar. Its GOVERN and MAP functions anchor the structural dimensions of the Governance pillar, and MANAGE anchors the control rows.1
ISO/IEC 42001:2023 supplies the management-system grammar. Clause 4 (context of the organization), Clause 5 (leadership), Clause 6.2 (AI objectives), Clause 7 (support — resources, competence, awareness), and Annex B controls together define the organization-side readiness criteria used under the People and Process pillars.2 A certified 42001 management system is not required for an organization to score well on readiness, but the clauses serve as the vocabulary the rubric inherits.
The OECD AI Principles (2019, updated 2024) supply the values anchor used for readiness-culture dimensions.3 The EU AI Act Article 4 supplies the literacy obligation the rubric tests under People readiness.4 No single regulation is treated as canonical. The rubric preserves technology neutrality by scoring against capability rather than against any vendor’s responsibility framework.
The four pillars
Each pillar hosts five readiness dimensions. The twenty-dimension count is not arbitrary — it matches the twenty-domain maturity model the COMPEL AITF curriculum uses for maturity work, so that readiness scores and maturity scores are comparable on a common grid.
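To make the common grid concrete, here is a minimal sketch of the shared keying. The dimension IDs and pillar names are taken from this article; the data shapes and the gap calculation are illustrative assumptions, not part of the published instrument.

```python
# Sketch of the shared twenty-dimension grid. Dimension IDs and pillar
# names follow this article; the dict shapes are illustrative only.
PILLARS = {
    "People":     ["D01", "D02", "D03", "D04", "D05"],
    "Process":    ["D06", "D07", "D08", "D09", "D10"],
    "Technology": ["D11", "D12", "D13", "D14", "D15"],
    "Governance": ["D16", "D17", "D18", "D19", "D20"],
}

def readiness_maturity_gap(readiness: dict[str, int],
                           maturity: dict[str, int]) -> dict[str, int]:
    """Because both instruments share the same dimension keys, the gap
    between readiness and maturity can be read off dimension by dimension
    on one grid."""
    return {dim: maturity[dim] - readiness[dim]
            for dims in PILLARS.values() for dim in dims
            if dim in readiness and dim in maturity}
```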
People pillar — five dimensions
D01 — AI literacy segmentation. Tests whether literacy has been assessed and programmed separately for executive, manager, specialist, and general-employee cohorts. Evidence required: a segmented literacy map with measured coverage. EU AI Act Article 4 makes this a regulatory floor, not a best-practice ceiling.
D02 — Sponsor strength. A composite of visibility, budget authority, political capital, and sustained engagement, scored independently of formal title. A chief AI officer with a two-year runway and quarterly board slot scores higher than a CEO with verbal enthusiasm and no budget. A sketch of one way the composite might be computed follows this pillar's dimensions.
D03 — Talent supply. Covers hiring velocity for AI roles, internal mobility for adjacent talent, partner-bench access, and churn in existing AI roles. Leading indicator of the organization’s twelve-to-twenty-four-month capability trajectory.
D04 — Cultural disposition. Evidence of psychological safety for AI experimentation, appetite for reversible risk, and tolerance for transparent post-mortems. Cultural readiness is hard to score — this dimension uses pulse-survey and interview data triangulated against observed behavior.
D05 — Leader attention budget. How much leadership bandwidth is available for a new AI program, given the organization’s current portfolio. Measured by active-initiative count, calendar share, and exec-committee agenda analysis.
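To make the D02 composite concrete, a minimal sketch: the four component names come from the dimension text, while the 1–5 component scale and the min-based aggregation are assumptions for illustration, not the published scoring method.

```python
# Illustrative D02 composite. A sponsor is only as strong as the weakest
# component, which is why a CEO with enthusiasm but no budget scores low.
def sponsor_strength(visibility: int, budget_authority: int,
                     political_capital: int, sustained_engagement: int) -> int:
    """Each component is scored 1-5 from evidence; title is not an input."""
    components = (visibility, budget_authority, political_capital,
                  sustained_engagement)
    assert all(1 <= c <= 5 for c in components)
    return min(components)

# The CEO from the dimension text: highly visible (5) but no budget (1).
print(sponsor_strength(5, 1, 3, 2))  # 1
# The chief AI officer: funded, board slot, sustained engagement.
print(sponsor_strength(4, 4, 3, 4))  # 3
```

The min aggregation is one defensible choice: a weighted mean would let high visibility hide a missing budget, which is exactly the failure mode D02 is designed to catch.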
Process pillar — five dimensions
D06 — Use-case selection discipline. Does the organization have a structured use-case intake, scoring, and prioritization method? Or does selection drift toward whichever executive spoke last? Evidence: a documented selection framework with scored examples. A sketch of such a framework follows this pillar's dimensions.
D07 — Operational integration. How thoroughly is AI output integrated into decisions, workflows, and customer-facing moments? Measured by process-map coverage and decision-point density.
D08 —
D09 — Evidence and artifact discipline. Does the organization preserve artifacts — assessments, decisions, evidence — across engagements in a way a future auditor or specialist can retrieve? A direct inheritance from ISO 19011 audit-evidence rules and COMPEL’s artifact model.
D10 — Continuous improvement cadence. The organization’s rhythm for reviewing AI performance, learning from incidents, and closing the loop. Strong cadence predicts strong compounding; weak cadence predicts stalling after any initial success.
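As a sketch of what "a documented selection framework with scored examples" (D06) could look like in practice: the criteria names, weights, and use-case names below are illustrative assumptions, not COMPEL-prescribed values.

```python
# Illustrative intake scoring. All criteria are rated 1-5 with higher =
# better, so "risk_manageability" high means the risk is easy to contain.
CRITERIA_WEIGHTS = {
    "business_value": 0.4,
    "feasibility": 0.3,
    "risk_manageability": 0.2,
    "data_readiness": 0.1,
}

def score_use_case(ratings: dict[str, int]) -> float:
    """Weighted score recorded at intake, so prioritization is
    reproducible rather than decided by whoever spoke last."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

intake = {
    "invoice triage":    {"business_value": 4, "feasibility": 5,
                          "risk_manageability": 4, "data_readiness": 3},
    "claims automation": {"business_value": 5, "feasibility": 2,
                          "risk_manageability": 2, "data_readiness": 2},
}
for name in sorted(intake, key=lambda n: -score_use_case(intake[n])):
    print(f"{name}: {score_use_case(intake[name]):.1f}")
# invoice triage: 4.2
# claims automation: 3.2
```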
Technology pillar — five dimensions
D11 — Data foundation readiness. Grounded in ISO/IEC 42001 Clause 7, the dimension covers data discoverability, quality controls, lineage, and access governance adequate for AI workloads. Data-lake and lakehouse patterns appear across many vendors; the assessment is capability-based rather than platform-based.
D12 — Platform and tooling posture. Measures breadth of in-production AI platform coverage without endorsing any specific stack. A team running open-source infrastructure can score at any level; so can a team on managed APIs from Amazon, Microsoft, Google, or an equivalent.
D13 — Security architecture for AI. Identity, secret management, model-access controls, and red-team posture for AI workloads. Anchors to the organization’s existing security architecture and extends it with AI-specific controls.
D14 — Observability and evaluation infrastructure. Whether the organization can measure AI system behavior in production — model performance, drift, cost, safety. Poor observability is a leading indicator of downstream operational failure.
D15 — MLOps and deployment pipeline. Readiness to deploy, version, roll back, and retire AI systems safely. Referenced to the CMU SEI AI Engineering Maturity Model.5
Governance pillar — five dimensions
D16 — Governance structure and decision rights. Whether a standing AI governance body exists with a defined mandate, a meeting cadence with recorded minutes, and decisions visible in a governance log. Anchors to NIST AI RMF GOVERN and ISO/IEC 42001 Clause 5.
D17 — Risk identification and classification. Does the organization identify AI risk early, classify it consistently, and feed risk into portfolio decisions? Anchors to NIST AI RMF MAP 1 and ISO/IEC 42001 Clause 6.
D18 — Control framework. The set of policies, procedures, and technical controls operationalized around AI systems. Referenced to NIST AI RMF MANAGE and ISO/IEC 42001 Annex B.
D19 — Regulatory alignment. Mapping of in-scope regulations (EU AI Act, sectoral rules, jurisdictional data rules) to the organization’s AI portfolio. Does the organization know which systems are high-risk under which regime?
D20 — Audit and assurance readiness. Ability to produce evidence on demand for internal audit, external audit, regulator inquiry, or customer assurance review. Audit readiness is the sharpest test of every other governance dimension.
Scoring scale and evidence rule
Each dimension is scored on the five-level maturity scale — nascent, emerging, scaling, mature, transformational — inherited from AITF. The levels have pillar-specific wording, but the underlying progression is consistent: nascent means the organization has no programmatic approach; emerging means it has begun and shows initial evidence; scaling means the capability exists in multiple domains but is not yet organization-wide; mature means it is organization-wide and measured; transformational means it is advancing at a rate that positions the organization ahead of its sector peers.
The hard rule is that a score can never exceed its evidence. A dimension may not be scored at a level unless the evidence required for that level has been produced and examined; an interview assertion without a supporting artifact caps the score at the highest level the artifacts actually support, however confident the interviewee. This is what separates the rubric from a self-certification checklist.
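Encoded as a minimal sketch, with the five level names taken from this article and the integer encoding assumed for illustration, the evidence rule looks like this:

```python
from enum import IntEnum

class Level(IntEnum):
    """The five-level scale inherited from AITF; only the ordering matters."""
    NASCENT = 1
    EMERGING = 2
    SCALING = 3
    MATURE = 4
    TRANSFORMATIONAL = 5

def evidenced_score(claimed: Level, evidence_through: Level) -> Level:
    """The hard rule as code: a score never exceeds the highest level
    whose required evidence has been produced and examined."""
    return min(claimed, evidence_through)

# A confident interview claim of MATURE backed by evidence only through
# EMERGING scores EMERGING, not MATURE.
print(evidenced_score(Level.MATURE, Level.EMERGING).name)  # EMERGING
```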
The Cleveland Clinic AI enterprise build-out offers a useful public example of what mature evidence looks like in a real organization.6 The health system’s public announcements describe a governance structure that pairs an AI committee with clinical specialty leads, an IBM research partnership, and explicit publication of ethics principles. A specialist scoring Cleveland Clinic’s governance-readiness dimension D16 would triangulate the announcements against the health system’s internal committee minutes (if access is granted), its AI policy set, and interviews with specialty leads. The scoring is not “they announced a committee therefore D16 is scaling”. The scoring is “the committee exists, has met with recorded minutes over a defined period, has cleared decisions visible in the governance log, and the evidence rule is satisfied at the scaling level”.
Reading a reference rubric
The UK Government AI Playbook, published in February 2025 by the Department for Science, Innovation and Technology, offers a public reference rubric covering people, process, technology, and governance dimensions for public-sector AI.7 The Playbook is useful comparison material because it is independent of COMPEL, government-published, and written for a practitioner audience. A careful comparison shows three things. First, the Playbook names fewer dimensions (roughly twelve versus COMPEL’s twenty) because its scope is narrower. Second, its governance section is deeper than its people section, reflecting the UK public-sector emphasis on procurement and accountability. Third, its evidence expectations are looser than COMPEL’s because it is a guidance document rather than a certification-supporting rubric.
The comparison is pedagogical, not competitive. A specialist who has read both will understand that no single rubric is definitive. What matters is that the rubric used is published, applied consistently, and evidence-backed. The Singapore Model AI Governance Framework’s Implementation and Self-Assessment Guide (2020) is another public reference rubric worth reading for the same reason.
Producing a rubric row
The final skill the article asks a learner to practice is producing a rubric row for a dimension the instrument does not yet formally define. Consider a dimension labelled “vendor-risk readiness for AI”. The skill is to describe each of the five levels in two to three sentences, to name the evidence required at each level, and to cite the anchoring standard (NIST AI RMF MANAGE 4, ISO/IEC 42001 Annex B, or the organization’s existing third-party risk framework). A learner who can produce a well-formed row by the end of Article 2 is ready for the multi-rater work in Article 3.
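For illustration only, here is one way such a row could be drafted. The D21 label, level wording, and evidence items are exercise assumptions, not part of the published twenty-dimension instrument; the anchoring standards are the ones named in the paragraph above.

```python
# Illustrative rubric row for a dimension the instrument does not yet
# formally define. Everything below is exercise material, not canon.
VENDOR_RISK_ROW = {
    "id": "D21 (hypothetical)",
    "name": "Vendor-risk readiness for AI",
    "anchors": ["NIST AI RMF MANAGE 4", "ISO/IEC 42001 Annex B",
                "existing third-party risk framework"],
    "levels": {
        "nascent": {
            "description": "AI vendors onboarded through generic procurement "
                           "with no AI-specific risk questions.",
            "evidence": ["procurement checklist showing absence of AI criteria"],
        },
        "emerging": {
            "description": "AI-specific questions added to vendor due "
                           "diligence; applied inconsistently.",
            "evidence": ["updated due-diligence questionnaire",
                         "at least one completed assessment"],
        },
        "scaling": {
            "description": "AI vendor assessments applied across multiple "
                           "business units with recorded decisions.",
            "evidence": ["assessment register", "decision log entries"],
        },
        "mature": {
            "description": "Organization-wide AI vendor-risk process, "
                           "measured, with contractual controls.",
            "evidence": ["policy", "coverage metrics", "contract clauses"],
        },
        "transformational": {
            "description": "Vendor-risk posture ahead of sector peers, with "
                           "continuous monitoring of vendor model changes.",
            "evidence": ["monitoring tooling output", "peer benchmark"],
        },
    },
}
```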
Summary
The readiness rubric organizes twenty dimensions across four pillars and scores each on a five-level maturity scale with explicit evidence rules. Standards — NIST AI RMF GOVERN and MAP, ISO/IEC 42001 Clauses 4, 5, 6.2, and 7, OECD AI Principles, EU AI Act Article 4 — serve as the anchoring grammar. Public reference rubrics from the UK Government AI Playbook and Singapore Model AI Governance Framework provide comparison material the specialist learns to read critically. Article 3 addresses the harder question the rubric alone cannot solve: how to gather the evidence without systematically overstating it.
Cross-references to the COMPEL Core Stream:
EATF-Level-1/M1.1-Art05-The-Four-Pillars-of-AI-Transformation.md — foundational four-pillar model the readiness rubric extends
EATF-Level-1/M1.3-Art01-Introduction-to-the-20-Domain-Maturity-Model.md — the twenty-domain model from which the twenty readiness dimensions inherit
EATP-Level-2/M2.2-Art01-Beyond-the-Baseline-Advanced-Assessment-Philosophy.md — practitioner assessment philosophy applied under the readiness lens
EATP-Level-2/M2.2-Art03-Deep-Dive-Domain-Assessment-Techniques.md — dimension-level assessment techniques extended for readiness scoring
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. NIST, “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, NIST AI 100-1 (January 2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (accessed 2026-04-19).
2. ISO/IEC 42001:2023, “Information technology — Artificial intelligence — Management system”, https://www.iso.org/standard/81230.html (accessed 2026-04-19).
3. OECD, “OECD AI Principles”, https://oecd.ai/en/ai-principles (accessed 2026-04-19).
4. Regulation (EU) 2024/1689, “Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence”, Article 4, https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed 2026-04-19).
5. Carnegie Mellon Software Engineering Institute, “AI Engineering” program, https://www.sei.cmu.edu/our-work/artificial-intelligence-engineering/ (accessed 2026-04-19).
6. Cleveland Clinic, “Cleveland Clinic and IBM Accelerate Technology and Research Collaboration” (March 13, 2024), https://newsroom.clevelandclinic.org/2024/03/13/cleveland-clinic-and-ibm-accelerate-technology-and-research-collaboration (accessed 2026-04-19).
7. UK Government, “AI Playbook for the UK Government” (February 2025), https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government (accessed 2026-04-19).