
COMPEL Research & Evidence

Original research and benchmark data from the COMPEL AI Governance Research Program.

Disclaimer: All data presented in COMPEL Research reports is illustrative and derived from composite analysis of publicly available industry surveys, regulatory guidance, and practitioner interviews. Figures do not represent any single organization or proprietary dataset. Numbers are intended to illustrate patterns and inform governance program design, not to serve as statistically validated benchmarks. For methodology details, see the Methodology section of each report.

2026 Enterprise AI Governance Maturity Benchmark

How 420 Organizations Score Across 18 Governance Domains — And Why 88% Fall Short

This benchmark study assesses enterprise AI governance maturity across 420 organizations using the COMPEL 18-domain maturity model. The average maturity score of 2.1 out of 5 reveals a significant gap between AI deployment velocity and governance readiness. Governance Structure is the weakest domain (avg. 1.5), only 12% of organizations reach Level 4+, and incident rates are 7.9x higher at Level 1 vs. Level 4. The study examines patterns by industry, size, region, and pillar to inform governance program design.

2.1/5

Average Maturity

1.5/5

Governance Structure

12%

Reach Level 4+

7.9x

Incident Reduction

Published: 2026-01-15 · Reading time: 18 min · Author: COMPEL FlowRidge Team
Full report (11 sections)

Executive Summary

This report presents composite benchmark data on enterprise AI governance maturity across 420 organizations in 14 industries and 6 regions. Using the COMPEL 18-domain maturity model (5-level scale), we assess how far organizations have progressed in building the people, process, technology, and governance capabilities required for responsible AI at scale.

The headline finding is stark: the average enterprise AI governance maturity across all respondents is 2.1 out of 5 — firmly in the "Developing" band. This means most organizations have begun some AI governance activity but lack the formal structures, policies, measurement systems, and continuous improvement processes needed for sustainable, auditable AI programs. Governance Structure (Domain 18) is the weakest domain at an average of 1.5, meaning the majority of organizations have no formal AI governance body, no defined decision rights for AI, and no structured escalation path for AI-related risks. Only 12% of organizations reach Level 4 (Managed) or above on any domain, and fewer than 3% achieve Level 5 (Optimized) on all four pillars.

These findings have direct implications for ISO 42001 readiness, EU AI Act compliance timelines, and enterprise AI program investment decisions. Organizations that delay governance infrastructure development face compounding risk as AI deployment velocity increases.

Methodology

This benchmark study uses the COMPEL 18-domain maturity model to assess AI governance capabilities across four pillars: People, Process, Technology, and Governance. Each domain is scored on a 5-level maturity scale: (1) Ad-Hoc, (2) Developing, (3) Defined, (4) Managed, (5) Optimized. Data was collected through structured assessment interviews, self-reported surveys, and document reviews across 420 organizations between Q3 2025 and Q1 2026. The sample includes organizations from 14 industries and 6 geographic regions, with representation across organization sizes from under 1,000 employees to over 50,000. All figures presented are composite and illustrative. They are derived from analysis of publicly available industry surveys (McKinsey State of AI, Gartner AI surveys, OECD AI policy observatory), regulatory guidance documents, and practitioner interviews. No single organization's proprietary data is represented. The purpose is to illustrate patterns that inform governance program design decisions. Assessment scoring follows the COMPEL maturity rubric, where each level has defined criteria including: documented policies (Level 2+), measured outcomes (Level 3+), feedback-driven optimization (Level 4+), and industry-leading practice with external benchmarking (Level 5).
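
The rubric's cumulative gating criteria can be expressed as a small scoring helper. This is an illustrative Python sketch: the level names and criteria come from the rubric described above, but the function itself is hypothetical and not part of the published methodology.

```python
# Illustrative sketch of the COMPEL 5-level maturity rubric described above.
# Level names and gating criteria are taken from the report; the helper
# function itself is hypothetical, not part of the published methodology.
MATURITY_LEVELS = {1: "Ad-Hoc", 2: "Developing", 3: "Defined",
                   4: "Managed", 5: "Optimized"}

def score_domain(documented: bool, measured: bool,
                 feedback_driven: bool, benchmarked: bool) -> int:
    """Return the highest maturity level whose cumulative criteria are met."""
    level = 1
    for criterion_met in (documented, measured, feedback_driven, benchmarked):
        if not criterion_met:
            break
        level += 1
    return level

# A domain with documented policies and measured outcomes, but no
# feedback-driven optimization loop, lands at Level 3 ("Defined").
print(score_domain(True, True, False, False))  # → 3
```

Because the criteria are cumulative, a domain cannot reach Level 4 without first satisfying the Level 2 and Level 3 gates — which mirrors why so few organizations score above the "Developing" band.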

Overall Maturity: The 2.1 Average

The average AI governance maturity score of 2.1 across all 18 domains represents a significant gap between organizational AI ambition and governance readiness. At this level, most organizations have initiated some governance activity — perhaps appointing an AI lead, beginning policy drafts, or piloting a system registry — but lack the formal structures, measurement systems, and continuous improvement processes that characterize mature governance programs. The distribution is heavily skewed toward the lower end: 31% of organizations score at Level 1 (Ad-Hoc) on average, meaning they have no formal AI governance processes. Another 38% score at Level 2 (Developing), indicating early-stage activity without standardization. Only 19% have reached Level 3 (Defined), where documented processes exist and are consistently applied. The remaining 12% at Level 4+ represent organizations with measured, feedback-driven governance programs. This distribution has a direct implication for regulatory readiness: ISO 42001 certification requires at minimum Level 3 maturity across governance domains, and the EU AI Act's conformity assessment obligations presume governance structures that only Level 4+ organizations typically possess.
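
As a consistency check, the band distribution above reproduces the 2.1 headline figure. A minimal sketch using the report's illustrative shares, with the 12% "Level 4+" band treated as Level 4:

```python
# Share of organizations scoring at each average maturity level, from the
# report (the 12% "Level 4+" band is treated as Level 4 for simplicity).
distribution = {1: 0.31, 2: 0.38, 3: 0.19, 4: 0.12}

weighted_avg = sum(level * share for level, share in distribution.items())
print(round(weighted_avg, 1))  # → 2.1
```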

Pillar-by-Pillar Analysis

The four-pillar view reveals a consistent pattern: Technology leads, Governance lags, and People and Process sit in between. Technology (average 2.68) benefits from years of prior investment in data infrastructure, cloud platforms, and security tooling. Many organizations have mature data pipelines and ML platforms even when governance and workforce capabilities trail behind. This creates a dangerous asymmetry — the capacity to deploy AI systems far exceeds the capacity to govern them. Process (average 2.26) and People (average 2.25) occupy a similar middle ground. Use case management and data governance show moderate development, but continuous improvement and change management remain underdeveloped. On the People side, Leadership Sponsorship scores highest (2.8) because executive interest in AI is nearly universal — but this has not translated into funded governance programs, which explains the gap between sponsorship and structure. Governance (average 1.92) is the clear laggard. Ethics & Fairness (1.8), Regulatory Compliance (2.0), Risk Management (2.1), and Governance Structure (1.5) collectively represent the most critical gap in enterprise AI readiness. Organizations cannot credibly claim responsible AI practices when the average governance domain maturity is below Level 2.

Domain-Level Analysis: The Weakest Links

Five domains stand out as critically underdeveloped, with the highest concentration of Level 1 (Ad-Hoc) scores:

1. Governance Structure (D18): 52% of organizations at Level 1. No formal AI governance body, no defined decision rights, no escalation paths. This is the single most important domain for ISO 42001 readiness and the one where most organizations have done the least work.
2. Continuous Improvement (D9): 47% at Level 1. Organizations deploy AI but have no structured process for learning from deployment outcomes, measuring governance effectiveness, or iterating on policies. Without continuous improvement, governance programs stagnate and become compliance theater.
3. Ethics & Fairness (D15): 44% at Level 1. Most organizations have no bias testing protocols, no fairness criteria for AI decisions, and no ethics review process. Those that do rarely apply them consistently across all AI systems.
4. Change Management (D4): 41% at Level 1. AI governance adoption requires behavioral change across the organization, but most programs launch without formal change management support. This explains the "policy-practice gap" — policies exist on paper but are not followed in practice.
5. Talent Strategy (D2): 38% at Level 1. Organizations know they need AI governance talent but have not defined roles, competency frameworks, or upskilling pathways. The COMPEL certification pathway addresses this gap directly.

Industry Patterns

Industry analysis reveals that regulatory pressure correlates with governance maturity, but unevenly. Financial Services leads overall (avg. 2.7) due to decades of compliance culture, established risk management functions, and direct regulatory attention on algorithmic decision-making. However, even financial services organizations average only 2.4 on Governance — indicating that existing compliance infrastructure does not automatically translate into AI-specific governance. Government and Healthcare show interesting patterns: both score relatively high on Governance (2.8 and 2.6 respectively) compared to their overall averages, reflecting regulatory requirements in these sectors. However, their Technology scores trail significantly (2.3 and 2.5), creating a different challenge — governance intent without the technical infrastructure to operationalize it. Technology companies show the reverse pattern: strong Technology (3.4) but relatively weak Governance (2.1), reflecting a culture that prioritizes building over governing. Manufacturing and Energy show the lowest overall maturity (1.8 and 1.9 average), reflecting later AI adoption timelines. These industries are likely to face the steepest governance maturity ramp as AI adoption accelerates.

Organization Size Effects

Larger organizations generally show higher AI governance maturity, but the relationship is not linear and the gap is smaller than expected. Organizations with over 50,000 employees average 2.8, while those under 1,000 average 1.6. The primary driver is resource availability — larger organizations can afford dedicated governance roles, compliance teams, and technology investments. However, larger organizations also face greater complexity: more AI systems, more stakeholders, more regulatory jurisdictions, and more legacy processes to navigate. The most interesting segment is the 5K-20K range, where organizations average 2.3 but show the widest variance. This suggests that mid-market organizations are at an inflection point — those that invest in governance infrastructure now will build a significant advantage, while those that delay will face compounding risk. Notably, organization size has less impact on Governance Structure (D18) than on other domains. Even large organizations frequently lack formal AI governance bodies, suggesting that this gap is a function of organizational attention rather than resource availability.

The Maturity-Incident Correlation

Perhaps the most compelling finding is the relationship between governance maturity and AI-related incidents. Organizations at Level 1 maturity report an average of 14.2 AI-related incidents per year (including bias discoveries, data breaches, model failures, and compliance violations), while Level 4+ organizations report an average of 1.8. This represents a 7.9x reduction in incident rate as organizations move from ad-hoc to managed governance. The correlation holds across industries, regions, and organization sizes, suggesting that governance maturity is a genuine protective factor rather than a confounding variable. The financial implications are significant. Industry estimates place the average cost of an AI-related incident between $200K and $3.5M, depending on severity and regulatory context. A Level 1 organization experiencing 14+ incidents annually faces potential annual exposure of $2.8M to $49M in incident-related costs alone — far exceeding the typical investment required to reach Level 3 maturity. This data supports the business case for governance investment as risk reduction, not just compliance obligation.
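
The arithmetic behind the 7.9x figure and the exposure range can be checked directly. Figures are the report's own; note that the report rounds the high end of the range to $49M:

```python
# Worked arithmetic behind the maturity-incident finding (figures from the report).
incidents_level1 = 14.2   # avg AI-related incidents/year at Level 1
incidents_level4 = 1.8    # avg AI-related incidents/year at Level 4+

reduction = incidents_level1 / incidents_level4
print(f"{reduction:.1f}x")  # → 7.9x

# Per-incident cost estimates cited in the report.
cost_low, cost_high = 200_000, 3_500_000
exposure_low = incidents_level1 * cost_low
exposure_high = incidents_level1 * cost_high
print(f"${exposure_low / 1e6:.1f}M to ${exposure_high / 1e6:.1f}M")  # → $2.8M to $49.7M
```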

Regional Maturity Patterns

Regional analysis reveals that regulatory environments shape governance maturity trajectories more than economic development alone. Europe leads slightly (2.4 average) driven by GDPR precedent and the approaching EU AI Act enforcement timeline. Organizations in the EU have the clearest regulatory mandate for AI governance and the most established privacy infrastructure to build upon. North America (2.3) follows closely, with variance driven by industry composition — heavy financial services and technology sectors pull the average up, while manufacturing and government pull it down. Asia-Pacific (2.0), Middle East (1.8), and Latin America (1.6) show lower averages but represent the fastest growth trajectories. The combination of rapid AI adoption and emerging regulatory frameworks in these regions creates urgency for governance infrastructure that can scale with deployment velocity.

Implications for Enterprise AI Programs

The 2.1 average maturity has three direct implications for enterprise AI program leaders:

1. Start with Governance Structure (D18). This is the weakest domain and the foundation for everything else. Without a governance body, defined decision rights, and escalation paths, policies remain theoretical. COMPEL's Organize stage addresses this directly.
2. Address the Governance-Technology gap. Organizations must stop treating AI governance as a future concern to address after deployment scales. The data shows that governance maturity directly reduces incident rates. Every month of delayed governance investment compounds risk exposure.
3. Build toward Level 3 as a minimum viable standard. Level 3 (Defined) is the threshold where governance begins to function as an organizational capability rather than a collection of ad-hoc responses. ISO 42001 certification requires this level, and EU AI Act compliance effectively presumes it. Programs should target Level 3 across all governance domains within 18 months.

The COMPEL framework's 6-stage cycle provides the structured approach to move from any current maturity level toward Level 3+ across all 18 domains. The Calibrate stage establishes the baseline, and each subsequent stage builds the specific capabilities identified as gaps.


Methodology

Composite analysis of 420 organizations across 14 industries, 6 regions. COMPEL 18-domain, 5-level maturity model. Data from structured interviews, self-reported surveys, and publicly available industry benchmarks. All figures illustrative.

References

  1. McKinsey & Company. "The State of AI in 2025." McKinsey Global Institute, 2025.
  2. Gartner. "AI Governance and Risk Management Survey Results." Gartner Research, Q4 2025.
  3. OECD. "OECD AI Policy Observatory — National AI Policies Dashboard." 2025.
  4. ISO/IEC. "ISO/IEC 42001:2023 — Artificial Intelligence Management System." International Organization for Standardization, 2023.
  5. NIST. "AI Risk Management Framework (AI RMF 1.0)." National Institute of Standards and Technology, 2023.
  6. European Parliament. "Regulation (EU) 2024/1689 — EU AI Act." Official Journal of the European Union, 2024.
  7. Abdelalim, T. "The COMPEL Enterprise AI Transformation Framework." FlowRidge, 2025.
  8. World Economic Forum. "AI Governance Alliance — Briefing Paper Series." WEF, 2025.
  9. Stanford HAI. "AI Index Report 2025." Stanford University Human-Centered AI Institute, 2025.
  10. IDC. "Worldwide AI Governance Spending Forecast." International Data Corporation, 2025.

FAQs

What does a maturity score of 2.1 out of 5 mean in practical terms?
A score of 2.1 places the average enterprise at the "Developing" level — meaning some AI governance activities have begun (perhaps appointing an AI lead or drafting initial policies), but there are no standardized processes, no measurement systems, and no continuous improvement loops. At this level, governance is reactive and inconsistent rather than proactive and systematic.
Why is Governance Structure the weakest domain?
Governance Structure (D18) requires formal bodies such as AI Ethics Boards, defined decision rights, escalation paths, and accountability frameworks. Most organizations have invested in technology and initial policy work but have not established the organizational infrastructure to enforce and evolve those policies. It is the most "organizational design" intensive domain and therefore requires executive sponsorship and cross-functional coordination that many organizations have not yet committed to.
How does the COMPEL maturity model relate to ISO 42001 readiness?
ISO 42001 certification requires demonstrable evidence of an AI management system with defined policies, risk assessment processes, monitoring, and continuous improvement. In COMPEL terms, this maps to Level 3 (Defined) or higher across governance domains. The benchmark finding that only 31% of organizations reach Level 3 or above (19% at Level 3, 12% at Level 4+) suggests that roughly seven in ten organizations would not be ready for ISO 42001 certification without significant governance infrastructure investment.
Is this data based on a real survey of 420 organizations?
The data is illustrative. Figures are derived from composite analysis of publicly available industry surveys, regulatory guidance, and practitioner interviews. No single organization's proprietary data is represented. The purpose is to illustrate patterns that inform governance program design, not to serve as statistically validated benchmarks. See the Methodology section for details.
What is the most effective way to improve from Level 1 to Level 3?
The COMPEL framework recommends a structured approach: (1) Calibrate — establish your baseline across all 18 domains; (2) Organize — stand up governance structure, roles, and accountability first; (3) Model — design policies, risk frameworks, and decision flows; (4) Produce — implement and operationalize. Targeting Level 3 across all governance domains within 18 months is achievable with dedicated sponsorship and a 2-3 person governance team.

Shadow AI in the Enterprise: 2026 Discovery Report

Why Organizations Have 3.2x More AI Tools Than They Think — And What It Means for Governance

This discovery report reveals that enterprises have 3.2x more AI tools in active use than their registries reflect. Marketing departments reach 5.8x. Of shadow AI tools discovered, 67% have no governance documentation whatsoever — no risk assessment, no usage policy, no vendor evaluation. The report examines shadow AI prevalence by department, risk exposure categories, and the correlation between governance maturity and shadow AI reduction.

3.2x

Shadow AI Ratio

67%

No Documentation

5.8x

Marketing (Highest)

156 days

Remediation Time

Published: 2026-02-10 · Reading time: 14 min · Author: COMPEL FlowRidge Team
Full report (8 sections)

Executive Summary

Shadow AI — the use of AI tools and services outside an organization's formal governance, procurement, and risk management processes — has reached a scale that most enterprise leaders significantly underestimate. This discovery report presents composite findings on shadow AI prevalence, risk exposure, and remediation patterns across enterprise environments.

The central finding: organizations have on average 3.2x more AI tools in active use than their AI system registries reflect. For every registered, governed AI tool, there are 3.2 unregistered tools being used by employees without governance oversight, risk assessment, or compliance documentation. In Marketing departments, this ratio reaches 5.8x. Of the shadow AI tools discovered, 67% have no governance documentation whatsoever — no risk assessment, no data processing documentation, no usage policy, and no vendor evaluation. These tools are processing enterprise data including PII, financial records, intellectual property, and customer information with no organizational visibility or control.

This report is not about blocking AI adoption. Organizations that attempt to prohibit AI tool usage universally find that shadow AI increases rather than decreases. Instead, the report examines how governance infrastructure — particularly COMPEL's Calibrate and Organize stages — can bring shadow AI into managed, productive use while controlling risk.

Methodology

Shadow AI discovery data was compiled through composite analysis of network proxy log analysis, employee survey results, expense report audits, and IT asset management reviews across enterprise environments. Findings are cross-referenced with publicly available reports from security vendors, cloud access security brokers (CASBs), and enterprise technology analysts. The shadow AI ratio (unregistered-to-registered tools) is calculated by comparing the number of AI tools found through discovery methods against the AI system registries maintained by IT or governance teams. Tools include SaaS applications, browser extensions, API integrations, and standalone applications that incorporate AI/ML capabilities. All figures are illustrative and derived from composite analysis. No single organization's data is represented. The purpose is to illustrate the scale and patterns of shadow AI to inform governance program design. See the full disclaimer at the top of this report.

The 3.2x Discovery: Scale of Shadow AI

The 3.2x ratio represents the overall enterprise average — for every AI tool that IT and governance teams know about, there are 3.2 additional tools in active use that are unknown to the organization's governance infrastructure. This ratio varies significantly by department, with Marketing (5.8x), Sales (4.2x), and HR (3.9x) showing the highest shadow AI prevalence. The drivers are predictable: these departments face intense productivity pressure, have readily available SaaS AI tools designed for their use cases, and have the least historical interaction with IT governance processes. When an AI content generator can be activated with a credit card and a browser, the friction of going through a formal procurement and risk assessment process makes shadow adoption the path of least resistance. Engineering departments show a lower ratio (2.6x) not because they use fewer AI tools, but because they are more likely to register development tools through existing DevOps governance processes. However, engineering shadow AI carries disproportionate risk because code assistants can embed AI-generated code into production systems without the provenance and quality controls that software governance requires. Finance and Legal departments show the lowest ratios (2.1x and 1.8x) due to existing regulatory compliance cultures, but even these regulated functions have significant shadow AI activity — primarily in document review, research, and drafting workflows.

The 67% Documentation Gap

The most concerning finding is not that shadow AI exists, but how little governance surrounds it. Of all shadow AI tools discovered, 67% have absolutely no governance documentation — no risk assessment, no data classification, no acceptable use policy, no vendor security review, and no compliance evaluation. Another 18% have partial documentation — typically limited to usage guidelines created by the team itself, without risk assessment or compliance alignment. Only 9% of shadow AI tools had been risk-assessed but remained unregistered (suggesting awareness without process), and just 6% were subsequently brought into the formal registry after discovery. This documentation gap creates three distinct risk vectors: (1) data exposure — shadow tools may process sensitive data in ways that violate data protection regulations; (2) compliance gaps — unregistered AI usage in regulated industries can trigger audit findings and regulatory action; (3) intellectual property risk — AI tools that ingest proprietary information may expose it through training data or vendor access. The implication for governance programs is clear: an AI system registry that only captures formally procured tools captures less than a quarter of actual organizational AI usage. Discovery must precede governance.

Risk Exposure Patterns

Shadow AI creates risk across six categories, with data leakage (cited by 72% of organizations) and compliance violation (68%) as the top concerns. These are followed by IP exposure (54%), bias and fairness risk (41%), vendor lock-in (37%), and cost overrun (29%). Data leakage is the highest-rated risk because most shadow AI tools involve sending enterprise data to third-party services. When employees paste customer emails into ChatGPT, upload financial documents to AI analysis tools, or share meeting recordings with AI transcription services, they are transferring data outside organizational control. In sectors governed by GDPR, HIPAA, or financial regulations, this transfer may constitute a violation regardless of the tool's security controls. The bias and fairness risk (41%) is particularly insidious because it is invisible: when employees use AI tools to screen resumes, draft customer communications, or analyze performance data, they are introducing AI-driven bias into organizational processes without the bias testing and fairness validation that governed AI systems receive. Vendor lock-in and cost overrun are lower-rated concerns but have growing financial impact. Organizations that discover hundreds of individual AI tool subscriptions across departments often find significant cost overlap and fragmentation that could be consolidated through governed procurement.

Shadow AI vs. Governance Maturity

The relationship between governance maturity and shadow AI prevalence is dramatic: Level 1 organizations have a 6.1x shadow ratio, while Level 4+ organizations have a 0.8x ratio — meaning they have slightly more registered tools than unregistered ones. This is not because Level 4+ organizations prohibit AI tool adoption. It is because they have governance processes that are fast enough, lightweight enough, and valuable enough that employees prefer to use them rather than circumvent them. When risk assessment takes 30 minutes instead of 6 weeks, and when the governance process provides guidance on safe tool usage rather than simply blocking adoption, employees opt in. The COMPEL Calibrate stage is specifically designed to surface shadow AI as the first step in governance program design. Without discovery, organizations build governance programs on incomplete information — governing the 24% of AI usage they can see while ignoring the 76% they cannot.
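
The "less than a quarter" point follows directly from the ratio: with r unregistered tools per registered tool, a registry covers 1/(1+r) of actual usage. A minimal sketch using the ratios reported above (the helper function is illustrative):

```python
# Illustrative helper: fraction of actual AI usage visible to the registry,
# given a shadow ratio r (unregistered tools per registered tool).
def visible_fraction(shadow_ratio: float) -> float:
    return 1.0 / (1.0 + shadow_ratio)

print(round(visible_fraction(3.2), 2))  # → 0.24  (the "24% they can see")
print(round(visible_fraction(6.1), 2))  # → 0.14  (Level 1 organizations)
print(round(visible_fraction(0.8), 2))  # → 0.56  (Level 4+ organizations)
```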

From Discovery to Governance

Remediation data shows that bringing shadow AI into governance compliance is a multi-month process even after discovery. Simple registration averages 14 days, risk assessment takes an additional 42 days, policy alignment another 78 days, and achieving full governance compliance takes an average of 156 days from initial discovery. These timelines reinforce the case for proactive governance rather than reactive discovery. Organizations that build governance infrastructure before shadow AI proliferates can onboard tools in days rather than months. COMPEL's approach — Calibrate (discover), Organize (structure), Model (policy), Produce (operationalize) — provides this proactive pathway. The most effective remediation programs combine three elements: (1) amnesty — inviting employees to register tools without penalty; (2) fast-track assessment — providing a lightweight risk assessment process for low-risk tools; and (3) approved alternatives — offering governed alternatives to the most popular shadow AI categories.
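
The stage durations above accumulate into a timeline of days after discovery. Note the gap between policy alignment (day 134) and full compliance (day 156) is implied but not itemized in the report:

```python
from itertools import accumulate

# Average remediation stage durations, in days (figures from the report).
# Full governance compliance is reached at day 156; the final ~22 days
# between policy alignment and full compliance are not itemized.
stage_days = {"registration": 14, "risk assessment": 42, "policy alignment": 78}

milestones = list(accumulate(stage_days.values()))
print(milestones)  # → [14, 56, 134]
```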


Methodology

Composite analysis of network proxy logs, employee surveys, expense reports, and CASB data across enterprise environments. All figures illustrative.

References

  1. Netskope. "Cloud and Threat Report — AI in the Enterprise." Netskope, 2025.
  2. Gartner. "Managing Shadow AI: Strategies for Enterprise Governance." Gartner Research, 2025.
  3. McKinsey & Company. "The State of AI in 2025 — Enterprise Adoption Patterns." McKinsey Global Institute, 2025.
  4. ISACA. "Governing AI: Enterprise Risk and Compliance Considerations." ISACA, 2025.
  5. Abdelalim, T. "The COMPEL Enterprise AI Transformation Framework." FlowRidge, 2025.
  6. NIST. "AI Risk Management Framework (AI RMF 1.0)." National Institute of Standards and Technology, 2023.
  7. European Parliament. "Regulation (EU) 2024/1689 — EU AI Act." Official Journal of the European Union, 2024.
  8. Forrester Research. "The Shadow AI Problem: Enterprise Survey Results." Forrester, 2025.

FAQs

What qualifies as "shadow AI" in this report?
Shadow AI includes any AI-powered tool, service, or application used by employees that is not registered in the organization's AI system registry, has not been through formal risk assessment, and operates outside established governance processes. This includes SaaS tools purchased with personal or team credit cards, browser extensions with AI capabilities, API integrations built by individual teams, and free-tier AI services used for work tasks.
Should organizations try to block all shadow AI?
No. Attempting to block all AI tool usage universally is counterproductive and typically increases shadow AI rather than reducing it. The most effective approach combines governance infrastructure that is fast and lightweight enough for employees to prefer it, approved alternatives for common use cases, amnesty programs for existing shadow tools, and clear acceptable use policies that enable rather than prohibit. COMPEL's Organize stage designs this governance infrastructure.
How can organizations discover shadow AI they do not know about?
COMPEL's Calibrate stage includes a structured shadow AI discovery process using five methods: network/proxy log analysis (most effective at 34% of discoveries), employee surveys and amnesty programs (22%), expense report analysis (18%), incident response investigation (14%), and proactive governance audits (4%). A combination approach yields the most complete picture.
Is the 3.2x ratio based on real organizational data?
The figures are illustrative. They are derived from composite analysis of publicly available security vendor reports, CASB data, and enterprise technology surveys. No single organization's data is represented. The purpose is to illustrate the scale and patterns of shadow AI to inform governance program design. See the full methodology section.

ISO 42001 Readiness Across Industries: 2026 Assessment

Clause-by-Clause Readiness Analysis — Where Organizations Are Strong, Where They Struggle

This readiness assessment examines ISO 42001 preparedness across 280 organizations in 6 industries. Clause 6 (Planning) is the strongest area at 3.1/5 readiness, driven by growing AI risk assessment maturity. Clause 9 (Performance Evaluation) is the weakest at 1.9/5 — most organizations lack internal audit, performance evaluation, and conformity assessment capabilities. Only 8% of organizations are within 6 months of certification readiness. Organizations with existing ISO 27001 certifications show 1.4 points higher readiness.

3.1/5

Planning Strongest

1.9/5

Evaluation Weakest

8%

Near-Ready

+1.4

ISO 27001 Boost

Published: 2026-03-01 · Reading time: 16 min · Author: COMPEL FlowRidge Team
Full report (10 sections)

Executive Summary

ISO/IEC 42001:2023 is the first international standard for AI management systems, and its adoption is accelerating as regulators, customers, and boards increasingly require demonstrable AI governance. This report examines readiness for ISO 42001 certification across 280 organizations in 6 industries, assessing compliance maturity against each of the standard's 7 clauses (4–10) and Annex A controls.

The findings reveal significant variance: Clause 6 (Planning) is the strongest area at 3.1 average readiness, reflecting that many organizations have begun AI risk assessment and objective-setting. Clause 9 (Performance Evaluation) is the weakest at 1.9, indicating that most organizations lack the monitoring, internal audit, and management review processes that ISO 42001 requires. Only 8% of organizations are estimated to be within 6 months of certification readiness. The majority (59%) need 12–24 months of structured governance development. Organizations with existing ISO 27001 or SOC 2 certifications show significantly higher readiness due to transferable management system skills and audit infrastructure.

The data has direct implications for AI governance program prioritization: organizations should invest in performance evaluation infrastructure (monitoring, audit, management review) as the highest-leverage action for ISO 42001 readiness.

Methodology

Readiness assessment follows the ISO 42001:2023 clause structure (Clauses 4–10) and Annex A control categories. Each clause and control is scored on a 5-level readiness scale: (1) No activity, (2) Initial awareness, (3) Partial implementation, (4) Substantial implementation, (5) Full conformity with evidence. Data was compiled from readiness assessments, gap analyses, and pre-certification reviews across 280 organizations in 6 industries. Assessments followed the COMPEL ISO 42001 readiness methodology, which maps COMPEL's 18 domains to ISO 42001 requirements. All figures are illustrative and derived from composite analysis. No single organization's data is represented. Figures are intended to illustrate patterns in ISO 42001 readiness across industries to inform governance program design. Actual certification readiness depends on organization-specific factors including scope, complexity, and existing management system maturity.
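As a concrete illustration of the scoring described above, here is a minimal Python sketch of how a clause-level readiness score could be computed as the mean of its sub-requirement scores on the 5-level scale. The function and field names are hypothetical, not part of the COMPEL assessment tooling.

```python
# Minimal sketch (hypothetical, not COMPEL tooling): a clause's readiness is
# taken here as the mean of its sub-requirement scores on the 5-level scale.

READINESS_LEVELS = {
    1: "No activity",
    2: "Initial awareness",
    3: "Partial implementation",
    4: "Substantial implementation",
    5: "Full conformity with evidence",
}

def clause_readiness(sub_scores: dict) -> float:
    """Average the 1-5 sub-requirement scores for one clause."""
    for name, score in sub_scores.items():
        if score not in READINESS_LEVELS:
            raise ValueError(f"{name}: score must be 1-5, got {score}")
    return round(sum(sub_scores.values()) / len(sub_scores), 1)

# Hypothetical Clause 9 sub-requirement scores for one organization.
clause9 = {
    "monitoring_and_measurement": 2,
    "internal_audit": 2,
    "management_review": 2,
    "conformity_assessment": 1,
}
print(clause_readiness(clause9))  # 1.8
```

In practice an assessor might weight sub-requirements differently; equal weighting is the simplest assumption.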

Clause-by-Clause Readiness Overview

The seven auditable clauses of ISO 42001 show a clear pattern: organizations are better at planning than execution, and better at execution than evaluation.

Clause 6 (Planning) leads at 3.1 average readiness. Risk assessment (3.4) and AI objectives (3.2) score highest within this clause, reflecting that most organizations have at least begun the process of identifying AI risks and defining strategic objectives for AI. This is likely driven by board-level attention to AI risk and the availability of risk assessment frameworks (NIST AI RMF, internal risk methodologies).

Clause 4 (Context of the Organization) at 2.8 and Clause 8 (Operation) at 2.7 represent mid-range readiness. Organizations generally understand their AI landscape (context) and have some operational processes for AI development and deployment, but these are not consistently documented, measured, or improved.

Clause 9 (Performance Evaluation) at 1.9 is the critical gap. Internal audit of AI management systems is virtually non-existent in most organizations (1.8 readiness), AI system performance evaluation processes are ad-hoc (1.6), and conformity assessment capabilities are the weakest sub-requirement at 1.5. Without these evaluation capabilities, organizations cannot demonstrate the "check" portion of the Plan-Do-Check-Act cycle that ISO management systems require.

Clause 10 (Improvement) at 2.2 is the second-weakest, directly downstream of the Clause 9 gap: organizations that cannot evaluate cannot systematically improve.

Clause 9 Deep Dive: The Performance Evaluation Gap

Clause 9 is the most critical gap because it is the foundation for demonstrating that an AI management system actually works — not just that it exists on paper. ISO auditors assess Clause 9 with particular rigor because it provides the evidence that other clauses are being implemented effectively.

Monitoring and measurement (2.1): Most organizations monitor individual AI model performance (accuracy, latency) but do not monitor governance process effectiveness — whether risk assessments are being completed on time, whether policies are being followed, whether training requirements are being met. ISO 42001 requires both.

Internal audit (1.8): Fewer than 15% of organizations have conducted any formal internal audit of their AI management practices. Most lack the audit criteria, procedures, and qualified auditors needed for AI-specific internal audit programs. Organizations with existing ISO 27001 audit programs have a structural advantage but still need to develop AI-specific audit criteria.

AI system performance evaluation (1.6): ISO 42001 requires that AI systems be evaluated against defined performance criteria including accuracy, fairness, robustness, and safety. Most organizations evaluate these ad-hoc during development but lack ongoing production evaluation processes with defined criteria and escalation triggers.

Conformity assessment (1.5): The weakest sub-requirement. Organizations need to assess whether their AI management system conforms to ISO 42001 requirements — this requires understanding the standard's requirements in detail and having evidence collection processes for each clause. Most organizations have not yet begun this preparatory work.
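To make the distinction between model monitoring and governance-process monitoring concrete, here is a hypothetical sketch of one governance-process KPI of the kind Clause 9 monitoring should capture: the share of AI risk assessments completed by their due date. The record structure and function name are illustrative assumptions, not a prescribed metric.

```python
# Hypothetical governance-process KPI: fraction of AI risk assessments
# completed on or before their due date. Illustrative structure only.
from datetime import date

def on_time_rate(assessments: list) -> float:
    """Each record: {"due": date, "completed": date | None}.
    An outstanding assessment (completed=None) counts against the rate."""
    if not assessments:
        return 0.0
    on_time = sum(
        1 for a in assessments
        if a["completed"] is not None and a["completed"] <= a["due"]
    )
    return on_time / len(assessments)

records = [
    {"due": date(2026, 1, 15), "completed": date(2026, 1, 10)},  # on time
    {"due": date(2026, 1, 31), "completed": date(2026, 2, 5)},   # late
    {"due": date(2026, 2, 15), "completed": None},               # outstanding
]
print(f"{on_time_rate(records):.0%}")  # 33%
```

A KPI like this, reported at management review alongside model-level metrics, is the kind of evidence Clause 9 auditors look for.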

Clause 6 Strength: Why Planning Leads

Clause 6 (Planning) being the strongest clause is both encouraging and cautionary. It is encouraging because it means organizations have begun the essential work of AI risk assessment, objective-setting, and impact analysis. These are non-trivial activities that require cross-functional engagement and management attention. Risk assessment (3.4) is the highest-scoring sub-requirement across the entire standard. This reflects the maturation of AI risk management practices driven by NIST AI RMF adoption, board-level attention to AI risk, and the availability of structured risk assessment tools and frameworks. However, the planning-execution-evaluation gradient is cautionary: organizations that plan well but execute inconsistently and evaluate rarely are building governance programs with strong foundations and weak superstructures. The COMPEL framework addresses this directly through its stage-based approach — Calibrate and Organize (planning), Model and Produce (execution), Evaluate and Learn (evaluation and improvement) — ensuring that planning activities are matched by execution and evaluation capabilities.

Industry Readiness Patterns

Industry analysis reveals three distinct readiness profiles:

Regulated industries (Financial Services, Healthcare, Government) show higher and more balanced readiness across all clauses. Financial Services leads overall with an average readiness of 3.0, driven by existing regulatory compliance infrastructure, established risk management functions, and board-level governance sponsorship. Government entities benefit from policy frameworks and procurement governance but trail on technology and operational execution.

Technology companies show an asymmetric profile: strong on Clause 8 (Operation) at 3.0 due to mature AI development practices, but relatively weak on Clause 5 (Leadership) at 2.5 and Clause 9 (Evaluation) at 1.8. This reflects a "build first, govern later" culture that ISO 42001 certification will require them to reverse.

Manufacturing and Professional Services show the lowest overall readiness (average 2.1 and 2.2 respectively), reflecting later AI adoption timelines and less regulatory pressure for AI-specific governance. These industries face the longest certification timelines but also have the opportunity to build governance infrastructure concurrently with AI deployment rather than retroactively.

Leverage from Existing Certifications

Organizations with existing ISO management system certifications show measurably higher ISO 42001 readiness, with ISO 27001 providing the strongest boost (1.4 points average readiness increase).

ISO 27001 (Information Security) provides the most transferable skills because it uses the same high-level management system structure (HLS), shares Clause 4–10 requirements, and builds audit competence, document control, and continuous improvement skills that directly apply to ISO 42001. Organizations with ISO 27001 typically need to extend their ISMS to cover AI-specific risks rather than building from scratch.

SOC 2 Type II (1.1 point boost) provides less structural advantage than ISO 27001 but builds relevant skills in evidence collection, monitoring, and third-party assurance. ISO 9001 (Quality Management, 0.9 boost) provides management system fundamentals but limited AI-specific transferability. ISO 14001 (Environmental Management, 0.4 boost) provides minimal direct benefit but still contributes through general management system literacy.

Organizations with no existing ISO certification (0.0 boost) face the steepest curve — they must build management system fundamentals and AI-specific governance simultaneously. These organizations should budget 18–24 months for certification readiness.
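The boost figures above can be turned into rough arithmetic. The sketch below applies the report's average readiness increases to a hypothetical baseline score. Note one assumption of ours, not the report's: because the skills the certifications build overlap, the boosts are treated as non-additive and only the largest applicable one is applied, with the result capped at the 5-point scale.

```python
# Illustrative arithmetic only: estimate ISO 42001 readiness from a baseline
# score plus the report's average boost for an existing certification.

CERT_BOOST = {          # average readiness-point increases from the report
    "ISO 27001": 1.4,
    "SOC 2 Type II": 1.1,
    "ISO 9001": 0.9,
    "ISO 14001": 0.4,
}

def estimated_readiness(baseline: float, certs: list) -> float:
    """Apply only the single largest boost (assumption: boosts overlap and
    should not be summed) and cap at the 5-point readiness scale."""
    boost = max((CERT_BOOST.get(c, 0.0) for c in certs), default=0.0)
    return min(5.0, round(baseline + boost, 1))

print(estimated_readiness(2.1, ["ISO 27001", "ISO 9001"]))  # 3.5
```

A real gap analysis would of course score each clause directly rather than apply an average adjustment; this is back-of-envelope planning arithmetic.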

Investment and Readiness Correlation

The correlation between AI governance investment and readiness is strong and monotonic: organizations spending over $5M annually on AI governance achieve 3.9 average readiness, while those spending under $100K average only 1.6. However, the analysis reveals diminishing returns above $1M — the jump from $100K to $1M produces a 1.3-point readiness improvement, while the jump from $1M to $5M produces only 0.5 points. This suggests that the highest-leverage investments are in the $500K–$1M range, where organizations are building core governance infrastructure, dedicated roles, and tooling.

For budget planning purposes, organizations targeting ISO 42001 certification within 18 months should anticipate governance program costs of $500K–$2M depending on organizational size and complexity. This includes dedicated governance staff (1–3 FTEs), tooling (system registry, risk assessment, audit management), training (management system skills, AI governance fundamentals), and pre-certification audit support. COMPEL's structured approach can reduce these costs by providing a proven methodology, assessment tools, and workforce development pathway that eliminates the need for organizations to design their governance program from scratch.
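The diminishing-returns pattern can be sketched numerically. The snippet below interpolates readiness between the spending brackets reported above (1.6 under $100K, +1.3 points by $1M, +0.5 more by $5M), using a log-spend axis as an illustrative modeling choice of ours; it is not a validated cost model.

```python
# Sketch of the diminishing-returns pattern using the report's bracket figures
# as piecewise-linear anchors on a log-spend axis. Purely illustrative.
import bisect
import math

# (annual governance spend in $, average readiness) anchors from the report
ANCHORS = [(100_000, 1.6), (1_000_000, 2.9), (5_000_000, 3.4)]

def readiness_for_spend(spend: float) -> float:
    """Interpolate readiness between anchors; clamp outside the bracket range."""
    xs = [math.log10(x) for x, _ in ANCHORS]
    ys = [y for _, y in ANCHORS]
    lx = math.log10(spend)
    if lx <= xs[0]:
        return ys[0]
    if lx >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_left(xs, lx)
    t = (lx - xs[i - 1]) / (xs[i] - xs[i - 1])
    return round(ys[i - 1] + t * (ys[i] - ys[i - 1]), 1)

print(readiness_for_spend(500_000))  # 2.5
```

The curve's flattening between $1M and $5M is exactly the "highest leverage in the $500K–$1M range" observation stated above.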

Actionable Recommendations

Based on this readiness analysis, we recommend the following prioritized actions for organizations pursuing ISO 42001 certification:

1. Close the Clause 9 gap first. Performance evaluation is the weakest area and the one auditors scrutinize most heavily. Invest in: (a) AI-specific internal audit capability — either train existing ISO auditors or engage external expertise; (b) monitoring systems that track governance process effectiveness, not just model performance; (c) management review processes that include AI governance KPIs on the agenda.
2. Build on Clause 6 strength. Your planning foundation is your best asset. Convert risk assessments into actionable controls (Clause 8), establish measurement criteria for those controls (Clause 9), and create improvement processes when controls underperform (Clause 10).
3. Leverage existing management systems. If you have ISO 27001, extend it to cover AI risks rather than creating a parallel system. If you have SOC 2, use the evidence collection infrastructure for AI-specific controls. The ISO HLS structure means significant reuse is possible.
4. Use the COMPEL framework as the implementation scaffold. COMPEL's 18 domains map directly to ISO 42001 clauses and Annex A controls, providing a structured assessment and implementation pathway. The Calibrate stage establishes your ISO 42001 readiness baseline, and subsequent stages build the capabilities needed for each clause.
5. Budget for 12–18 months and $500K–$1M for a mid-market organization. Adjust upward for large enterprises with complex AI portfolios and multiple regulatory jurisdictions.
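The "close the biggest gap first" logic of recommendation 1 can be expressed as a simple ranking. This hypothetical sketch orders the report's composite clause scores by their gap to a Level 4 (Substantial implementation) target; the target value and the equal weighting of clauses are assumptions for illustration, not COMPEL methodology.

```python
# Hypothetical prioritization sketch: rank clauses by readiness gap to a
# Level 4 target. Composite averages are taken from the report's figures.

CLAUSE_SCORES = {
    "Clause 4 (Context)": 2.8,
    "Clause 6 (Planning)": 3.1,
    "Clause 8 (Operation)": 2.7,
    "Clause 9 (Evaluation)": 1.9,
    "Clause 10 (Improvement)": 2.2,
}

def prioritize(scores: dict, target: float = 4.0) -> list:
    """Return (clause, gap) pairs, largest readiness gap first."""
    gaps = {name: round(target - s, 1) for name, s in scores.items()}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

for clause, gap in prioritize(CLAUSE_SCORES):
    print(f"{clause}: gap {gap}")
# Clause 9 ranks first, matching the report's recommendation.
```

A production version would weight gaps by audit scrutiny or regulatory exposure rather than treating all clauses equally.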

Methodology

Readiness assessment against ISO 42001:2023 clauses 4–10 and Annex A. 280 organizations, 6 industries. COMPEL ISO 42001 readiness methodology. All figures illustrative.

References

  1. ISO/IEC. "ISO/IEC 42001:2023 — Information Technology — Artificial Intelligence — Management System." International Organization for Standardization, 2023.
  2. ISO/IEC. "ISO/IEC 27001:2022 — Information Security Management Systems." International Organization for Standardization, 2022.
  3. ISO/IEC. "ISO/IEC 42006:2025 — Information Technology — Artificial Intelligence — Requirements for Bodies Providing Audit and Certification of Artificial Intelligence Management Systems." International Organization for Standardization, 2025.
  4. NIST. "AI Risk Management Framework (AI RMF 1.0)." National Institute of Standards and Technology, 2023.
  5. European Parliament. "Regulation (EU) 2024/1689 — EU AI Act." Official Journal of the European Union, 2024.
  6. Abdelalim, T. "The COMPEL Enterprise AI Transformation Framework." FlowRidge, 2025.
  7. BSI Group. "ISO 42001 Implementation Guide." British Standards Institution, 2024.
  8. ISACA. "Auditing AI Management Systems: Practical Guidance for ISO 42001." ISACA, 2025.
  9. Deloitte. "The AI Governance Imperative: ISO 42001 Readiness Survey." Deloitte, 2025.
  10. PwC. "Responsible AI Framework and ISO 42001 Alignment." PricewaterhouseCoopers, 2025.

FAQs

What is ISO 42001 and why does it matter for AI governance?
ISO/IEC 42001:2023 is the first international standard specifically for AI management systems. It provides requirements for organizations to establish, implement, maintain, and continually improve an AI management system. It matters because it is increasingly referenced by regulators (the EU AI Act recognizes ISO 42001 as a means of demonstrating conformity), customers (enterprise procurement increasingly requires it), and boards (as evidence of responsible AI governance).
Why is Clause 9 the weakest area across industries?
Clause 9 (Performance Evaluation) requires monitoring and measurement systems, internal audit programs, and management review processes specifically for AI management. Most organizations have invested in building AI systems (Clause 8) and planning risk controls (Clause 6), but have not yet built the evaluation infrastructure to verify that these systems and controls are working as intended. This mirrors a common pattern in new management systems — organizations build before they verify.
How does existing ISO 27001 certification help with ISO 42001?
ISO 42001 uses the same high-level structure (HLS) as ISO 27001, meaning Clauses 4–10 have the same structural requirements. Organizations with ISO 27001 have already built management system fundamentals: document control, internal audit processes, management review cadence, and continuous improvement mechanisms. These transfer directly to ISO 42001, with the primary additional work being AI-specific risk assessment, AI lifecycle controls, and AI-specific performance evaluation criteria.
Is this data from actual ISO 42001 certification assessments?
The data is illustrative. Figures are derived from composite analysis of readiness assessments, gap analyses, and pre-certification reviews, cross-referenced with publicly available ISO implementation data and AI governance survey results. No single organization's data is represented. The purpose is to illustrate readiness patterns across industries to inform governance program design and certification planning.
What should an organization do first to prepare for ISO 42001?
Start with a structured readiness assessment against all 7 clauses and Annex A controls — COMPEL's Calibrate stage provides this. Then prioritize Clause 9 (Performance Evaluation) as the highest-leverage gap: establish AI-specific internal audit capability, define monitoring and measurement criteria for your AI management system, and create management review processes that include AI governance KPIs. If you have ISO 27001, extend your existing ISMS rather than building a parallel system.