Cross-Framework Compliance Mapping
A unified, bidirectional matrix that links equivalent or related requirements across three of the most widely adopted AI governance instruments: the EU Artificial Intelligence Act (Regulation (EU) 2024/1689), the NIST AI Risk Management Framework (AI RMF 1.0), and ISO/IEC 42001:2023. Every row is also anchored to a COMPEL transformation stage and domain so practitioners can operationalize all three frameworks in a single integrated program.
Coverage: 63 mapping rows, of which 32 are full alignment, 23 partial alignment, and 8 gaps.
Alignment Taxonomy
Full: Requirements are substantially equivalent; satisfying one typically satisfies the others with minimal additional work.
Partial: Related intent but differing scope, depth, or terminology; extra evidence work is required for complete compliance.
Complementary: Requirements address distinct but reinforcing concerns; both should be implemented together.
Gap: A requirement exists in one framework with no direct counterpart in another, creating a coverage risk.
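Where the matrix is tracked programmatically, the taxonomy reduces to a small enumeration. A minimal Python sketch, purely illustrative: the class and value names below are not drawn from any of the three frameworks.

```python
from enum import Enum

class Alignment(Enum):
    """Alignment levels used in the mapping table below."""
    FULL = "full"                    # substantially equivalent requirements
    PARTIAL = "partial"              # related intent; extra evidence needed
    COMPLEMENTARY = "complementary"  # distinct but reinforcing concerns
    GAP = "gap"                      # no direct counterpart: a coverage risk
```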
Framework Requirements Mapping
Each row links an EU AI Act article, a NIST AI RMF subcategory, and an ISO/IEC 42001 clause to the same underlying capability, mapped to its COMPEL stage and domain.
| ID | EU AI Act | NIST AI RMF | ISO/IEC 42001 | COMPEL Stage | Domain | Alignment | Notes |
|---|---|---|---|---|---|---|---|
| CFM-001 | Article 9(1) Risk Management System | GV-1.1 (Govern): Legal and regulatory requirements are identified. | Clause 4.1 Understanding the organization and its context | Calibrate | Risk Management | Full | All three frameworks require foundational context-setting and risk system establishment. |
| CFM-002 | Article 9(2)(d) Risk Management Measures | GV-1.2 (Govern): Trustworthy AI characteristics are integrated into organizational policies. | Clause 5.2 AI Policy | Organize | Policy & Governance | Full | Policy-level alignment: all frameworks require organizational commitment to AI risk management. |
| CFM-003 | No explicit requirement | GV-1.3 (Govern): Processes for determining AI system trustworthiness are defined. | Clause 5.3 Organizational roles, responsibilities and authorities | Organize | Organizational Structure | Partial | The EU AI Act does not explicitly mandate trustworthiness determination processes; covered implicitly via provider obligations. |
| CFM-004 | Article 9(7) Deployer Information | GV-2.1 (Govern): Roles and responsibilities for AI risk management are defined and understood. | Clause 5.3 Organizational roles | Organize | Stakeholder Engagement | Full | All three frameworks require clear role assignment for AI governance. |
| CFM-005 | No explicit requirement | GV-3.1 (Govern): Decision-making related to AI risks is informed by a diverse team. | Clause 7.2 Competence | Organize | Workforce & Competence | Partial | The EU AI Act addresses competence implicitly through provider qualifications; NIST is more explicit on diversity. |
| CFM-006 | No explicit requirement | GV-3.2 (Govern): Policies and procedures for AI system oversight are established. | Clause 9.1 Monitoring, measurement, analysis and evaluation | Evaluate | Monitoring & Oversight | Partial | NIST explicitly references organizational oversight policies; ISO 42001 covers this via monitoring clauses. |
| CFM-007 | No explicit requirement | GV-4.1 (Govern): Organizational practices are reviewed for alignment with AI risk management. | Clause 9.3 Management review | Learn | Continuous Improvement | Partial | EU AI Act post-market monitoring (Art 72) aligns but is specific to market surveillance rather than organizational review. |
| CFM-008 | No explicit requirement | GV-4.2 (Govern): Organizational teams document AI system impacts and share risk information. | Clause 7.4 Communication | Organize | Communication | Partial | EU AI Act transparency obligations (Art 13) partially address communication but focus on deployer-provider communication. |
| CFM-009 | No explicit requirement | GV-5.1 (Govern): Organizational policies for the use of third-party AI systems are established. | Clause 8.1 Operational planning and control | Organize | Supply Chain | Partial | The EU AI Act addresses the supply chain via Art 25 (obligations along the AI value chain) but not within Art 8-15. |
| CFM-010 | No explicit requirement | GV-6.1 (Govern): Policies and procedures for addressing AI risks arising from third parties are defined. | Annex A.6 Supplier management | Organize | Vendor Risk | Gap | EU AI Act Art 8-15 do not explicitly cover third-party/supplier risk; addressed elsewhere in the regulation (Art 25, 28). |
| CFM-011 | Article 6 Classification Rules for High-Risk AI | MP-1.1 (Map): Context of the AI system is established and understood. | Clause 6.1 Actions to address risks and opportunities | Calibrate | System Classification | Full | All three frameworks require system-level risk/context determination before proceeding with requirements. |
| CFM-012 | Article 7 Amendments to Annex III | MP-2.1 (Map): AI system purpose, context of use, and impacts are documented. | Clause 6.1.2 AI risk assessment | Calibrate | Impact Assessment | Partial | EU AI Act classification under Art 6 is binary (a system is high-risk or it is not); NIST and ISO require more nuanced impact assessment. |
| CFM-013 | Article 9(2)(a) Risk Identification | MP-2.2 (Map): Assumptions, constraints, and context of deployment are identified. | Clause 6.1.2 AI risk assessment | Calibrate | Risk Identification | Full | Strong alignment across all three frameworks on risk identification methodology. |
| CFM-014 | Article 9(2)(b) Misuse Risk Estimation | MP-3.1 (Map): AI system risks of unintended or emergent properties are identified. | Annex A.3 Risk management for AI systems | Calibrate | Misuse Analysis | Full | All three frameworks specifically address foreseeable misuse and unintended consequences. |
| CFM-015 | Article 10(3) Data Representativeness | MP-4.1 (Map): AI system impacts on individuals, groups, communities, and the environment are characterized. | Annex A.4 AI system impact assessment | Model | Fairness & Impact | Partial | The EU AI Act focuses on data quality; NIST and ISO emphasize broader societal impact characterization. |
| CFM-016 | Article 10(2)(f) Bias Examination | MP-5.1 (Map): AI system impacts on specific communities are assessed. | Annex A.4 AI system impact assessment | Model | Bias & Fairness | Full | All three frameworks require bias examination, though NIST and ISO are broader than data bias alone. |
| CFM-017 | Article 15(1) Accuracy | MS-1.1 (Measure): Appropriate methods and metrics for measuring AI risks are identified. | Clause 9.1 Monitoring and measurement | Evaluate | Performance Measurement | Full | All three frameworks require defined metrics for measuring AI system performance and risk. |
| CFM-018 | Article 15(2) Accuracy Metrics | MS-1.2 (Measure): AI systems are evaluated for trustworthy characteristics. | Annex A.7 AI system performance | Evaluate | Metrics & Reporting | Full | Strong alignment on the requirement to measure and report performance metrics. |
| CFM-019 | Article 9(5) Risk Testing | MS-2.1 (Measure): AI systems are tested for trustworthy characteristics. | Clause 8.1 Operational planning and control | Evaluate | Testing & Validation | Full | Testing is a core requirement across all three frameworks. |
| CFM-020 | Article 15(4) Adversarial Resilience | MS-2.2 (Measure): AI systems are evaluated for security and resilience. | Annex A.8 Information security for AI | Evaluate | Security & Resilience | Full | All three frameworks explicitly address adversarial resilience and cybersecurity for AI. |
| CFM-021 | Article 15(3) Robustness | MS-2.3 (Measure): AI system performance is evaluated in deployment context. | Annex A.7 AI system performance | Evaluate | Robustness | Full | Robustness requirements are well aligned across all frameworks. |
| CFM-022 | Article 11(1) Technical Documentation | MS-3.1 (Measure): AI risk measurement approaches are documented. | Clause 7.5 Documented information | Produce | Documentation | Full | Documentation requirements exist across all three frameworks, though the EU AI Act is the most prescriptive via Annex IV. |
| CFM-023 | Article 9(2)(c) Post-Market Risk Evaluation | MS-3.2 (Measure): AI risk measurements are integrated into organizational processes. | Clause 10.1 Continual improvement | Learn | Post-Market Monitoring | Full | All frameworks require ongoing measurement and improvement based on operational data. |
| CFM-024 | Article 10(2)(f) Bias Detection | MS-4.1 (Measure): Measurement approaches for AI system fairness are applied. | Annex A.5 AI data management | Evaluate | Fairness Measurement | Full | Bias measurement is a shared requirement; NIST provides the most detailed fairness metrics guidance. |
| CFM-025 | Article 9(2)(d) Risk Mitigation | MG-1.1 (Manage): AI risks based on assessments and other analytical output are prioritized and treated. | Clause 6.1.3 AI risk treatment | Model | Risk Treatment | Full | Risk treatment/mitigation is a universal requirement across all three frameworks. |
| CFM-026 | Article 14(1) Human Oversight Design | MG-2.1 (Manage): Strategies for maximizing AI benefits and minimizing negative impacts are planned and prepared. | Annex A.2 AI governance | Model | Human Oversight | Full | Human oversight is explicit in the EU AI Act and ISO 42001; NIST addresses it through governance strategies. |
| CFM-027 | Article 14(3)(a) Override Mechanisms | MG-2.2 (Manage): Mechanisms for human oversight and control are implemented. | Annex A.2 AI governance | Produce | Human Control | Full | Override and stop mechanisms are well aligned across frameworks. |
| CFM-028 | Article 12(1) Automatic Logging | MG-2.3 (Manage): Procedures for monitoring AI system behavior are established. | Clause 9.1 Monitoring and measurement | Produce | Logging & Monitoring | Full | Monitoring and logging are core requirements across all three frameworks. |
| CFM-029 | Article 13(1) Transparency Design | MG-3.1 (Manage): AI risks are communicated to relevant stakeholders. | Clause 7.4 Communication | Produce | Transparency | Full | Transparency and communication requirements are well aligned. |
| CFM-030 | Article 13(3)(a) Instructions for Use | MG-3.2 (Manage): Risk-relevant information is documented and shared with appropriate stakeholders. | Clause 7.5 Documented information | Learn | Documentation & Communication | Full | All three frameworks require comprehensive documentation shared with stakeholders. |
| CFM-031 | Article 9(8) Continuous Risk Updating | MG-4.1 (Manage): Post-deployment AI system monitoring plans are implemented. | Clause 10.1 Continual improvement | Learn | Continuous Improvement | Full | Continuous monitoring and improvement is a fundamental shared requirement. |
| CFM-032 | Article 12(4) Log Retention | MG-4.2 (Manage): Mechanisms for monitoring AI system performance are established. | Clause 7.5.3 Control of documented information | Evaluate | Record Retention | Partial | The EU AI Act prescribes a minimum retention period (six months); NIST and ISO require controls without specific durations. |
| CFM-033 | Article 10(1) Data Governance | MP-4.2 (Map): Measurement and evaluation of data quality is established. | Annex A.5 AI data management | Model | Data Quality | Full | Data quality requirements are a core element across all three frameworks. |
| CFM-034 | Article 10(2)(a) Data Collection Processes | MP-4.3 (Map): Data provenance and lineage are tracked. | Annex A.5 AI data management | Organize | Data Provenance | Full | Data provenance is explicitly required in all three frameworks. |
| CFM-035 | Article 10(5) Special Category Data | GV-6.2 (Govern): Policies and procedures for the handling of sensitive data are defined. | Annex A.5 AI data management | Organize | Data Privacy | Partial | The EU AI Act provides a specific derogation for bias detection; NIST and ISO take broader data sensitivity approaches. |
| CFM-036 | Article 13(3)(b) Intended Purpose | MP-1.2 (Map): The scope and constraints of the AI system are documented. | Clause 6.2 AI objectives and planning | Calibrate | Purpose Definition | Full | All frameworks require clear articulation of system purpose and scope. |
| CFM-037 | Article 13(3)(c) Risk Disclosure | MG-3.1 (Manage): AI risks are communicated to relevant stakeholders. | Clause 7.4 Communication | Learn | Risk Communication | Full | Risk disclosure is a shared obligation across all three frameworks. |
| CFM-038 | Article 50 Transparency for AI Interaction | MG-3.2 (Manage): Risk-relevant information is documented and shared with appropriate stakeholders. | Annex A.2 AI governance | Produce | User Notification | Partial | EU AI Act Art 50 is specific about AI interaction disclosure; NIST and ISO are more general. |
| CFM-039 | Article 13(3)(d) Human Oversight Documentation | MG-2.2 (Manage): Mechanisms for human oversight and control are implemented. | Annex A.2 AI governance | Produce | Oversight Documentation | Full | Human oversight documentation is required across all frameworks. |
| CFM-040 | Article 13(3)(e) Lifecycle Maintenance | MG-4.1 (Manage): Post-deployment AI system monitoring plans are implemented. | Clause 8.1 Operational planning and control | Learn | Lifecycle Management | Partial | The EU AI Act is specific about maintenance documentation; NIST and ISO address lifecycle management more broadly. |
| CFM-041 | Article 14(2) Human Intervention Capability | MG-2.1 (Manage): Strategies for maximizing AI benefits and minimizing negative impacts are planned. | Annex A.2 AI governance | Produce | Human Intervention | Full | Human intervention capability is required across all three frameworks. |
| CFM-042 | Article 14(3)(b) Real-time Monitoring | MG-4.1 (Manage): Post-deployment AI system monitoring plans are implemented. | Clause 9.1 Monitoring and measurement | Evaluate | Real-time Monitoring | Full | Real-time monitoring capability is shared across frameworks, though the EU AI Act is the most prescriptive. |
| CFM-043 | Article 14(4) Decision Contestation | MG-3.1 (Manage): AI risks are communicated to relevant stakeholders. | Annex A.2 AI governance | Learn | Redress & Contestation | Partial | The EU AI Act is most explicit on individual contestation rights; NIST and ISO address them through stakeholder engagement. |
| CFM-044 | Article 14(5) Biometric Dual Verification | No explicit requirement | No explicit requirement | Produce | Human Oversight | Gap | The EU AI Act biometric dual-verification requirement has no direct equivalent in NIST or ISO 42001. |
| CFM-045 | Article 12(2) Event Traceability | MS-3.1 (Measure): AI risk measurement approaches are documented. | Clause 9.2 Internal audit | Evaluate | Audit & Traceability | Partial | The EU AI Act focuses on technical traceability; ISO 42001 emphasizes management system audits. |
| CFM-046 | Article 12(3) Audit Trail for Risks | MG-2.3 (Manage): Procedures for monitoring AI system behavior are established. | Clause 9.1 Monitoring and measurement | Evaluate | Risk Monitoring | Full | Risk-based monitoring is well aligned across all three frameworks. |
| CFM-047 | Article 12(5) Authority Access | GV-1.1 (Govern): Legal and regulatory requirements are identified. | Clause 7.5.3 Control of documented information | Organize | Regulatory Access | Partial | The EU AI Act is specific about authority access; NIST and ISO address it through general compliance. |
| CFM-048 | Article 15(5) Cybersecurity | MS-2.2 (Measure): AI systems are evaluated for security and resilience. | Annex A.8 Information security for AI | Produce | Cybersecurity | Full | Cybersecurity is a shared requirement; ISO 42001 Annex A.8 provides the most detailed control framework. |
| CFM-049 | Article 15(5) Fail-safe Mechanisms | MG-2.1 (Manage): Strategies for maximizing AI benefits and minimizing negative impacts are planned. | Annex A.7 AI system performance | Produce | Resilience | Partial | The EU AI Act is most explicit about fail-safes; NIST and ISO address them through broader reliability requirements. |
| CFM-050 | Article 53(1)(a) GPAI Technical Documentation | GV-1.2 (Govern): Trustworthy AI characteristics are integrated into organizational policies. | Clause 7.5 Documented information | Produce | GPAI Documentation | Partial | GPAI documentation requirements are specific to the EU AI Act; NIST and ISO address documentation generally. |
| CFM-051 | Article 53(1)(b) GPAI Downstream Provider Information | MG-3.2 (Manage): Risk-relevant information is documented and shared with appropriate stakeholders. | Annex A.6 Supplier management | Learn | Supply Chain Transparency | Partial | GPAI supply chain transparency is specific to the EU AI Act; mapped to general supply chain requirements in the others. |
| CFM-052 | Article 53(1)(c) GPAI Copyright Compliance | No explicit requirement | No explicit requirement | Organize | Intellectual Property | Gap | EU AI Act copyright compliance for GPAI has no equivalent in NIST or ISO 42001. |
| CFM-053 | Article 53(1)(d) GPAI Training Data Summary | MP-4.3 (Map): Data provenance and lineage are tracked. | Annex A.5 AI data management | Organize | Data Transparency | Partial | The EU AI Act requires a public summary; NIST and ISO require internal documentation of data provenance. |
| CFM-054 | No explicit requirement | GV-2.2 (Govern): Mechanisms for organizational accountability are established. | Clause 4.2 Understanding needs and expectations of interested parties | Calibrate | Stakeholder Analysis | Partial | ISO 42001 stakeholder analysis is more comprehensive than the EU AI Act's provider-deployer focus. |
| CFM-055 | No explicit requirement | GV-3.1 (Govern): Decision-making related to AI risks is informed by a diverse team. | Clause 4.3 Scope of the AI management system | Calibrate | Scope Definition | Partial | ISO 42001 scope-setting is a management system requirement with no direct EU AI Act equivalent. |
| CFM-056 | No explicit requirement | No explicit requirement | Clause 5.1 Leadership and commitment | Calibrate | Leadership Commitment | Gap | The ISO 42001 leadership commitment clause has no direct equivalent in the EU AI Act or the NIST AI RMF. |
| CFM-057 | No explicit requirement | GV-5.1 (Govern): Organizational policies for the use of third-party AI systems are established. | Clause 7.1 Resources | Organize | Resource Planning | Partial | Resource planning is an ISO management system requirement; NIST addresses it indirectly. |
| CFM-058 | No explicit requirement | No explicit requirement | Clause 7.3 Awareness | Organize | Workforce Awareness | Gap | The ISO 42001 workforce awareness requirement has no direct equivalent in the EU AI Act or NIST. |
| CFM-059 | No explicit requirement | No explicit requirement | Clause 9.2 Internal audit | Evaluate | Internal Audit | Gap | ISO 42001 internal audit is a management system requirement; the EU AI Act uses conformity assessment (Art 43) instead. |
| CFM-060 | No explicit requirement | No explicit requirement | Clause 10.2 Nonconformity and corrective action | Learn | Corrective Action | Gap | The ISO 42001 corrective action process has no direct equivalent in the EU AI Act or the NIST AI RMF. |
| CFM-061 | No explicit requirement | MP-3.2 (Map): Potential for AI system emergent behavior is characterized. | No explicit requirement | Model | Emergent Behavior | Gap | NIST uniquely emphasizes emergent behavior characterization; it is not explicit in the EU AI Act or ISO 42001. |
| CFM-062 | No explicit requirement | MS-4.2 (Measure): Feedback and reporting mechanisms for AI risks are established. | Clause 7.4 Communication | Learn | Feedback Mechanisms | Partial | NIST emphasizes feedback loops; ISO covers them through communication requirements. |
| CFM-063 | Article 8 Compliance with Requirements | GV-1.1 (Govern): Legal and regulatory requirements are identified. | Clause 4.1 Understanding context | Calibrate | Regulatory Compliance | Full | The overarching compliance requirement maps well across all frameworks. |
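To query the matrix rather than read it row by row, each entry can be loaded into a small record type. A minimal Python sketch: the `MappingRow` structure, its field names, and the two rows transcribed below are illustrative choices, not part of any framework.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

# Same taxonomy as sketched in the Alignment Taxonomy section.
Alignment = Enum("Alignment", ["FULL", "PARTIAL", "COMPLEMENTARY", "GAP"])

@dataclass(frozen=True)
class MappingRow:
    """One row of the cross-framework matrix."""
    id: str                     # e.g. "CFM-001"
    eu_ai_act: Optional[str]    # article reference; None = no explicit requirement
    nist_ai_rmf: Optional[str]  # subcategory reference
    iso_42001: Optional[str]    # clause reference
    compel_stage: str           # Calibrate, Organize, Model, Produce, Evaluate, or Learn
    domain: str
    alignment: Alignment
    notes: str = ""

# Two rows transcribed from the table above; the full matrix has 63.
MATRIX = [
    MappingRow("CFM-001", "Article 9(1)", "GV-1.1", "Clause 4.1",
               "Calibrate", "Risk Management", Alignment.FULL),
    MappingRow("CFM-010", None, "GV-6.1", "Annex A.6",
               "Organize", "Vendor Risk", Alignment.GAP),
]
```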
How to Use Cross-Framework Mapping
1. Identify your primary obligation
Pick the framework that is legally binding or contractually required for your organization. Use it as the anchor column in the table.
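As a sketch of this step, continuing the illustrative `MappingRow` model above, an anchored view keeps only the rows where the chosen framework imposes an explicit requirement:

```python
def anchored_view(rows, anchor: str):
    """Filter the matrix to rows where the anchor framework has an explicit
    requirement. `anchor` is the attribute name of the anchor column:
    "eu_ai_act", "nist_ai_rmf", or "iso_42001"."""
    return [r for r in rows if getattr(r, anchor) is not None]

# e.g. anchored_view(MATRIX, "eu_ai_act") drops rows like CFM-010,
# where the EU AI Act column reads "No explicit requirement".
```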
2. Reuse evidence across frameworks
Where alignment is Full or Partial, the same artifacts, assessments, and control evidence can typically satisfy multiple frameworks with minor adaptation.
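A sketch of this step under the same illustrative model: for every Full or Partial row, collect the framework references that a single evidence artifact can plausibly cover.

```python
def reusable_evidence(rows):
    """Map each Full or Partial row ID to the framework references that the
    same artifact, assessment, or control evidence may satisfy."""
    reuse = {}
    for r in rows:
        if r.alignment.name in ("FULL", "PARTIAL"):
            refs = [ref for ref in (r.eu_ai_act, r.nist_ai_rmf, r.iso_42001) if ref]
            if len(refs) > 1:
                reuse[r.id] = refs
    return reuse
```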
3. Close gaps deliberately
For rows flagged as a Gap, plan compensating controls or additional evidence in the relevant COMPEL stage so the weaker framework does not become an audit risk.
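And a sketch of this final step, again building on the illustrative model above: group the Gap rows by COMPEL stage so compensating controls can be scheduled where they belong.

```python
from collections import defaultdict

def gap_report(rows):
    """Group Gap rows by COMPEL stage, listing the ID and domain of each
    uncovered requirement so compensating controls can be planned."""
    gaps = defaultdict(list)
    for r in rows:
        if r.alignment.name == "GAP":
            gaps[r.compel_stage].append((r.id, r.domain))
    return dict(gaps)

# With the two sample rows:
# gap_report(MATRIX) -> {"Organize": [("CFM-010", "Vendor Risk")]}
```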