COMPEL Body of Knowledge — Regulatory Bridge Series · Cluster A Flagship Article: Dual-Compliance Crosswalk
Why a crosswalk matters {#why}
Organizations that take AI governance seriously rarely get to pick one standard. US-headquartered enterprises typically align to the NIST AI Risk Management Framework (AI RMF 1.0) because it is increasingly cited in federal procurement and state-level AI legislation. The same enterprises, when they sell into the European Union or operate in regulated industries, simultaneously need to demonstrate conformance to ISO/IEC 42001:2023 — the first certifiable AI management system standard — because it is emerging as the presumed-conformance path for EU AI Act high-risk obligations.
Running the two frameworks as separate programs creates three problems:
- Duplicate evidence collection. A single model card is written once for NIST MAP 2.1 and then rewritten for ISO 42001 Annex A.8.2.
- Conflicting governance rhythms. NIST AI RMF playbook activities are continuous; ISO 42001 internal audits and management reviews are periodic. Without a unified cadence, teams context-switch between the two.
- Fragmented accountability. Risk owners often end up with two risk registers — one structured by NIST AI RMF outcomes, one structured by ISO 42001 clauses — containing the same risks expressed differently.
A crosswalk solves this by making the two frameworks addressable from the same operating model. One evidence artifact satisfies both. One control generates both sets of proof. One review cycle keeps both current.
The crosswalk, at a glance {#crosswalk}
| NIST AI RMF function | ISO 42001 clause(s) | ISO 42001 Annex A control(s) | Shared evidence artifact |
|---|---|---|---|
| GOVERN 1 — Context and strategy | 4.1 Context · 5.1 Leadership · 5.2 AI Policy | A.2.2, A.2.3 | AI governance charter · Policy statement |
| GOVERN 2 — Roles and responsibilities | 5.3 Roles · 7.2 Competence | A.3.2, A.4.2 | RACI matrix · Competency register |
| GOVERN 3 — Accountability | 5.1 Leadership · 9.3 Management review | A.2.4 | Board AI committee minutes |
| GOVERN 4 — Culture of risk | 7.3 Awareness · 7.4 Communication | A.4.3 | AI literacy program records |
| GOVERN 5 — Stakeholder engagement | 4.2 Interested parties (partial) | — (supplemental) | Stakeholder register + engagement log |
| GOVERN 6 — Third-party risk | 8.3 System impact | A.10.2, A.10.3 | Supplier AI risk assessment |
| MAP 1 — AI system context | 6.1.4 Impact assessment | A.5.2 | AI System Impact Assessment (AIIA) |
| MAP 2 — Model and data characteristics | 8.2 System design · 8.4 Data | A.6.2, A.7.2, A.8.2 | Model card · Data sheet |
| MAP 3 — Benefits, costs, risks | 6.1 Risk · 6.1.2 Criteria | A.5.3 | AI risk register entry |
| MAP 4 — Impacts on individuals | 6.1.4 Impact assessment | A.5.2 | Fundamental-rights impact assessment |
| MAP 5 — Purpose limits | 8.2 System design | A.6.2.2 | AI system purpose specification |
| MEASURE 1 — Evaluation plans | 8.1 Operations | A.6.2.5 | AI system test and evaluation plan |
| MEASURE 2 — Trustworthy characteristics | 8.1, 9.1 | A.6.2.5, A.9.2 | Fairness, robustness, explainability tests |
| MEASURE 3 — Recurring tracking | 9.1 Monitoring | A.9.2 | Performance monitoring dashboard |
| MEASURE 4 — Feedback | 9.1, 10.1 | — (supplemental) | User / stakeholder feedback log |
| MANAGE 1 — Risk prioritization | 6.1.3 Risk treatment | A.5.4 | Risk treatment plan |
| MANAGE 2 — Strategies to mitigate | 8.1, 10.1 | A.6.2.6 | AI system change log |
| MANAGE 3 — Third-party risk response | 8.3 | A.10.3 | Supplier AI governance agreement |
| MANAGE 4 — Residual risk and incidents | 10.1 Nonconformity | A.6.2.8 | AI incident register |
Nineteen categories across four NIST functions map to seven ISO clauses and thirty-eight Annex A controls. The table above compresses that mapping into its most usable form: one artifact per row that auditors working against either framework accept.
How to operate the crosswalk {#operate}
1. Pick ISO 42001 as the backbone
ISO 42001 is a management-system standard. It defines how an organization operates — the plan-do-check-act loop, the roles, the policy hierarchy, the review cadence. NIST AI RMF is a catalog of outcomes — what an organization must achieve, without prescribing how. Running ISO 42001 as the backbone creates a stable operating model. NIST AI RMF activities are then scheduled inside the management system: MAP activities inside clause 6.1.4 impact assessment, MEASURE activities inside clause 9.1 monitoring, MANAGE activities inside clause 10.1 nonconformity handling.
2. Turn each shared artifact into a template
Instead of writing two model cards — one for NIST, one for ISO — write one template that satisfies both. A good model-card template includes:
- Intended purpose and out-of-scope uses (NIST MAP 5.1 · ISO A.6.2.2)
- Training data sources and governance (NIST MAP 2.2 · ISO A.7.2)
- Evaluation methodology and results (NIST MEASURE 2.x · ISO A.6.2.5)
- Known limitations and failure modes (NIST MANAGE 2.1 · ISO A.6.2.8)
- Responsible owner and escalation contact (NIST GOVERN 2 · ISO A.3.2)
Auditors of either framework find the information they need in the same document. Your teams maintain it once.
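The dual-tagged template can be enforced in tooling rather than left to convention. The sketch below models it as a Python dataclass — a minimal illustration, not a normative schema; the class, field names, and `missing_fields` helper are hypothetical, while the framework tags in the comments come from the bullet list above.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """One model card serving both frameworks (field names are illustrative)."""
    system_name: str
    intended_purpose: str                 # NIST MAP 5.1 · ISO A.6.2.2
    out_of_scope_uses: list[str]          # NIST MAP 5.1 · ISO A.6.2.2
    training_data_sources: list[str]      # NIST MAP 2.2 · ISO A.7.2
    evaluation_results: dict[str, float]  # NIST MEASURE 2.x · ISO A.6.2.5
    known_limitations: list[str]          # NIST MANAGE 2.1 · ISO A.6.2.8
    owner: str                            # NIST GOVERN 2 · ISO A.3.2
    escalation_contact: str               # NIST GOVERN 2 · ISO A.3.2

    def missing_fields(self) -> list[str]:
        """Return names of empty fields, so gaps surface before a gate review."""
        return [name for name, value in vars(self).items() if not value]
```

A completeness check like `missing_fields` is what makes a single template auditable: the same gap report works whether the reviewer is reading NIST categories or ISO Annex A controls.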
3. Schedule NIST activities inside ISO cadences
| ISO 42001 cadence | NIST activities embedded |
|---|---|
| Daily operations | MEASURE 3 — recurring monitoring |
| Monthly review | MANAGE 1, MANAGE 2 — risk prioritization and treatment updates |
| Quarterly internal audit | GOVERN 1, GOVERN 2 — policy and role refresh; MAP 1–5 spot-checks |
| Annual management review | GOVERN 3 — accountability review; MEASURE 1, MEASURE 4 — evaluation plan and feedback loop refresh |
| Per AI-system gate review | MAP 1–5, MEASURE 1–3, MANAGE 1–4 for that specific system |
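The cadence table above can be carried directly into scheduling tooling as a lookup keyed by ISO cadence. This is a minimal sketch under stated assumptions: the cadence keys and `activities_due` function are hypothetical names, while the activity codes mirror the table.

```python
# ISO 42001 cadence -> embedded NIST AI RMF activities (mirrors the table above).
CADENCE_PLAN: dict[str, list[str]] = {
    "daily": ["MEASURE 3"],
    "monthly": ["MANAGE 1", "MANAGE 2"],
    "quarterly_audit": ["GOVERN 1", "GOVERN 2", "MAP 1-5 spot-checks"],
    "annual_review": ["GOVERN 3", "MEASURE 1", "MEASURE 4"],
    "gate_review": ["MAP 1-5", "MEASURE 1-3", "MANAGE 1-4"],
}

def activities_due(cadence: str) -> list[str]:
    """Return the NIST activities scheduled inside a given ISO cadence."""
    return CADENCE_PLAN.get(cadence, [])
```

Keeping the plan as data rather than prose means a calendar or ticketing integration can generate the NIST activity checklist for each ISO review without a second schedule.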
4. Use one risk register with dual keys
Every AI risk entry carries two tags: a NIST AI RMF function/category (e.g., MANAGE 1.3) and an ISO 42001 control (e.g., A.5.4). A single report filter produces either a NIST-style profile or an ISO Annex A status report. The risk register becomes the single source of truth.
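The dual-key idea above can be sketched in a few lines: each entry carries both tags, and two filters produce the two report views. The entry structure, sample data, and filter functions below are hypothetical illustrations, assuming the tag formats given in the text (e.g. "MANAGE 1.3", "A.5.4").

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One risk, recorded once, tagged for both frameworks."""
    risk_id: str
    description: str
    nist_category: str  # e.g. "MANAGE 1.3"
    iso_control: str    # e.g. "A.5.4"
    owner: str
    status: str         # "open" | "treated" | "accepted"

# Illustrative sample register.
REGISTER = [
    RiskEntry("R-001", "Training data drift", "MEASURE 3.1", "A.9.2", "ml-ops", "open"),
    RiskEntry("R-002", "Unassessed supplier model", "MANAGE 3.1", "A.10.3", "procurement", "treated"),
]

def nist_profile(register: list[RiskEntry], function: str) -> list[RiskEntry]:
    """NIST-style view: all entries under one function (GOVERN/MAP/MEASURE/MANAGE)."""
    return [r for r in register if r.nist_category.startswith(function)]

def iso_status(register: list[RiskEntry], control_prefix: str) -> list[RiskEntry]:
    """ISO Annex A status view for one control family, e.g. 'A.10'."""
    return [r for r in register if r.iso_control.startswith(control_prefix)]
```

Because both views are filters over the same records, the two reports cannot diverge — which is exactly the "single source of truth" property the text calls for.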
5. Map to COMPEL stages
COMPEL’s six stages (Calibrate, Organize, Model, Produce, Evaluate, Learn) already align with both frameworks. The crosswalk extends COMPEL’s existing stage-to-standard maps by collapsing NIST and ISO into shared activities per stage.
| COMPEL stage | NIST AI RMF focus | ISO 42001 focus | Shared output |
|---|---|---|---|
| Calibrate | GOVERN 1 · MAP 1, MAP 3 | 4.1, 6.1 | Baseline risk profile |
| Organize | GOVERN 2, GOVERN 3, GOVERN 4 | 5.3, 7.2, 7.3 | RACI, competency register, AI policy |
| Model | MAP 2, MAP 4, MAP 5 | 8.2, 8.4, A.6–A.8 | Model and data documentation |
| Produce | MEASURE 1, MEASURE 2 | 8.1, A.6.2.5 | Evaluation and deployment evidence |
| Evaluate | MEASURE 3, MEASURE 4 | 9.1, 9.2 | Monitoring outputs, audit findings |
| Learn | MANAGE 1–4 | 10.1, 10.2 | Incident register, corrective actions |
Evidence artifacts the crosswalk produces {#evidence}
A dual-compliance operating model yields these core artifacts. Each one, with the relevant clauses and categories cited, satisfies the corresponding requirements of both frameworks:
- AI governance charter (board-approved)
- AI policy statement
- Role and competency register (RACI + skills matrix)
- AI system inventory with risk classification
- AI System Impact Assessment per system
- Model card and data sheet per system
- Evaluation and testing plan and results
- Monitoring dashboard and alert thresholds
- AI risk register (dual-keyed)
- Incident register and post-incident review records
- Internal audit program and reports
- Management review minutes
- Supplier AI risk assessments and agreements
- Change log per AI system
- User and stakeholder feedback log
- AI literacy / training records
Retain each artifact for at least the life of the AI system plus three years, or per local retention obligations (the EU AI Act requires ten years after last placement on market for high-risk systems).
Metrics {#metrics}
Dual-compliance programs report a single set of metrics that serves both frameworks:
- Percentage of AI systems with complete dual-framework artifacts
- Percentage of MAP/MEASURE/MANAGE activities completed on schedule
- Number of internal audit findings, split by root cause (process, people, tooling)
- Time to close nonconformities (clause 10.1)
- Supplier AI risk coverage — percentage of AI vendors under active governance
- Feedback-loop response time — time between user report and risk register entry
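The first metric in the list — artifact completeness — can be computed straight off the system inventory. A minimal sketch, assuming a hypothetical inventory shape of system name mapped to the set of artifact kinds on file; the artifact keys are illustrative shorthand, not a prescribed taxonomy.

```python
# Illustrative required-artifact set per AI system.
REQUIRED_ARTIFACTS = {"impact_assessment", "model_card", "eval_plan", "risk_entry"}

def artifact_completeness(inventory: dict[str, set[str]]) -> float:
    """Percentage of AI systems whose artifacts cover every required kind."""
    if not inventory:
        return 0.0
    complete = sum(
        1 for artifacts in inventory.values() if REQUIRED_ARTIFACTS <= artifacts
    )
    return 100.0 * complete / len(inventory)
```

The same pattern (a required set, a coverage filter, a percentage) applies to the supplier-coverage and on-schedule metrics as well.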
Risks if skipped {#risks}
Running the frameworks independently exposes the organization to:
- Evidence drift — two sources of truth diverge; auditors find contradictions.
- Double cost — teams produce parallel artifacts; governance overhead doubles.
- Gap risk — ISO 42001 only partially covers GOVERN 5 (stakeholder engagement) and MEASURE 4 (feedback loops); without a crosswalk, these supplemental activities fall between the cracks.
- Certification delay — ISO 42001 certification audits flag missing controls that a crosswalk would have caught months earlier.
- Procurement loss — US federal and some state contracts require NIST AI RMF alignment; EU contracts increasingly require ISO 42001. Missing either closes markets.
Related standards and references {#references}
- NIST AI Risk Management Framework 1.0 — nist.gov/itl/ai-risk-management-framework. Referenced throughout for GOVERN/MAP/MEASURE/MANAGE functions and categories.
- ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system — iso.org/standard/81230.html. Clauses 4–10 and Annex A controls A.1–A.10.
- EU AI Act (Regulation 2024/1689) — eur-lex.europa.eu. Article 17 (quality management system), whose obligations an ISO 42001-conformant management system is widely expected to help demonstrate.
- NIST AI RMF Playbook — airc.nist.gov/AI_RMF_Knowledge_Base/Playbook. Suggested activities per category.
- ISO/IEC 23894:2023 — AI risk management guidance — iso.org/standard/77304.html. Risk-management reference cited in ISO 42001.
Related COMPEL articles
- ISO 42001 Implementation Using COMPEL
- NIST AI RMF Alignment with COMPEL Stages
- Industry Standards for Agentic AI — ISO, NIST, and Emerging Frameworks
- Building EU AI Act Evidence Portfolios
How to cite
COMPEL FlowRidge Team. (2026). “NIST AI RMF to ISO 42001 Crosswalk: A Dual-Compliance Operating Map.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-a1-nist-ai-rmf-iso-42001-crosswalk/