COMPEL Certification Body of Knowledge — Module 2.6: Industry-Specific Applications
Article 15 of 15
The NIST AI Risk Management Framework (AI RMF) 1.0, published by the National Institute of Standards and Technology in January 2023, is the most influential voluntary AI governance framework in the United States. While not legally binding, the AI RMF is increasingly referenced by federal agencies, state regulators, and industry bodies as the expected standard for responsible AI risk management. Organizations operating in or doing business with the United States, or those seeking to demonstrate alignment with internationally recognized best practices, benefit from NIST AI RMF adoption.
This article provides a detailed mapping of the NIST AI RMF’s four functions — GOVERN, MAP, MEASURE, and MANAGE — to the COMPEL lifecycle stages, with practical guidance for using COMPEL to operationalize each NIST recommendation.
NIST AI RMF Architecture
The NIST AI RMF is organized around four core functions, each containing categories and subcategories:
GOVERN: The cross-cutting function that establishes the organizational context for AI risk management. It addresses policies, processes, roles, training, culture, and stakeholder engagement. GOVERN is not sequential — it operates continuously across the entire AI lifecycle.
MAP: The contextual function that identifies and characterizes AI system risks. It covers understanding intended purpose, categorizing systems, assessing impacts, and documenting knowledge limits. MAP establishes what needs to be managed.
MEASURE: The evaluation function that applies quantitative and qualitative methods to assess AI risks and system performance. It covers testing, fairness evaluation, security assessment, and ongoing measurement.
MANAGE: The operational function that treats identified and measured risks. It covers deployment decisions, risk treatment, incident response, post-deployment monitoring, and continual improvement.
The AI RMF also includes the AI RMF Playbook, which provides suggested actions and references for each subcategory. The Playbook is non-normative — it offers guidance rather than requirements.
Function-to-Stage Mapping
GOVERN → All COMPEL Stages (Cross-Cutting)
The GOVERN function is explicitly cross-cutting in the NIST framework, and it maps across the entire COMPEL lifecycle. However, different GOVERN subcategories have primary alignment with specific COMPEL stages:
GOVERN 1 (Policies, Processes, Procedures, and Practices) aligns primarily with the Organize stage. This is where the organization establishes the governance infrastructure that NIST calls for:
- GOVERN 1.1 (Legal and regulatory requirements): During the Organize stage, build and maintain a regulatory register that catalogs all applicable AI laws, regulations, standards, and guidelines. Map each requirement to organizational policies. Assign regulatory monitoring responsibilities. Review the register quarterly and after significant regulatory developments.
- GOVERN 1.2 (Trustworthy AI in policies): During the Organize stage, integrate trustworthy AI characteristics — validity, reliability, safety, security, resilience, accountability, transparency, explainability, interpretability, privacy, and fairness — into organizational policies. This does not mean creating a separate AI policy for each characteristic; it means ensuring existing policies address these characteristics in the AI context.
- GOVERN 1.3 (Processes for determining AI effects): During the Calibrate stage, establish processes for impact determination. NIST emphasizes that these processes should evaluate effects on “individuals, groups, communities, organizations, and society.” Use the COMPEL impact assessment framework, which covers all these levels.
- GOVERN 1.4 (Risk management process integration): During the Organize stage, integrate AI risk management into the enterprise risk management framework. Avoid creating a standalone AI risk program that operates in isolation from enterprise risk governance.
- GOVERN 1.5 (Ongoing monitoring and improvement): During the Evaluate and Learn stages, implement the monitoring cadence and review processes that keep risk management current.
- GOVERN 1.6 (Mechanisms to supersede/deactivate): During the Produce stage, implement circuit breakers, kill switches, and override mechanisms for all deployed AI systems. Test these mechanisms at defined intervals.
- GOVERN 1.7 (Decommissioning processes): During the Learn stage, establish retirement procedures that address data disposition, stakeholder notification, and transition planning.
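The override mechanisms called for in GOVERN 1.6 can be pictured as a gate that every model invocation passes through. The `KillSwitch` class below is a hypothetical illustration of that idea, not a NIST-prescribed or COMPEL-prescribed design; the class name, method names, and audit-log shape are all invented for this sketch.

```python
from datetime import datetime, timezone

class KillSwitch:
    """Hypothetical deactivation gate checked before every model invocation."""

    def __init__(self, system_id: str):
        self.system_id = system_id
        self.active = True
        self.audit_log: list[dict] = []

    def deactivate(self, actor: str, reason: str) -> None:
        # Supersede/deactivate the system, keeping an audit trail of who and why.
        self.active = False
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "reason": reason,
        })

    def guard(self, predict_fn, *args, **kwargs):
        # Refuse to serve predictions once the switch has been thrown.
        if not self.active:
            raise RuntimeError(f"{self.system_id}: deactivated by override")
        return predict_fn(*args, **kwargs)

switch = KillSwitch("credit-scoring-v2")
switch.guard(lambda x: x * 2, 21)  # normal operation
switch.deactivate(actor="risk-officer", reason="drift threshold breached")
```

Testing the mechanism at defined intervals, as the bullet above recommends, then becomes a routine drill: throw the switch in a staging environment and verify that `guard` refuses to serve.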
GOVERN 2 (Roles and Responsibilities) aligns primarily with the Organize stage:
- GOVERN 2.1 (Documented roles and responsibilities): Create the RACI matrix during Organize that maps every AI governance activity to responsible roles. NIST emphasizes that these must be “clear to individuals and teams throughout the organization” — not just documented but understood.
- GOVERN 2.2 (Training): Develop and deliver role-based AI risk management training during Organize. NIST specifically includes partners, not just employees, in its training scope.
GOVERN 3 (Workforce Diversity) aligns with Organize and Calibrate:
- GOVERN 3.1 (Diverse teams): NIST’s emphasis on diverse team composition for AI risk decisions maps to COMPEL’s Calibrate stage (assembling assessment teams) and Organize stage (structuring governance committees). Diversity here means demographic diversity, domain expertise diversity, and user experience diversity.
- GOVERN 3.2 (Third-party policies): Policies for managing AI risks from third-party software and data map to the Organize stage (policy development) and Calibrate stage (vendor assessment).
GOVERN 4 (Organizational Culture) aligns with Evaluate and Learn:
- GOVERN 4.1 (Testing and incident practices): Establish testing methodologies and incident management practices during Evaluate and Learn. NIST emphasizes “information sharing” — organizations should learn from incidents across the AI ecosystem, not just their own.
- GOVERN 4.2 (Feedback and appeals): Implement feedback channels and appeal mechanisms during Produce and Learn. NIST’s emphasis on enabling AI actors to “report and address inconsistent performance” maps to COMPEL’s continuous improvement cycle.
GOVERN 5 (External Stakeholder Engagement) aligns with Evaluate and Learn:
- GOVERN 5.1 (External feedback integration): During Evaluate, establish channels for external stakeholders — users, affected communities, civil society organizations — to provide feedback on AI system impacts. During Learn, integrate that feedback into governance improvements.
GOVERN 6 (Workforce Impact) aligns with Organize and Calibrate:
- GOVERN 6.1 (Workforce-related AI risks): During Calibrate, assess how AI deployment will impact the workforce. During Organize, develop change management plans, retraining programs, and transition support.
MAP → Calibrate and Model Stages
The MAP function establishes what risks exist and what their characteristics are. This aligns primarily with COMPEL’s Calibrate stage (initial context setting) and Model stage (detailed characterization):
MAP 1 (Context Establishment) maps to the Calibrate stage:
- MAP 1.1 (Intended purpose and context): The starting point of every COMPEL Calibrate cycle is understanding what the AI system is for, who will use it, how it will be deployed, and what the expected benefits and costs are. Document this in the purpose statement that initiates the COMPEL lifecycle.
- MAP 1.2 (Interdisciplinary actors): Assemble diverse teams for the Calibrate assessment. NIST emphasizes that the team establishing context should reflect “demographic diversity and broad domain and user experience expertise.”
MAP 2 (AI System Characterization) maps to the Model stage:
- MAP 2.1 (Scientific integrity): During the Model stage, document the scientific basis for AI method selection. Justify why the chosen approach (decision tree, neural network, ensemble method, large language model) is appropriate for the task. This is not a formality — it ensures that teams are not defaulting to trendy methods without considering alternatives.
- MAP 2.2 (Knowledge limits): During the Model stage, characterize what the AI system cannot do. Document performance boundaries, failure modes, and edge cases. This is one of NIST’s most important contributions — the explicit requirement to document limitations, not just capabilities.
- MAP 2.3 (System categorization): During Calibrate and Model, apply risk categorization to determine governance intensity. NIST’s categorization requirement aligns with COMPEL’s risk-tiered approach.
MAP 3 (Data Characterization) maps to the Model stage:
- MAP 3.4 (Data fitness): During the Model stage, define and measure data quality criteria: representativeness (does the data reflect the target population?), relevance (is the data pertinent to the task?), accuracy (is the data correct?), and integrity (is the data complete and unaltered?).
- MAP 3.5 (Privacy-enhanced techniques): During the Model stage, evaluate whether privacy-enhancing technologies — differential privacy, federated learning, synthetic data generation, homomorphic encryption — are appropriate for the use case.
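Two of the MAP 3.4 data fitness criteria lend themselves to automated checks. The sketch below computes integrity (completeness of required fields) and representativeness (gap between observed and target group shares) over simple dict records; the function name, record shape, and group field are illustrative assumptions, not COMPEL or NIST requirements. Accuracy and relevance usually require ground truth or domain review and are omitted here.

```python
def data_fitness_report(records, target_population_share, required_fields):
    """Illustrative MAP 3.4 checks over a list of dict records."""
    total = len(records)
    # Integrity: share of records where every required field is non-null.
    complete = sum(
        all(r.get(f) is not None for f in required_fields) for r in records
    )
    # Representativeness: largest gap between an observed group's share
    # of the data and that group's share of the target population.
    observed: dict = {}
    for r in records:
        g = r.get("group")
        observed[g] = observed.get(g, 0) + 1
    rep_gap = max(
        abs(observed.get(g, 0) / total - share)
        for g, share in target_population_share.items()
    )
    return {"integrity": complete / total, "max_representation_gap": rep_gap}
```

Thresholds for acceptable values would be set per system during the Model stage, scaled to the system's risk tier.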
MAP 5 (Impact Assessment) maps to Calibrate and Model:
- MAP 5.1 (Likelihood and magnitude): During Calibrate and Model, conduct quantitative risk assessments for each identified impact. NIST encourages reference to “past uses of similar systems” and “public incident reports” — the COMPEL risk assessment methodology includes benchmarking against AI incident databases.
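MAP 5.1's likelihood-and-magnitude assessment is often operationalized as a simple risk matrix. A minimal sketch, assuming hypothetical 1-to-5 ordinal scales for both dimensions (the risk names and scores below are invented examples):

```python
def prioritize(risks: dict[str, tuple[int, int]]) -> list[tuple[str, int]]:
    """Rank identified impacts by likelihood x magnitude, highest first."""
    scored = {name: l * m for name, (l, m) in risks.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

ranking = prioritize({
    "disparate denial rates": (4, 5),  # (likelihood, magnitude), each 1-5
    "training data leakage": (2, 5),
    "model drift": (3, 3),
})
# ranking[0] is the highest-scoring risk: ("disparate denial rates", 20)
```

Benchmarking against AI incident databases, as the bullet above notes, is one way to ground the likelihood estimates rather than guessing them.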
MEASURE → Evaluate Stage
The MEASURE function is NIST’s evaluation and testing function, mapping directly to COMPEL’s Evaluate stage:
MEASURE 1 (Risk Measurement) maps to the Evaluate stage:
- MEASURE 1.1 (Measurement approaches): During Evaluate, select and apply appropriate risk measurement methods. NIST does not prescribe specific methods — it requires that “appropriate methods and metrics are identified and applied.” COMPEL’s Evaluate stage provides structured assessment frameworks that organizations can customize.
MEASURE 2 (System Evaluation) maps to the Evaluate stage:
- MEASURE 2.1 (Performance evaluation): During Evaluate, test AI system performance using held-out test sets, structured evaluations, and where appropriate, external assessments. NIST emphasizes that evaluation rigor should be “in accordance with the AI system’s risk levels” — higher risk systems require more rigorous testing.
- MEASURE 2.3 (Security and resilience): During Evaluate, conduct security testing including adversarial robustness evaluation. Test system behavior under adversarial inputs, edge cases, and stress conditions.
- MEASURE 2.6 (Fairness assessment): During Evaluate, test for fairness concerns across protected characteristics. Implement bias mitigation strategies for identified disparities. Document methodology, findings, and remediation. This is one of the most operationally challenging NIST requirements and benefits from COMPEL’s structured evaluation framework.
- MEASURE 2.7 (Validity and reliability): During Evaluate, validate AI system outputs against ground truth and assess reliability across conditions. Define acceptable thresholds and re-evaluation triggers.
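Fairness testing under MEASURE 2.6 typically starts with per-group selection rates. The sketch below computes a disparate impact ratio; the function names are invented for this example, and the 0.8 ("four-fifths") threshold mentioned in the comment is a heuristic from US employment guidance, used here only as an illustrative review trigger, not a definitive fairness test.

```python
def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs -> per-group positive rate."""
    totals, positives = {}, {}
    for group, selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    # Ratio of the protected group's selection rate to the reference group's.
    # A ratio below 0.8 (the "four-fifths" heuristic) is a common, though
    # not definitive, trigger for deeper review and mitigation.
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]
```

Documenting the metric chosen, the groups compared, and the remediation taken — as the MEASURE 2.6 bullet requires — matters as much as the number itself.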
MEASURE 3 (Risk Tracking) bridges Evaluate and Learn:
- MEASURE 3.1 (Tracking existing and emergent risks): Maintain a risk tracking register that captures existing risks, monitors for emergent risks, and identifies trends. Update during Evaluate cycles and continuously during Learn.
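The risk tracking register described in MEASURE 3.1 can be as simple as a status-tracked collection of entries with history. A minimal sketch, with hypothetical field and status names chosen for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    category: str                  # e.g. "existing" or "emergent"
    status: str = "open"           # open | mitigating | accepted | closed
    history: list = field(default_factory=list)

class RiskRegister:
    """Minimal register: log risks, update status, list what remains open."""

    def __init__(self):
        self._entries: dict = {}

    def log(self, entry: RiskEntry) -> None:
        self._entries[entry.risk_id] = entry

    def update(self, risk_id: str, status: str, note: str) -> None:
        # Record the prior status so trend analysis remains possible.
        e = self._entries[risk_id]
        e.history.append((e.status, note))
        e.status = status

    def open_risks(self) -> list:
        return [e for e in self._entries.values() if e.status != "closed"]
```

Keeping history on each entry is what makes the trend identification called for in MEASURE 3.1 possible later.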
MEASURE 4 (Deployed System Measurement) bridges Evaluate and Learn:
- MEASURE 4.1 (Post-deployment measurement): Establish ongoing measurement approaches for deployed systems. This bridges the Evaluate stage (establishing measurement frameworks) and the Learn stage (executing continuous measurement).
MANAGE → Produce and Learn Stages
The MANAGE function treats identified risks and maintains systems post-deployment, mapping to COMPEL’s Produce stage (deployment decisions and controls) and Learn stage (ongoing management):
MANAGE 1 (Risk Treatment) maps to the Produce stage:
- MANAGE 1.1 (Deployment decisions): During Produce, implement go/no-go decision gates. NIST frames this as “a determination as to whether the AI system achieves its intended purpose” — the deployment decision should be explicit, documented, and based on defined readiness criteria.
- MANAGE 1.2 (Risk prioritization): During Produce, prioritize risk treatment actions based on the risk assessments from Calibrate and Model. Implement mitigation controls before deployment. Document risk acceptance decisions with appropriate authority.
MANAGE 2 (Resources and Alternatives) maps to the Produce stage:
- MANAGE 2.1 (Resources for risk management): During Produce, allocate resources for ongoing risk management. NIST uniquely emphasizes consideration of “viable non-AI alternative systems” — if a non-AI approach can achieve the objective with less risk, it should be considered.
- MANAGE 2.2 (Value sustainability): During Produce and Learn, implement mechanisms to sustain the value of deployed AI systems over time.
MANAGE 3 (Incidents and Errors) maps to the Learn stage:
- MANAGE 3.1 (Third-party risk monitoring): During Learn, continuously monitor risks from third-party AI components. Apply and document risk controls. Update assessments when vendors release updates or when vulnerabilities are discovered.
MANAGE 4 (Post-Deployment) maps to the Learn stage:
- MANAGE 4.1 (Post-deployment monitoring): During Learn, implement comprehensive post-deployment monitoring. NIST specifies that monitoring should include “mechanisms for capturing and evaluating input from users and other relevant AI actors” — not just technical metrics.
- MANAGE 4.2 (Continual improvement): During Learn, integrate measurable improvement activities into AI system updates. NIST emphasizes “regular engagement with interested parties” as part of improvement — improvement should be informed by stakeholder feedback, not just internal metrics.
US-Specific Compliance Considerations
Federal Agency Requirements
While the NIST AI RMF is voluntary, several federal actions have elevated its practical importance:
- Executive Order 14110 (October 2023, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”) directs federal agencies to align their AI governance with the NIST AI RMF. Organizations providing AI systems to federal agencies must demonstrate NIST alignment.
- OMB Memorandum M-24-10 (March 2024) requires federal agencies to implement AI governance practices aligned with the AI RMF for AI systems that impact rights or safety.
- Federal Acquisition Regulation (FAR) updates are expected to incorporate NIST AI RMF alignment as a contractor requirement for AI-related procurements.
For organizations in the federal supply chain, NIST AI RMF alignment through COMPEL is not just best practice — it is increasingly a business requirement.
State-Level AI Regulation
US states are increasingly enacting AI-specific legislation. Colorado’s AI Act (SB 24-205), effective February 2026, requires “reasonable care” in deploying high-risk AI systems, with specific requirements around impact assessments, disclosure, and risk management that align with NIST AI RMF recommendations. Similar legislation has been proposed or enacted in Connecticut, Illinois, Texas, and other states.
COMPEL’s harmonization approach is particularly valuable in the US context because state regulations vary in scope and specificity but consistently align with NIST principles. An organization implementing NIST through COMPEL is well-positioned for state compliance.
Sector-Specific Regulators
US sector regulators are incorporating AI governance expectations aligned with the NIST AI RMF:
- Financial services: The OCC, FDIC, Federal Reserve, and CFPB have issued guidance on AI risk management that references NIST principles. SR 11-7 (model risk management) is being interpreted to cover AI/ML models.
- Healthcare: The FDA’s framework for AI/ML-based Software as a Medical Device (SaMD) aligns with NIST’s lifecycle approach to risk management.
- Employment: The EEOC’s guidance on AI in employment decisions references bias assessment and transparency principles consistent with NIST MEASURE 2.6.
Organizations using COMPEL to implement the NIST AI RMF can extend their compliance to sector-specific requirements by adding the sector-specific layer on top of the NIST foundation.
Operationalizing NIST through COMPEL
Building the NIST Self-Assessment
NIST’s AI RMF Playbook provides suggested actions for each subcategory that organizations can use as the basis for a maturity self-assessment. COMPEL operationalizes this self-assessment through the Evaluate stage:
- Map each NIST subcategory to the COMPEL activities that address it
- Evaluate the maturity of each COMPEL activity (initial, managed, defined, quantitatively managed, optimizing)
- Identify gaps where COMPEL activities do not fully address NIST subcategories
- Prioritize gap remediation based on organizational risk priorities
- Document the self-assessment results and improvement plan
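The mapping and gap-identification steps above can be sketched as a small maturity comparison. The subcategory labels, assigned levels, and target level below are illustrative assumptions, not a real assessment:

```python
# Maturity levels from the self-assessment, ordered lowest to highest.
MATURITY = ["initial", "managed", "defined", "quantitatively managed", "optimizing"]

def find_gaps(assessment: dict[str, str], target: str = "defined") -> list[str]:
    """Return subcategories whose mapped COMPEL activity falls below target."""
    floor = MATURITY.index(target)
    return sorted(
        sub for sub, level in assessment.items() if MATURITY.index(level) < floor
    )

gaps = find_gaps({
    "GOVERN 1.1": "defined",
    "MAP 2.2": "initial",
    "MEASURE 2.6": "managed",
})
# gaps lists the subcategories needing remediation: ["MAP 2.2", "MEASURE 2.6"]
```

Ranking the resulting gaps by organizational risk priority then completes the prioritization step before the improvement plan is documented.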
Creating NIST-Aligned Governance Artifacts
Each NIST function maps to specific governance artifacts that the COMPEL lifecycle produces:
GOVERN artifacts: AI policy, RACI matrix, regulatory register, training program, stakeholder engagement plan, feedback and appeal mechanisms, workforce impact assessment, third-party risk policy.
MAP artifacts: Purpose statement, context of use document, system categorization record, data fitness assessment, knowledge limits documentation, impact assessment, privacy analysis.
MEASURE artifacts: Test strategy, performance benchmarks, fairness evaluation report, security assessment, validity and reliability analysis, risk tracking register, post-deployment measurement plan.
MANAGE artifacts: Deployment readiness assessment, go/no-go decision record, risk treatment plan, resource allocation plan, incident response procedures, post-deployment monitoring plan, improvement log.
Demonstrating NIST Alignment to Stakeholders
Unlike ISO 42001, there is no formal NIST AI RMF certification. Organizations demonstrate alignment through:
- Self-assessment documentation: Publish or share the completed self-assessment showing maturity levels across all subcategories
- Governance artifact portfolio: Maintain a repository of governance artifacts organized by NIST function
- Continuous improvement evidence: Show how the organization has improved its NIST alignment over time
- Third-party validation: Engage independent assessors to review NIST alignment (not certification, but expert opinion)
- Customer and partner communication: Include NIST alignment statements in proposals, contracts, and public-facing AI governance disclosures
Connecting NIST to the Broader Harmonization Framework
For organizations implementing multiple frameworks through COMPEL, the NIST AI RMF occupies a strategic position:
- It shares 48 requirements with the EU AI Act (the highest bilateral overlap)
- It shares 38 requirements with ISO 42001
- It covers 20 of the OECD’s 22 requirements
- It aligns with 30 of Singapore’s 35 requirements
An organization that has implemented NIST through COMPEL has addressed more than two-thirds of the requirements of every other major framework. This makes NIST an excellent starting point for organizations building a multi-framework compliance program — particularly US-based organizations for whom NIST is the primary reference.
Key Takeaways
The NIST AI Risk Management Framework provides the most comprehensive voluntary guidance for AI risk management available. Its four-function architecture (GOVERN, MAP, MEASURE, MANAGE) maps cleanly to the COMPEL lifecycle: GOVERN spans all stages as a cross-cutting function, MAP aligns with Calibrate and Model, MEASURE aligns with Evaluate, and MANAGE aligns with Produce and Learn.
For US-based organizations, NIST AI RMF alignment is increasingly expected by federal agencies, state regulators, and sector-specific authorities. COMPEL provides the operational methodology to move from NIST’s principles and guidance to implemented, evidence-producing governance activities. And through the COMPEL harmonization approach, NIST implementation creates a foundation that extends naturally to EU AI Act compliance, ISO 42001 certification, and alignment with international frameworks.