This article describes the framework’s structure, the implementation roadmap that takes an organisation from current state to a working AI RMF program, and the relationship between AI RMF and other frameworks an organisation may already operate.
The Four Functions
Govern
The cross-cutting function that establishes the organisational culture, policies, processes, and procedures for AI risk management. Outcomes include written AI strategy, risk management policy, accountability structures, and the workforce competencies that enable the other three functions.
Map
The function that establishes the context and identifies risks. For each AI system, Map captures intended purpose, business value, beneficiaries, potentially affected groups, lawful basis, and the specific risks (technical, operational, human factors, ethical) the system raises.
Measure
The function that analyses and assesses risks using qualitative and quantitative methods. Measurement spans pre-deployment evaluation, ongoing monitoring, and explicit attention to socio-technical risks (bias, fairness, transparency, accountability, robustness).
Manage
The function that allocates resources to risks and implements treatments. Manage covers risk prioritisation, treatment selection (mitigate, transfer, accept, avoid), implementation, and monitoring of treatment effectiveness.
The four functions operate in a continuous cycle, not a linear sequence. Each function feeds the others; new evidence in Measure can re-trigger Map, and new context in Govern can re-prioritise Manage.
The Generative AI Profile
The AI RMF Generative AI Profile (NIST AI 600-1), available at https://airc.nist.gov/AI_RMF_Knowledge_Base/Playbook/GenAI_Profile, addresses the distinctive risks of generative systems including hallucination, harmful content generation, intellectual property exposure, malicious use, environmental impact, and the cascading risks of dependence on a small number of upstream model providers. Organisations deploying generative AI should treat the GenAI Profile as a mandatory companion to the base framework.
Implementation Roadmap: Year One
A first-year implementation roadmap typically progresses through four quarters of structured activity.
Quarter 1: Govern Foundation
- Charter the AI RMF program with executive sponsorship.
- Inventory AI systems currently in use or in development.
- Draft an AI risk management policy aligned with AI RMF expectations.
- Stand up the cross-functional AI governance forum (or reposition an existing committee).
- Identify named owners for the four functions.
- Conduct AI literacy assessments for the staff most directly accountable.
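The Quarter 1 inventory can be kept as structured records from the start, which makes the later quarters (selecting high-materiality systems, flagging GenAI Profile scope) straightforward queries. A minimal sketch in Python; the field names and materiality scale are illustrative assumptions, not AI RMF requirements:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the organisation-wide AI inventory (illustrative fields)."""
    system_id: str
    name: str
    owner: str                 # named accountable individual or function
    lifecycle_stage: str       # e.g. "development", "production", "retired"
    uses_generative_ai: bool   # flags systems in scope for the GenAI Profile
    materiality: str           # e.g. "high", "medium", "low"

inventory = [
    AISystemRecord("sys-001", "Credit scoring model", "Risk", "production", False, "high"),
    AISystemRecord("sys-002", "Support chatbot", "CX", "development", True, "medium"),
]

# Quarter 2 then selects the highest-materiality systems for Map workshops
map_candidates = [r for r in inventory if r.materiality == "high"]
```

Keeping the inventory as data rather than a document is what lets it serve as the single source of truth for vendor management, regulatory submissions, and finance later in the program.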
Quarter 2: Map for Highest-Materiality Systems
- Select the 5 to 10 highest-materiality AI systems for full Map workshops.
- Conduct Map workshops with cross-functional participation, capturing context, beneficiaries, affected parties, and identified risks.
- Document outputs in standard templates that scale.
- Surface initial risk themes that recur across systems for portfolio attention.
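The standardised, scalable templates mentioned above can be expressed as structured records with a validation step, so that every Map workshop produces comparable output. A hedged sketch; the fields and required-field set are assumptions for illustration:

```python
# One Map workshop output, captured as structured data (illustrative content)
map_record = {
    "system_id": "sys-001",
    "intended_purpose": "Score consumer credit applications",
    "beneficiaries": ["lending business unit", "applicants with thin files"],
    "affected_parties": ["declined applicants", "members of protected groups"],
    "lawful_basis": "legitimate interest (illustrative)",
    "identified_risks": [
        {"id": "R1", "category": "ethical", "description": "disparate impact by age band"},
        {"id": "R2", "category": "technical", "description": "training data drift"},
    ],
}

# A template "scales" when completeness is checked mechanically, not by review alone
REQUIRED_FIELDS = {
    "system_id", "intended_purpose", "beneficiaries",
    "affected_parties", "identified_risks",
}
missing = REQUIRED_FIELDS - map_record.keys()
```

Records in this shape also make it easy to surface the recurring risk themes mentioned above, simply by grouping `identified_risks` across systems.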
Quarter 3: Measure on Mapped Systems
- Define measurement plans for each Mapped system, covering performance, fairness, robustness, security, and privacy.
- Execute the defined measurement plans.
- Document results in model cards or equivalent (per Module 1.23).
- Identify measurement gaps that warrant tooling investment.
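The fairness dimension of a measurement plan can start very simply. The sketch below computes demographic parity difference, one of many possible fairness metrics; the metric choice, the two-group framing, and any acceptance threshold are assumptions that belong in the measurement plan, not properties of the AI RMF itself:

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute gap in positive-outcome rates between two groups.

    outcomes: list of 0/1 model decisions; groups: parallel list of group labels.
    """
    labels = sorted(set(groups))
    rates = []
    for g in labels:
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Illustrative run: group "a" approved 3 of 4, group "b" approved 1 of 4
gap = demographic_parity_difference(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

Results like `gap` are exactly the kind of figure that belongs in the model cards mentioned above, alongside the threshold the organisation has decided is acceptable.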
Quarter 4: Manage and Cycle Closure
- Prioritise risks based on Measure results.
- Apply risk treatments (technical mitigations, governance controls, compensating controls, acceptance with conditions, retirement).
- Document the treatments and the residual risk position.
- Conduct a year-end management review of the AI RMF program, including completed-cycle metrics and proposed second-year scope expansion.
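Prioritising risks from Measure results usually reduces to a scoring scheme. A minimal likelihood-times-impact sketch; the 1-to-5 scales and multiplicative scoring are a common convention assumed here, and many organisations use richer schemes:

```python
def prioritise(risks):
    """Rank risks by likelihood x impact on simple 1-5 scales (illustrative scheme)."""
    return sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True)

risks = [
    {"id": "R1", "likelihood": 4, "impact": 5},  # score 20
    {"id": "R2", "likelihood": 2, "impact": 3},  # score 6
    {"id": "R3", "likelihood": 5, "impact": 2},  # score 10
]
ranked = prioritise(risks)
```

The ranked list then drives treatment selection: the top entries get mitigation or compensating controls first, while low-scoring residual risks may be accepted with conditions and documented as such.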
Implementation Roadmap: Years Two and Three
The second year typically focuses on coverage expansion and process maturation.
- Extend Map and Measure coverage to all material AI systems (not just the initial 5 to 10).
- Automate measurement where feasible (continuous monitoring, automated bias scans, drift detection).
- Integrate AI RMF outputs with adjacent functions: model risk management, third-party risk management, privacy, security.
- Build out Generative AI Profile coverage as generative systems enter production.
- Expand workforce literacy beyond the initial accountable roles to broader staff.
The third year typically focuses on optimisation and external attestation.
- Pursue ISO 42001 certification (per the previous article) leveraging the AI RMF foundation.
- Engage with industry working groups and contribute to evolving best practice.
- Publish an external transparency report or AI governance summary.
- Drive measurement and management to higher levels of automation, freeing human attention for the harder judgement calls.
Integration with Other Frameworks
A well-implemented AI RMF program typically integrates with several other frameworks.
ISO/IEC 42001. The two frameworks are highly compatible. The NIST AI RMF crosswalk to ISO/IEC 42001 at https://airc.nist.gov/AI_RMF_Knowledge_Base/Crosswalks shows the mapping.
NIST Cybersecurity Framework. The CSF at https://www.nist.gov/cyberframework provides the security baseline; AI-specific extensions are needed but the foundational structure is shared.
NIST Privacy Framework. The Privacy Framework, at https://www.nist.gov/privacy-framework, addresses the privacy dimension that interacts with AI use of personal data.
EU AI Act. The AI RMF can serve as the operational backbone for EU AI Act conformity, with specific gaps (notified body, declaration of conformity, database registration) requiring additional treatment.
Sector-specific frameworks. Financial services (Federal Reserve SR 11-7), healthcare (FDA Software as a Medical Device), and others can be addressed within the AI RMF structure.
Sector-Specific Profiles
Beyond the Generative AI Profile, NIST and partner organisations have developed, or are developing, sector-specific profiles; guidance for relevant industries is available via the AI RMF Playbook at https://www.nist.gov/itl/ai-risk-management-framework/playbook. Organisations operating in regulated sectors should consult both the base framework and the applicable sector-specific profile.
Operational Practices
Single inventory. The AI inventory should be a single source of truth, used by AI RMF, vendor management, regulatory submissions, and finance.
Standardised templates. Map workshop outputs, Measure plans, and Manage records should follow standardised templates that scale across the program.
Measurement automation. Manual measurement does not scale. Automation investment in performance monitoring, fairness scanning, and drift detection produces compounding value.
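One common building block for automated drift detection is the population stability index (PSI), which compares a production feature or score distribution against a baseline. A sketch; the conventional PSI thresholds noted in the comment are industry heuristics, not AI RMF requirements:

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned distributions (lists of proportions summing to 1).

    Common heuristic: PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant.
    Assumes no bin proportion is zero; real pipelines smooth empty bins first.
    """
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Illustrative run: small shift of mass from the first bin to the last
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.20, 0.25, 0.25, 0.30]
psi = population_stability_index(baseline, current)
```

Run on a schedule against each material system, a check like this turns drift detection from a periodic manual exercise into a continuous control whose alerts feed back into Measure and Manage.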
Cross-functional staffing. AI RMF cannot be operated by a single function. The cross-functional model with named owners across data science, engineering, risk, legal, ethics, security, and business is essential.
Regular maturity review. The AI RMF program should be evaluated against its own maturity model annually, with the maturity output feeding investment decisions.
Common Failure Modes
The first is framework adoption without operational change — the AI RMF terminology is adopted but underlying practice does not improve. Counter by tying framework adoption to specific operational deliverables.
The second is coverage gaps — the framework is applied only to high-profile systems while the long tail of smaller systems escapes attention. Counter with portfolio-wide expectations even if depth varies.
The third is Generative AI under-attention — the GenAI Profile is treated as supplementary when in fact it addresses the highest-velocity risk surface. Counter by treating GenAI Profile coverage as a first-tier program priority.
The fourth is under-staffed measurement — Map and Manage are populated but Measure remains thin because measurement is hard. Counter with explicit measurement investment and tooling.
Looking Forward
Module 1.27 closes here. Module 1.28 turns to industry-specific AI patterns — the ways in which the universal frameworks discussed in this module manifest differently in financial services, healthcare, manufacturing, retail, and the public sector. Understanding the universals (this module) and the specifics (next module) together is what produces credible AI governance.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.