AITGP M9.2-Art01 v1.0 Reviewed 2026-04-06 Open Access
AITGP · Governance Professional

AI Governance RACI Matrix for Enterprises: Decision Rights Across 30 Activities and 12 Roles



COMPEL Body of Knowledge — Operating Model Series Cluster B Flagship Article — Decision Rights and Accountability


Why RACI for AI {#why}

Responsibility Assignment Matrices (RACI) have been a staple of enterprise governance for decades. Most large organizations already maintain an IT RACI, a data-governance RACI, and a project-management RACI. So a reasonable first instinct, when standing up AI governance, is to extend what already exists.

That instinct is wrong. Generic RACIs consistently fail when applied to AI programs, for four structural reasons.

1. AI accountability is plural. A traditional IT system has one owner — the CIO or an application owner — who is accountable for uptime, security, and change control. An AI system has at least four simultaneous owners: a business owner (accountable for the business outcome), a Chief AI Officer (accountable for the governance process), a Chief Risk Officer (accountable for residual risk posture), and a DPO or General Counsel (accountable for legal and regulatory conformance). Forcing a single “A” onto a matrix designed for IT misstates reality and creates liability gaps.

2. AI risks cross functional boundaries. Bias is a legal risk, a brand risk, and an ML engineering risk. Hallucination is a product risk, a customer-trust risk, and a data-quality risk. An autonomous agent taking a wrong action is simultaneously a safety, security, and compliance incident. A generic IT RACI does not have cells where Legal, Product, ML, and Security all sit together as Consulted parties — AI governance needs exactly that shape.

3. AI systems change continuously. A deployed model drifts. A prompt is updated weekly. A retrieval index is refreshed daily. Retraining pushes the system into new performance territory between formal release windows. RACIs built for “release once, operate steady-state” do not define who decides when drift crosses the threshold for re-approval, who authorizes a rollback, or who signs off on a retraining run. An AI RACI must cover monitoring decisions as first-class activities, not footnotes to deployment.

4. AI decision rights must survive an audit. ISO/IEC 42001 clause 5.3 and NIST AI RMF GOVERN 2 both require that roles and responsibilities be documented, communicated, and enforced. The EU AI Act Article 17 demands a quality management system that names the roles accountable for each requirement. A RACI that exists only in a slide deck will not pass certification audit. It must be versioned, signed, and traceable to the activities it governs.

A purpose-built AI governance RACI therefore:

  • Names at least a dozen distinct roles, most of which do not appear on a conventional IT RACI.
  • Decomposes the AI lifecycle into enough activities to make ambiguous hand-offs explicit.
  • Accommodates plural accountability at the program level while preserving the one-A rule: activities are split until each individual activity has exactly one accountable role.
  • Defines escalation thresholds so that decisions move up the chain when risk, budget, or regulatory exposure cross defined triggers.

The rest of this article gives you a reference matrix across 12 roles and 30 activities, the decision rights per COMPEL stage, the escalation thresholds, and a template-customization guide so you can ship this into your organization without starting from a blank page.

The 12 roles {#roles}

These are the roles that appear in a mature enterprise AI RACI. Small organizations collapse several into one person; regulated enterprises often split further. Names may vary — the function is what matters.

1. Chief AI Officer (CAIO). Owns the enterprise AI strategy, the AI policy framework, and the central governance process. The CAIO is the executive accountable for the overall AI program and typically chairs the AI governance council. In organizations without a CAIO, this function sits with the CIO, CDO, or CTO.

2. Chief Information Security Officer (CISO). Owns AI-specific security controls — model-weight protection, prompt-injection defense, adversarial testing, supply-chain security for model artifacts, and incident response for AI-driven breaches. Consulted on any AI system that touches sensitive data or external surfaces.

3. Data Protection Officer (DPO). Owns privacy conformance: GDPR lawful-basis determination, DPIA approval, data-subject rights handling, cross-border transfer controls, and regulator liaison for privacy incidents. Accountable for privacy-by-design sign-off on AI systems processing personal data.

4. General Counsel (GC). Owns legal risk across contracts, IP, regulatory interpretation, litigation exposure, and regulator liaison for non-privacy matters (FTC, sectoral regulators, EU AI Act supervisory authorities). Approves customer-facing disclosures and AI terms of service.

5. Center of Excellence (CoE) Lead. Operates the central AI governance, standards, and enablement team. Responsible for maintaining the AI playbook, the model registry, the training curriculum, and the pattern library. Does most of the Responsible (R) work on policy, standards, and review cadences.

6. Business Unit AI Owner. The executive or senior leader in the business unit where the AI system creates or destroys value. Accountable for the business case, benefit realization, residual risk acceptance, and business-continuity plan for each AI system in their portfolio. There are typically several Business Unit AI Owners across a large enterprise.

7. ML / Data Engineering Lead. Owns model building, data pipelines, feature stores, evaluation infrastructure, MLOps, and model-card production. Responsible for nearly every technical artifact. Works closely with the CoE on standards and with the Business Unit AI Owner on requirements.

8. Chief Risk Officer (CRO). Owns enterprise risk posture and risk appetite. Accountable for ensuring AI risks are captured on the enterprise risk register, aggregated across systems, and reported to the board. Approves risk-tier classifications and any exception that breaches risk appetite.

9. Internal Audit. Provides independent assurance over the AI governance program — both process conformance (is the policy being followed?) and control effectiveness (do the controls actually mitigate the stated risks?). Reports to the Audit Committee of the Board. Consulted during design; Responsible for audit execution.

10. Head of Compliance. Owns regulatory-obligation mapping, control-to-regulation traceability, attestations, and external reporting (EU AI Act registration, sectoral regulator filings, US state AI disclosures). Partners with DPO and GC on specific regulatory regimes.

11. Head of Product. Owns the user-facing expression of AI systems — disclosures, consent flows, feedback mechanisms, trust UX, accessibility, and customer communication during incidents. Accountable for user experience and brand risk of AI features.

12. Board AI Committee (or Audit/Risk Committee with AI remit). The board-level oversight body. Approves the AI policy, the risk appetite statement, and any AI system whose risk tier exceeds the threshold requiring board review. Receives quarterly AI portfolio and risk reports.

The 30 activities and RACI assignments {#matrix}

The matrix below organizes 30 core AI governance activities into six lifecycle phases. Each row shows the Responsible (R), Accountable (A), Consulted (C), and Informed (I) assignments. Each activity has exactly one A and usually one or two R’s. The full column list (roles) is CAIO · CISO · DPO · GC · CoE · BU · ML · CRO · IA · Comp · Prod · Board.
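These conventions are mechanical enough to lint. Below is a minimal Python sketch of one matrix row plus a per-row rule check; the `Activity` class, role codes, and `violations` helper are illustrative, not part of any standard tooling.

```python
from dataclasses import dataclass, field
from typing import List

# Role short codes from the column list above ("Audit Committee" appears
# only as the Accountable role for internal audit, activity 27).
ROLES = {"CAIO", "CISO", "DPO", "GC", "CoE", "BU", "ML",
         "CRO", "IA", "Comp", "Prod", "Board", "Audit Committee"}

@dataclass
class Activity:
    id: int
    name: str
    r: List[str]                                # Responsible: does the work
    a: str                                      # Accountable: exactly one
    c: List[str] = field(default_factory=list)  # Consulted
    i: List[str] = field(default_factory=list)  # Informed

def violations(act: Activity) -> List[str]:
    """Rule check for one row: R non-empty, A known, C and I disjoint."""
    out = []
    if not act.r:
        out.append(f"activity {act.id}: no Responsible role")
    if act.a not in ROLES:
        out.append(f"activity {act.id}: unknown Accountable role {act.a!r}")
    overlap = set(act.c) & set(act.i)
    if overlap:
        out.append(f"activity {act.id}: roles both Consulted and Informed: "
                   f"{sorted(overlap)}")
    return out

# Activity 18 (rollback decision) as it appears in the deployment table.
rollback = Activity(18, "Rollback decision", r=["ML", "BU"], a="BU",
                    c=["CAIO", "CISO", "CoE", "Prod"],
                    i=["CRO", "DPO", "GC", "IA", "Comp", "Board"])
assert violations(rollback) == []
```

Because `a` is a single string rather than a list, the one-A rule is enforced by construction; a fuller linter would also load all 30 rows and flag any role that appears nowhere.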

Strategy and policy (6 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 1 | AI strategy definition | CAIO, CoE | CAIO | CRO, GC, Comp, BU | Board, ML, Prod |
| 2 | AI policy approval | CoE | Board | CAIO, CRO, GC, DPO, Comp | CISO, IA, BU, ML, Prod |
| 3 | AI ethics principles | CoE, Prod | CAIO | GC, DPO, BU, external stakeholders | Board, CISO, CRO, Comp, IA, ML |
| 4 | Risk appetite statement | CRO | Board | CAIO, GC, Comp, BU | CoE, CISO, DPO, IA, ML, Prod |
| 5 | Use-case intake process | CoE | CAIO | CRO, GC, DPO, CISO, BU | Board, IA, Comp, ML, Prod |
| 6 | Board AI reporting cadence | CAIO, CoE | Board | CRO, IA, Comp | CISO, DPO, GC, BU, ML, Prod |

The board is Accountable for the AI policy and the risk appetite — those are governance instruments the board must own. The CAIO is Accountable for the strategy and reporting that operationalizes them.

Use-case gating (4 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 7 | Intake evaluation | CoE | CAIO | BU, DPO, GC, CISO | CRO, Comp, ML, Prod |
| 8 | Risk classification | CoE | CRO | DPO, GC, CISO, BU | CAIO, IA, Comp, ML, Prod |
| 9 | Gate 1 approval (concept) | CoE, BU | CAIO | CRO, DPO, GC, Comp | Board, CISO, IA, ML, Prod |
| 10 | Gate 2 approval (pre-build) | CoE, BU | CAIO | CISO, DPO, GC, ML, Comp | Board, CRO, IA, Prod |

The CRO owns risk classification because the tier determines enterprise risk treatment. The CAIO owns each gate — the governance body makes go/no-go decisions. For the highest risk tiers, accountability escalates: to the CAIO at Gate 1 for High or Prohibited systems, and to the Board before Gate 2 for Unacceptable systems (see escalation thresholds).

Data and model (5 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 11 | Dataset approval | ML | BU | DPO, CISO, CoE, GC | CAIO, CRO, IA, Comp, Prod |
| 12 | Data residency decisions | DPO | BU | CISO, GC, Comp, CoE | CAIO, CRO, IA, ML, Prod |
| 13 | Model selection | ML, CoE | BU | CISO, CAIO, Comp | CRO, DPO, GC, IA, Prod |
| 14 | Vendor model procurement | ML, CoE | BU | CISO, GC, DPO, Comp, CRO | CAIO, IA, Prod |
| 15 | Model card approval | ML | BU | CoE, DPO, GC | CAIO, CISO, CRO, IA, Comp, Prod |

The Business Unit AI Owner is Accountable for data and model decisions because those decisions determine the risk profile of the system they own. The DPO and CISO are Consulted on every one — not optional.

Deployment (4 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 16 | Pre-deployment review | CoE, ML | CAIO | CISO, DPO, GC, BU, CRO, Comp | Board, IA, Prod |
| 17 | Production release | ML, BU | BU | CoE, CISO, Prod | CAIO, CRO, DPO, GC, IA, Comp |
| 18 | Rollback decision | ML, BU | BU | CAIO, CISO, CoE, Prod | CRO, DPO, GC, IA, Comp, Board |
| 19 | HITL threshold setting | BU, ML | BU | CoE, GC, Comp, Prod | CAIO, CISO, DPO, CRO, IA |

Rollback is intentionally assigned to the Business Unit AI Owner. When a production AI system is misbehaving, the business owner has to balance continuity of service against risk exposure. Central governance can require rollback via the escalation path, but the day-to-day trigger sits with the owner who carries the business consequence.

Monitoring (4 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 20 | Performance threshold setting | ML, CoE | BU | CAIO, CRO, Prod | CISO, DPO, GC, IA, Comp |
| 21 | Anomaly investigation | ML | BU | CoE, CISO, DPO, Prod | CAIO, CRO, GC, IA, Comp |
| 22 | Drift decision (retrain/retire) | ML, BU | BU | CoE, CAIO, Comp | CRO, CISO, DPO, GC, IA, Prod |
| 23 | Monitoring dashboard ownership | CoE, ML | CAIO | BU, CRO, IA | CISO, DPO, GC, Comp, Prod, Board |

The dashboard is a central governance asset — the CAIO owns it. Individual thresholds and investigations are per-system — the Business Unit AI Owner owns those.

Incident (3 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 24 | Incident triage | ML, CISO | CAIO | BU, DPO, GC, CoE, Prod | CRO, IA, Comp, Board |
| 25 | Customer communication | Prod, GC | BU | CAIO, DPO, Comp | CISO, CRO, IA, ML, Board |
| 26 | Regulatory notification | Comp, DPO | GC | CAIO, CRO, CISO, BU | IA, ML, Prod, Board |

Regulatory notification is uniquely assigned to the General Counsel as Accountable because these filings carry legal and attorney-client privilege implications that can only sit with the GC. Privacy breaches specifically may shift A to the DPO depending on jurisdiction; document this explicitly if so.

Audit and review (4 activities)

| # | Activity | R | A | C | I |
| --- | --- | --- | --- | --- | --- |
| 27 | Internal audit | IA | Audit Committee | CAIO, CRO, Comp, CoE | CISO, DPO, GC, BU, ML, Prod |
| 28 | Certification audit (ISO 42001) | CoE, Comp | CAIO | IA, CRO, CISO, DPO, GC, BU, ML | Board, Prod |
| 29 | Annual policy review | CoE | CAIO | All roles | Board |
| 30 | Training record review | CoE | CAIO | HR, BU, IA, Comp | CISO, DPO, GC, CRO, ML, Prod |

Internal Audit reports functionally to the Audit Committee, not to the CAIO, which is why the A lies with the Audit Committee for activity 27. This independence is required by most corporate-governance standards and by ISO 42001 clause 9.2.

Decision rights per lifecycle stage {#decision-rights}

The matrix above maps to the six COMPEL stages. Use this as a quick mental model of who decides what, at which stage of an AI system’s life.

Calibrate. The CAIO and the Board set strategy, policy, and risk appetite. The CRO classifies risk. Intake decisions start here. Primary decision authority is central governance.

Organize. The CoE Lead operationalizes the policy into standards, RACIs, training, and the model registry. The CAIO approves these as they become live. Role: build the operating system.

Model. The Business Unit AI Owner, backed by the ML Engineering Lead, makes the decisions that define the system’s risk profile: dataset, model, vendor, architecture. The CoE, DPO, and CISO are Consulted on every material call. This is where Accountability shifts from central to business-unit.

Produce. The Business Unit AI Owner releases into production. The CAIO approves pre-deployment review. The ML Engineering Lead executes the release. Product owns user-facing disclosures. This stage has the most concurrent decision-makers — use escalation thresholds aggressively.

Evaluate. The ML Engineering Lead owns monitoring. The Business Unit AI Owner owns thresholds and responses. The CAIO owns the dashboard. Anomalies and drift are Business Unit decisions unless they breach escalation thresholds.

Learn. Incidents, audits, and policy revision. The CAIO is Accountable for incidents at the program level, while the Business Unit AI Owner is Accountable for system-level customer communication. The Audit Committee is Accountable for independent audit. Findings feed the annual policy review.

Escalation thresholds {#escalation}

Decisions do not stay at the default RACI level when the stakes change. The matrix below defines when decision rights escalate.

| Trigger | Moves A from | Moves A to | Timing |
| --- | --- | --- | --- |
| Risk tier classified as High or Prohibited (EU AI Act Articles 5–6 or internal tiering) | Business Unit AI Owner | CAIO (+ Board notification) | At Gate 1 |
| Risk tier classified as Unacceptable | CAIO | Board (go/no-go vote) | Before Gate 2 |
| Budget request exceeds $X (org-specific, often $1M per system or $5M per program) | Business Unit AI Owner | CAIO + CFO | At funding decision |
| AI system processes special-category personal data (Article 9 GDPR) | Business Unit AI Owner | DPO + Business Unit AI Owner (joint A) | At intake |
| AI system in regulated industry use (medical device, credit decision, employment) | Business Unit AI Owner | GC + Business Unit AI Owner (joint A) + sector regulator notification | At intake |
| Incident with customer harm, regulator notification, or press exposure | Business Unit AI Owner | CAIO + GC (+ Board within 24h) | On triage |
| Material model change (new base model, new training data class, new deployment region) | ML Engineering Lead | Re-trigger Gate 2 with CAIO as A | Before change |
| Risk appetite breach (aggregated across AI portfolio) | CRO | Board (risk appetite revision or portfolio rebalance) | At quarterly review |
| Certification nonconformity (ISO 42001 major nonconformity) | CoE | CAIO + Audit Committee | Within audit cycle |
| AI vendor incident affecting your deployed system | Business Unit AI Owner | CAIO + CISO + GC (joint triage) | On notification |

Escalation thresholds are not optional overlays on top of the RACI — they are part of the RACI. Document the triggers, the new Accountable role, and the required timing. Train every named role on them. Rehearse via tabletop exercises at least annually.
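One way to keep the triggers enforceable rather than decorative is to encode them as predicates over a system's attributes. A hedged Python sketch, with hypothetical field names and the $1M figure from the table standing in for your own thresholds:

```python
# Each entry: (trigger predicate, new Accountable role(s), required timing).
# Field names ("risk_tier", "budget_usd", ...) are illustrative.
ESCALATIONS = [
    (lambda s: s.get("risk_tier") in {"high", "prohibited"},
     "CAIO (+ Board notification)", "at Gate 1"),
    (lambda s: s.get("risk_tier") == "unacceptable",
     "Board (go/no-go vote)", "before Gate 2"),
    (lambda s: s.get("budget_usd", 0) > 1_000_000,
     "CAIO + CFO", "at funding decision"),
    (lambda s: s.get("special_category_data", False),
     "DPO + BU Owner (joint A)", "at intake"),
]

def escalations_for(system: dict):
    """Return (new_accountable, timing) for every trigger that fires."""
    return [(role, timing) for fires, role, timing in ESCALATIONS
            if fires(system)]

# A high-risk HR screener touching GDPR Article 9 data fires two triggers.
hr_screener = {"risk_tier": "high", "budget_usd": 250_000,
               "special_category_data": True}
assert len(escalations_for(hr_screener)) == 2
```

Evaluating this at intake and again at each gate turns the escalation table into a gate-checklist input instead of a document someone must remember to consult.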

Template usage guidance {#template}

Do not copy this matrix verbatim. Customize for your organization using these steps.

Step 1 — Rename roles to match your org chart. If you do not have a CAIO, decide whether the function lives with the CIO, CDO, CTO, or a committee. Name the actual role. Similarly for DPO (some organizations have a Privacy Office instead), Product (some have separate Product and CX), and Board committee (Audit, Risk, Technology, or dedicated AI).

Step 2 — Add or merge activities. If your organization has distinct gates (for example, a separate Gate 0 for feasibility or a Gate 3 for post-deployment re-authorization), split the activity accordingly. If you operate in a single regulated industry, add activities for sector-specific filings (medical device submissions, credit-model validation memos). If you are small, merge strategy and policy into a single activity with the Board as A.

Step 3 — Walk each row with the named role. Do not publish a RACI by fiat. Sit with each named role and confirm: (a) they understand the activity, (b) they accept the assignment, (c) they have the resources and authority to execute it, (d) they know the escalation triggers. Any disagreement is a signal the matrix needs revision — or the role needs restructuring.

Step 4 — Version and sign. Store the RACI as a controlled document. Require sign-off by each named role. Version with each revision. Link the RACI to the AI policy (which references it) and the ISO 42001 management-system documentation (clause 5.3 requires role assignment evidence).

Step 5 — Test via tabletop. Run at least two tabletop exercises per year. Typical scenarios: a hallucination incident with customer harm, a drift event crossing a performance threshold, a regulator request for EU AI Act documentation, a vendor model deprecation. At each scenario, ask: who decides? Who is informed? What escalates? Update the matrix based on gaps found.

Step 6 — Align to measurement. Every activity should have a metric — completion rate, timeliness, review quality. If an activity cannot be measured, reconsider whether it belongs on the RACI or whether it is aspirational rather than operational.

COMPEL stage mapping {#compel-mapping}

| COMPEL stage | RACI activities (IDs) | Primary accountable roles |
| --- | --- | --- |
| Calibrate | 1, 4, 5, 6, 7, 8 | CAIO, Board, CRO |
| Organize | 2, 3, 9, 10 | Board, CAIO |
| Model | 11, 12, 13, 14, 15 | Business Unit AI Owner |
| Produce | 16, 17, 18, 19 | CAIO, Business Unit AI Owner |
| Evaluate | 20, 21, 22, 23 | Business Unit AI Owner, CAIO |
| Learn | 24, 25, 26, 27, 28, 29, 30 | CAIO, BU, GC, Audit Committee |

The pattern is clear: central governance dominates the early stages (Calibrate, Organize). Business units dominate the middle (Model, Produce, Evaluate). Central governance, the GC, and the Audit Committee dominate the closing stage (Learn). The RACI formalizes this arc.
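The stage mapping is also easy to sanity-check mechanically: every activity ID from 1 to 30 should appear in exactly one stage. A small Python check over the table above:

```python
STAGE_ACTIVITIES = {
    "Calibrate": [1, 4, 5, 6, 7, 8],
    "Organize":  [2, 3, 9, 10],
    "Model":     [11, 12, 13, 14, 15],
    "Produce":   [16, 17, 18, 19],
    "Evaluate":  [20, 21, 22, 23],
    "Learn":     [24, 25, 26, 27, 28, 29, 30],
}

all_ids = [i for ids in STAGE_ACTIVITIES.values() for i in ids]
# Fails if any activity is unmapped or mapped to two stages.
assert sorted(all_ids) == list(range(1, 31))
```

The same check is worth re-running after Step 2 of the template guide, whenever activities are added, split, or merged.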

Evidence artifacts {#evidence}

A functioning RACI produces these artifacts, which serve as role-assignment evidence under ISO 42001 clause 5.3 and NIST AI RMF GOVERN 2:

  • The signed RACI document itself, with version history.
  • Role descriptions for each of the 12 roles, including authority, resources, and success criteria.
  • Training records showing each named role has completed AI governance training appropriate to their activities.
  • Meeting minutes from the governance forums where the RACI is referenced (AI council, gate reviews, board AI committee).
  • Escalation log — every time a decision was escalated, which trigger fired, who became accountable, and the outcome.
  • Annual review minutes showing the RACI was reviewed and updated.
  • Tabletop exercise reports showing the RACI was tested.
  • Exception log — any deviation from the RACI, with reason and approval.

Retain these for the life of the AI program plus regulatory retention periods — typically ten years for EU AI Act high-risk systems and seven years for most financial-services regulators.

Metrics {#metrics}

Track the RACI’s effectiveness with these measures:

  • RACI coverage — percentage of in-scope AI activities that have a named R and A. Target 100%.
  • Role coverage — percentage of named roles that have a current, signed acceptance. Target 100%.
  • Escalation latency — median time from trigger to new Accountable role being engaged. Target under 24 hours for incidents; under 5 business days for risk-appetite breaches.
  • Gate throughput — number of AI systems passing each gate per quarter, split by time-in-gate. Rising time-in-gate signals bottleneck at a named role.
  • Decision traceability — percentage of gate decisions with documented Consulted-role input and Accountable-role sign-off. Target 100%.
  • Tabletop findings closure rate — percentage of tabletop-identified gaps closed within agreed timeframe.
  • Training compliance — percentage of named roles current on AI governance training. Target 100%.
  • Audit findings on role clarity — number of internal or certification audit findings citing unclear roles. Target zero.
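Most of these measures can be computed directly from the evidence artifacts. As one worked example, escalation latency falls out of the escalation log; the timestamps below are invented for illustration.

```python
from datetime import datetime, timedelta
from statistics import median

# Escalation log entries: (trigger fired, new Accountable role engaged).
log = [
    (datetime(2026, 3, 1, 9, 0),   datetime(2026, 3, 1, 14, 30)),
    (datetime(2026, 3, 8, 22, 0),  datetime(2026, 3, 9, 10, 0)),
    (datetime(2026, 3, 20, 11, 0), datetime(2026, 3, 21, 16, 0)),
]

latencies = [engaged - fired for fired, engaged in log]
med = median(latencies)  # timedelta; the incident target is under 24 hours
assert med <= timedelta(hours=24), "incident escalation target breached"
```

The remaining targets (coverage, traceability, training compliance) reduce to ratios over the signed RACI, gate records, and training records, and need no more than a spreadsheet.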

Risks if skipped {#risks}

Organizations that skip the RACI step routinely encounter:

  • Accountability gaps. An incident occurs, and no one is clearly accountable. The investigation stalls while roles are debated. Regulators and customers interpret the delay as obfuscation.
  • Duplicated decision rights. Two roles believe they have the final say (for example, the CIO and the CAIO, or the DPO and the GC). Decisions loop or are made twice with different outcomes.
  • Hidden single points of failure. The ML Engineering Lead is the de facto R on everything, including decisions they should not make. When that person leaves, the program stalls.
  • Audit findings. ISO 42001 clause 5.3 and NIST AI RMF GOVERN 2.1 both require documented, communicated, enforced role assignments. A missing or stale RACI is a standard audit finding.
  • Regulatory exposure. The EU AI Act Article 17 requires named accountable persons in the quality management system. A vague RACI creates personal legal exposure for senior executives and enterprise exposure under the Act.
  • Slow escalation. Without thresholds, an incident that should escalate to the board in 24 hours instead loops through email threads for a week. The organization loses both the window to act and the trust of customers.
  • Board dissatisfaction. The board cannot discharge its oversight duty if it does not know which decisions it owns. An AI RACI that names Board-accountable activities clarifies the board’s agenda.

References {#references}

  • ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. iso.org/standard/81230.html. Clause 5.3 (Roles, responsibilities, and authorities) and Annex A.3.2.
  • NIST AI Risk Management Framework 1.0. nist.gov/itl/ai-risk-management-framework. GOVERN 2 (Roles, responsibilities, and communications).
  • EU AI Act (Regulation 2024/1689). eur-lex.europa.eu. Article 17 (Quality management system) and Article 26 (Obligations of deployers).
  • OECD AI Principles. oecd.org/going-digital/ai/principles. Accountability principle as the basis for named-role governance.
  • COBIT 2019 — Responsibility Assignment Matrix guidance — ISACA. General RACI construction methodology, adapted here for AI specifics.
  • Singapore Model AI Governance Framework (PDPC). pdpc.gov.sg. Section on internal governance structures.

How to cite

COMPEL FlowRidge Team. (2026). “AI Governance RACI Matrix for Enterprises: Decision Rights Across 30 Activities and 12 Roles.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-b1-ai-governance-raci-matrix-for-enterprises/

Frequently Asked Questions

Why can't we reuse our existing IT or data-governance RACI for AI?
Generic IT RACIs assume static systems, deterministic behavior, and a single accountable CIO. AI systems drift, behave probabilistically, and carry risks (bias, hallucination, autonomy) that span legal, ethical, safety, and security domains. An AI RACI must split accountability across a CAIO, CISO, DPO, CRO, and business owner — a distinction most legacy RACIs never had to draw.
Who should be "Accountable" for a high-risk AI system going into production?
The Business Unit AI Owner is Accountable for the business outcome and residual risk acceptance. The Chief AI Officer is Accountable for the governance process that approved it. The Board AI Committee is Accountable for enterprise-level risk appetite. Only one "A" per activity — the matrix keeps these distinct.
How many roles is too many on an AI RACI?
If more than five roles appear for any single activity, the activity is probably too coarse — break it into sub-activities. If fewer than three appear, you are likely missing a Consulted or Informed party (legal, risk, or security is almost always at least Consulted on material AI decisions).
Does the RACI change when the AI system is vendor-supplied versus built in-house?
Yes. For vendor models, Procurement and Third-Party Risk Management become Responsible for vendor due diligence, and the ML Engineering Lead shifts from builder to integrator. The Accountable role (Business Unit AI Owner) does not change — accountability for business outcome follows the use case, not the supply path.
How often should the RACI be reviewed?
At minimum annually, aligned with AI policy review. Trigger-based reviews are also required on any of: new regulation (EU AI Act milestone, state law), material incident, new AI system class (agentic, foundation model), or reorganization that changes a named role. Version the matrix and retain prior versions for audit.