COMPEL Body of Knowledge — Operating Model Series · Cluster B Flagship Article: Decision Rights and Accountability
Why RACI for AI {#why}
Responsibility Assignment Matrices (RACI) have been a staple of enterprise governance for decades. Most large organizations already maintain an IT RACI, a data-governance RACI, and a project-management RACI. So a reasonable first instinct, when standing up AI governance, is to extend what already exists.
That instinct is wrong. Generic RACIs consistently fail when applied to AI programs, for four structural reasons.
1. AI accountability is plural. A traditional IT system has one owner — the CIO or an application owner — who is accountable for uptime, security, and change control. An AI system has at least four simultaneous owners: a business owner (accountable for the business outcome), a Chief AI Officer (accountable for the governance process), a Chief Risk Officer (accountable for residual risk posture), and a DPO or General Counsel (accountable for legal and regulatory conformance). Forcing a single “A” onto a matrix designed for IT misstates reality and creates liability gaps.
2. AI risks cross functional boundaries. Bias is a legal risk, a brand risk, and an ML engineering risk. Hallucination is a product risk, a customer-trust risk, and a data-quality risk. An autonomous agent taking a wrong action is simultaneously a safety, security, and compliance incident. A generic IT RACI does not have cells where Legal, Product, ML, and Security all sit together as Consulted parties — AI governance needs exactly that shape.
3. AI systems change continuously. A deployed model drifts. A prompt is updated weekly. A retrieval index is refreshed daily. Retraining pushes the system into new performance territory between formal release windows. RACIs built for “release once, operate steady-state” do not define who decides when drift crosses the threshold for re-approval, who authorizes a rollback, or who signs off on a retraining run. An AI RACI must cover monitoring decisions as first-class activities, not footnotes to deployment.
4. AI decision rights must survive an audit. ISO/IEC 42001 clause 5.3 and NIST AI RMF GOVERN 2 both require that roles and responsibilities be documented, communicated, and enforced. The EU AI Act Article 17 demands a quality management system that names the roles accountable for each requirement. A RACI that exists only in a slide deck will not pass certification audit. It must be versioned, signed, and traceable to the activities it governs.
A purpose-built AI governance RACI therefore:
- Names at least a dozen distinct roles, most of which do not appear on a conventional IT RACI.
- Decomposes the AI lifecycle into enough activities to make ambiguous hand-offs explicit.
- Resolves plural accountability at the program level by splitting activities until each one has a single accountable role, so no row ever needs more than one “A”.
- Defines escalation thresholds so that decisions move up the chain when risk, budget, or regulatory exposure cross defined triggers.
The rest of this article gives you a reference matrix across 12 roles and 30 activities, the decision rights per COMPEL stage, the escalation thresholds, and a template-customization guide so you can ship this into your organization without starting from a blank page.
The 12 roles {#roles}
These are the roles that appear in a mature enterprise AI RACI. Small organizations collapse several into one person; regulated enterprises often split further. Names may vary — the function is what matters.
1. Chief AI Officer (CAIO). Owns the enterprise AI strategy, the AI policy framework, and the central governance process. The CAIO is the executive accountable for the overall AI program and typically chairs the AI governance council. In organizations without a CAIO, this function sits with the CIO, CDO, or CTO.
2. Chief Information Security Officer (CISO). Owns AI-specific security controls — model-weight protection, prompt-injection defense, adversarial testing, supply-chain security for model artifacts, and incident response for AI-driven breaches. Consulted on any AI system that touches sensitive data or external surfaces.
3. Data Protection Officer (DPO). Owns privacy conformance: GDPR lawful-basis determination, DPIA approval, data-subject rights handling, cross-border transfer controls, and regulator liaison for privacy incidents. Accountable for privacy-by-design sign-off on AI systems processing personal data.
4. General Counsel (GC). Owns legal risk across contracts, IP, regulatory interpretation, litigation exposure, and regulator liaison for non-privacy matters (FTC, sectoral regulators, EU AI Act supervisory authorities). Approves customer-facing disclosures and AI terms of service.
5. Center of Excellence (CoE) Lead. Operates the central AI governance, standards, and enablement team. Responsible for maintaining the AI playbook, the model registry, the training curriculum, and the pattern library. Does most of the Responsible (R) work on policy, standards, and review cadences.
6. Business Unit AI Owner. The executive or senior leader in the business unit where the AI system creates or destroys value. Accountable for the business case, benefit realization, residual risk acceptance, and business-continuity plan for each AI system in their portfolio. There are typically several Business Unit AI Owners across a large enterprise.
7. ML / Data Engineering Lead. Owns model building, data pipelines, feature stores, evaluation infrastructure, MLOps, and model-card production. Responsible for nearly every technical artifact. Works closely with the CoE on standards and with the Business Unit AI Owner on requirements.
8. Chief Risk Officer (CRO). Owns enterprise risk posture and risk appetite. Accountable for ensuring AI risks are captured on the enterprise risk register, aggregated across systems, and reported to the board. Approves risk-tier classifications and any exception that breaches risk appetite.
9. Internal Audit. Provides independent assurance over the AI governance program — both process conformance (is the policy being followed?) and control effectiveness (do the controls actually mitigate the stated risks?). Reports to the Audit Committee of the Board. Consulted during design; Responsible for audit execution.
10. Head of Compliance. Owns regulatory-obligation mapping, control-to-regulation traceability, attestations, and external reporting (EU AI Act registration, sectoral regulator filings, US state AI disclosures). Partners with DPO and GC on specific regulatory regimes.
11. Head of Product. Owns the user-facing expression of AI systems — disclosures, consent flows, feedback mechanisms, trust UX, accessibility, and customer communication during incidents. Accountable for user experience and brand risk of AI features.
12. Board AI Committee (or Audit/Risk Committee with AI remit). The board-level oversight body. Approves the AI policy, the risk appetite statement, and any AI system whose risk tier exceeds the threshold requiring board review. Receives quarterly AI portfolio and risk reports.
The 30 activities and RACI assignments {#matrix}
The matrix below organizes 30 core AI governance activities into six lifecycle phases. Each row shows the Responsible (R), Accountable (A), Consulted (C), and Informed (I) assignments. Each activity has exactly one A and usually one or two R’s. The full column list (roles) is CAIO · CISO · DPO · GC · CoE · BU · ML · CRO · IA · Comp · Prod · Board.
Strategy and policy (6 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 1 | AI strategy definition | CAIO, CoE | CAIO | CRO, GC, Comp, BU | Board, ML, Prod |
| 2 | AI policy approval | CoE | Board | CAIO, CRO, GC, DPO, Comp | CISO, IA, BU, ML, Prod |
| 3 | AI ethics principles | CoE, Prod | CAIO | GC, DPO, BU, external stakeholders | Board, CISO, CRO, Comp, IA, ML |
| 4 | Risk appetite statement | CRO | Board | CAIO, GC, Comp, BU | CoE, CISO, DPO, IA, ML, Prod |
| 5 | Use-case intake process | CoE | CAIO | CRO, GC, DPO, CISO, BU | Board, IA, Comp, ML, Prod |
| 6 | Board AI reporting cadence | CAIO, CoE | Board | CRO, IA, Comp | CISO, DPO, GC, BU, ML, Prod |
The board is Accountable for the AI policy and the risk appetite — those are governance instruments the board must own. The CAIO is Accountable for the strategy and reporting that operationalizes them.
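Rows like these can be encoded and checked mechanically, which helps keep the one-A-per-row rule intact as the matrix evolves. A minimal sketch in Python; the class shape and validation rules are illustrative, and the role abbreviations follow the column list above.

```python
from dataclasses import dataclass, field

# Role abbreviations from the column list above.
ROLES = {"CAIO", "CISO", "DPO", "GC", "CoE", "BU",
         "ML", "CRO", "IA", "Comp", "Prod", "Board"}

@dataclass
class Activity:
    """One row of the RACI matrix. `accountable` is a single string,
    so the exactly-one-A rule holds by construction."""
    id: int
    name: str
    responsible: set
    accountable: str
    consulted: set = field(default_factory=set)
    informed: set = field(default_factory=set)

    def validate(self) -> list:
        """Return rule violations; an empty list means the row is well-formed."""
        errors = []
        named = self.responsible | {self.accountable} | self.consulted | self.informed
        unknown = named - ROLES
        if unknown:
            errors.append(f"activity {self.id}: unknown roles {sorted(unknown)}")
        if not self.responsible:
            errors.append(f"activity {self.id}: no Responsible role")
        if self.consulted & self.informed:
            errors.append(f"activity {self.id}: roles listed as both C and I")
        return errors

# Activity 1 from the table above.
row = Activity(1, "AI strategy definition",
               responsible={"CAIO", "CoE"}, accountable="CAIO",
               consulted={"CRO", "GC", "Comp", "BU"},
               informed={"Board", "ML", "Prod"})
assert row.validate() == []
```

Running `validate` over all 30 rows on every revision turns the RACI from a slide into a checkable controlled document.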
Use-case gating (4 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 7 | Intake evaluation | CoE | CAIO | BU, DPO, GC, CISO | CRO, Comp, ML, Prod |
| 8 | Risk classification | CoE | CRO | DPO, GC, CISO, BU | CAIO, IA, Comp, ML, Prod |
| 9 | Gate 1 approval (concept) | CoE, BU | CAIO | CRO, DPO, GC, Comp | Board, CISO, IA, ML, Prod |
| 10 | Gate 2 approval (pre-build) | CoE, BU | CAIO | CISO, DPO, GC, ML, Comp | Board, CRO, IA, Prod |
The CRO owns risk classification because the tier determines enterprise risk treatment. The CAIO owns each gate; the governance body the CAIO chairs makes the go/no-go decisions. For the highest risk tiers, accountability escalates beyond the CAIO, up to a Board go/no-go before Gate 2 (see escalation thresholds).
Data and model (5 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 11 | Dataset approval | ML | BU | DPO, CISO, CoE, GC | CAIO, CRO, IA, Comp, Prod |
| 12 | Data residency decisions | DPO | BU | CISO, GC, Comp, CoE | CAIO, CRO, IA, ML, Prod |
| 13 | Model selection | ML, CoE | BU | CISO, CAIO, Comp | CRO, DPO, GC, IA, Prod |
| 14 | Vendor model procurement | ML, CoE | BU | CISO, GC, DPO, Comp, CRO | CAIO, IA, Prod |
| 15 | Model card approval | ML | BU | CoE, DPO, GC | CAIO, CISO, CRO, IA, Comp, Prod |
The Business Unit AI Owner is Accountable for data and model decisions because those decisions determine the risk profile of the system they own. The DPO and CISO appear as Consulted throughout this phase wherever personal data or attack surface is in play; treat those consultations as mandatory, not optional.
Deployment (4 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 16 | Pre-deployment review | CoE, ML | CAIO | CISO, DPO, GC, BU, CRO, Comp | Board, IA, Prod |
| 17 | Production release | ML, BU | BU | CoE, CISO, Prod | CAIO, CRO, DPO, GC, IA, Comp |
| 18 | Rollback decision | ML, BU | BU | CAIO, CISO, CoE, Prod | CRO, DPO, GC, IA, Comp, Board |
| 19 | HITL threshold setting | BU, ML | BU | CoE, GC, Comp, Prod | CAIO, CISO, DPO, CRO, IA |
Rollback is intentionally assigned to the Business Unit AI Owner. When a production AI system is misbehaving, the business owner has to balance continuity of service against risk exposure. Central governance can require rollback via the escalation path, but the day-to-day trigger sits with the owner who carries the business consequence.
Monitoring (4 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 20 | Performance threshold setting | ML, CoE | BU | CAIO, CRO, Prod | CISO, DPO, GC, IA, Comp |
| 21 | Anomaly investigation | ML | BU | CoE, CISO, DPO, Prod | CAIO, CRO, GC, IA, Comp |
| 22 | Drift decision (retrain/retire) | ML, BU | BU | CoE, CAIO, Comp | CRO, CISO, DPO, GC, IA, Prod |
| 23 | Monitoring dashboard ownership | CoE, ML | CAIO | BU, CRO, IA | CISO, DPO, GC, Comp, Prod, Board |
The dashboard is a central governance asset — the CAIO owns it. Individual thresholds and investigations are per-system — the Business Unit AI Owner owns those.
Incident (3 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 24 | Incident triage | ML, CISO | CAIO | BU, DPO, GC, CoE, Prod | CRO, IA, Comp, Board |
| 25 | Customer communication | Prod, GC | BU | CAIO, DPO, Comp | CISO, CRO, IA, ML, Board |
| 26 | Regulatory notification | Comp, DPO | GC | CAIO, CRO, CISO, BU | IA, ML, Prod, Board |
Regulatory notification is assigned to the General Counsel as Accountable because these filings carry legal and privilege implications that belong with the legal function. Privacy breaches specifically may shift the A to the DPO depending on jurisdiction; if so, document the split explicitly.
Audit and review (4 activities)
| # | Activity | R | A | C | I |
|---|---|---|---|---|---|
| 27 | Internal audit | IA | Audit Committee | CAIO, CRO, Comp, CoE | CISO, DPO, GC, BU, ML, Prod |
| 28 | Certification audit (ISO 42001) | CoE, Comp | CAIO | IA, CRO, CISO, DPO, GC, BU, ML | Board, Prod |
| 29 | Annual policy review | CoE | CAIO | All roles | Board |
| 30 | Training record review | CoE | CAIO | HR, BU, IA, Comp | CISO, DPO, GC, CRO, ML, Prod |
Internal Audit reports functionally to the Audit Committee, not to the CAIO, which is why the A lies with the Audit Committee for activity 27. This independence is required by most corporate-governance standards and by ISO 42001 clause 9.2.
Decision rights per lifecycle stage {#decision-rights}
The matrix above maps to the six COMPEL stages. Use this as a quick mental model of who decides what, at which stage of an AI system’s life.
Calibrate. The CAIO and the Board set strategy, policy, and risk appetite. The CRO classifies risk. Intake decisions start here. Primary decision authority is central governance.
Organize. The CoE Lead operationalizes the policy into standards, RACIs, training, and the model registry. The CAIO approves these as they go live. The job at this stage is to build the operating system of AI governance.
Model. The Business Unit AI Owner, backed by the ML Engineering Lead, makes the decisions that define the system’s risk profile: dataset, model, vendor, architecture. The CoE, DPO, and CISO are Consulted on every material call. This is where Accountability shifts from central to business-unit.
Produce. The Business Unit AI Owner releases into production. The CAIO approves pre-deployment review. The ML Engineering Lead executes the release. Product owns user-facing disclosures. This stage has the most concurrent decision-makers — use escalation thresholds aggressively.
Evaluate. The ML Engineering Lead owns monitoring. The Business Unit AI Owner owns thresholds and responses. The CAIO owns the dashboard. Anomalies and drift are Business Unit decisions unless they breach escalation thresholds.
Learn. Incidents, audits, and policy revision. The CAIO is Accountable for incidents at the program level, while the Business Unit AI Owner is Accountable for system-level customer communication. The Audit Committee is Accountable for independent audit. Findings feed the annual policy review.
Escalation thresholds {#escalation}
Decisions do not stay at the default RACI level when the stakes change. The matrix below defines when decision rights escalate.
| Trigger | Moves A from | Moves A to | Timing |
|---|---|---|---|
| Risk tier classified as High or Prohibited (EU AI Act Article 6 or internal tiering) | Business Unit AI Owner | CAIO (+ Board notification) | At Gate 1 |
| Risk tier classified as Unacceptable | CAIO | Board (go/no-go vote) | Before Gate 2 |
| Budget request exceeds $X (org-specific, often $1M per system or $5M per program) | Business Unit AI Owner | CAIO + CFO | At funding decision |
| AI system processes special-category personal data (Article 9 GDPR) | Business Unit AI Owner | DPO + Business Unit AI Owner (joint A) | At intake |
| AI system in regulated industry use (medical device, credit decision, employment) | Business Unit AI Owner | GC + Business Unit AI Owner (joint A) + sector regulator notification | At intake |
| Incident with customer harm, regulator notification, or press exposure | Business Unit AI Owner | CAIO + GC (+ Board within 24h) | On triage |
| Material model change (new base model, new training data class, new deployment region) | ML Engineering Lead | Re-trigger Gate 2 with CAIO as A | Before change |
| Risk appetite breach (aggregated across AI portfolio) | CRO | Board (risk appetite revision or portfolio rebalance) | At quarterly review |
| Certification nonconformity (ISO 42001 major nonconformity) | CoE | CAIO + Audit Committee | Within audit cycle |
| AI vendor incident affecting your deployed system | Business Unit AI Owner | CAIO + CISO + GC (joint triage) | On notification |
Escalation thresholds are not optional overlays on top of the RACI — they are part of the RACI. Document the triggers, the new Accountable role, and the required timing. Train every named role on them. Rehearse via tabletop exercises at least annually.
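A governance workflow can evaluate these triggers mechanically rather than by email debate. A sketch under stated assumptions: the event fields, tier names, and dollar threshold below are illustrative stand-ins for your own trigger table, not a standard schema.

```python
from dataclasses import dataclass

@dataclass
class SystemEvent:
    """Attributes a workflow would check against the trigger table.
    Field names are illustrative, not a standard schema."""
    risk_tier: str = "limited"        # e.g. "minimal", "limited", "high", "prohibited"
    budget_usd: float = 0.0
    special_category_data: bool = False   # GDPR Article 9 data in scope
    regulated_use: bool = False           # medical device, credit, employment, etc.
    customer_harm_incident: bool = False

def fired_escalations(e: SystemEvent, budget_threshold: float = 1_000_000):
    """Return (trigger, new Accountable) pairs mirroring the table above."""
    escalations = []
    if e.risk_tier.lower() in {"high", "prohibited"}:
        escalations.append(("high/prohibited risk tier", "CAIO (+ Board notification)"))
    if e.budget_usd > budget_threshold:
        escalations.append(("budget threshold breach", "CAIO + CFO"))
    if e.special_category_data:
        escalations.append(("Article 9 special-category data", "DPO + BU (joint A)"))
    if e.regulated_use:
        escalations.append(("regulated-industry use", "GC + BU (joint A)"))
    if e.customer_harm_incident:
        escalations.append(("incident with customer harm", "CAIO + GC (+ Board within 24h)"))
    return escalations

event = SystemEvent(risk_tier="high", special_category_data=True)
for trigger, new_a in fired_escalations(event):
    print(f"{trigger} -> A moves to {new_a}")
```

Note that several triggers can fire at once; the workflow should record every handover, not just the first.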
Template usage guidance {#template}
Do not copy this matrix verbatim. Customize for your organization using these steps.
Step 1 — Rename roles to match your org chart. If you do not have a CAIO, decide whether the function lives with the CIO, CDO, CTO, or a committee. Name the actual role. Similarly for DPO (some organizations have a Privacy Office instead), Product (some have separate Product and CX), and Board committee (Audit, Risk, Technology, or dedicated AI).
Step 2 — Add or merge activities. If your organization has distinct gates (for example, a separate Gate 0 for feasibility or a Gate 3 for post-deployment re-authorization), split the activity accordingly. If you operate in a single regulated industry, add activities for sector-specific filings (medical device submissions, credit-model validation memos). If you are small, merge strategy and policy into a single activity with the Board as A.
Step 3 — Walk each row with the named role. Do not publish a RACI by fiat. Sit with each named role and confirm: (a) they understand the activity, (b) they accept the assignment, (c) they have the resources and authority to execute it, (d) they know the escalation triggers. Any disagreement is a signal the matrix needs revision — or the role needs restructuring.
Step 4 — Version and sign. Store the RACI as a controlled document. Require sign-off by each named role. Version with each revision. Link the RACI to the AI policy (which references it) and the ISO 42001 management-system documentation (clause 5.3 requires role assignment evidence).
Step 5 — Test via tabletop. Run at least two tabletop exercises per year. Typical scenarios: a hallucination incident with customer harm, a drift event crossing a performance threshold, a regulator request for EU AI Act documentation, a vendor model deprecation. At each scenario, ask: who decides? Who is informed? What escalates? Update the matrix based on gaps found.
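For the “who decides?” question in each tabletop scenario, even a thin lookup keeps the drill honest. A sketch; the mapping is a small excerpt of the matrix above and in practice would be generated from the full controlled document, not hand-maintained.

```python
# Accountable role per activity, excerpted from the matrix above.
ACCOUNTABLE = {
    "rollback decision": "BU",
    "incident triage": "CAIO",
    "regulatory notification": "GC",
    "internal audit": "Audit Committee",
}

def who_decides(activity: str) -> str:
    """Answer the tabletop question: who holds the A for this activity?"""
    key = activity.strip().lower()
    if key not in ACCOUNTABLE:
        raise KeyError(f"activity not on the RACI: {activity!r}")
    return ACCOUNTABLE[key]

assert who_decides("Rollback decision") == "BU"
assert who_decides("Regulatory notification") == "GC"
```

If the facilitator's answer and the lookup disagree, either the matrix or the participants' understanding needs fixing; both outcomes are useful findings.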
Step 6 — Align to measurement. Every activity should have a metric — completion rate, timeliness, review quality. If an activity cannot be measured, reconsider whether it belongs on the RACI or whether it is aspirational rather than operational.
COMPEL stage mapping {#compel-mapping}
| COMPEL stage | RACI activities (IDs) | Primary accountable roles |
|---|---|---|
| Calibrate | 1, 4, 5, 6, 7, 8 | CAIO, Board, CRO |
| Organize | 2, 3, 9, 10 | Board, CAIO |
| Model | 11, 12, 13, 14, 15 | Business Unit AI Owner |
| Produce | 16, 17, 18, 19 | CAIO, Business Unit AI Owner |
| Evaluate | 20, 21, 22, 23 | Business Unit AI Owner, CAIO |
| Learn | 24, 25, 26, 27, 28, 29, 30 | CAIO, BU, GC, Audit Committee |
The pattern is clear: central governance dominates the early stages (Calibrate, Organize). Business units dominate the middle (Model, Produce, Evaluate). Central governance, the GC, and the Audit Committee dominate the closing stage (Learn). The RACI formalizes this arc.
Evidence artifacts {#evidence}
A functioning RACI produces these artifacts. Together they evidence conformance with ISO/IEC 42001 clause 5.3 and NIST AI RMF GOVERN 2:
- The signed RACI document itself, with version history.
- Role descriptions for each of the 12 roles, including authority, resources, and success criteria.
- Training records showing each named role has completed AI governance training appropriate to their activities.
- Meeting minutes from the governance forums where the RACI is referenced (AI council, gate reviews, board AI committee).
- Escalation log — every time a decision was escalated, which trigger fired, who became accountable, and the outcome.
- Annual review minutes showing the RACI was reviewed and updated.
- Tabletop exercise reports showing the RACI was tested.
- Exception log — any deviation from the RACI, with reason and approval.
Retain these for the life of the AI program plus regulatory retention periods — typically ten years for EU AI Act high-risk systems and seven years for most financial-services regulators.
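The escalation log in particular benefits from a fixed record shape, so every entry captures the trigger, the handover, and the outcome. A sketch; the field names and sample values (system ID, ticket reference) are hypothetical.

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class EscalationLogEntry:
    """One escalation event: which trigger fired, who took over, what happened."""
    when: datetime.datetime
    system_id: str
    trigger: str
    previous_accountable: str
    new_accountable: str
    outcome: str
    evidence_ref: str  # pointer to minutes or a ticket evidencing the handover

entry = EscalationLogEntry(
    when=datetime.datetime(2026, 3, 2, 9, 15),
    system_id="claims-triage-v3",           # hypothetical system
    trigger="incident with customer harm",
    previous_accountable="BU",
    new_accountable="CAIO + GC",
    outcome="rollback ordered; Board notified within 24h",
    evidence_ref="GRC-4471",                # hypothetical ticket ID
)
assert entry.new_accountable == "CAIO + GC"
```

A frozen record plus an append-only store gives auditors exactly the traceability clause 5.3 asks for.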
Metrics {#metrics}
Track the RACI’s effectiveness with these measures:
- RACI coverage — percentage of in-scope AI activities that have a named R and A. Target 100%.
- Role coverage — percentage of named roles that have a current, signed acceptance. Target 100%.
- Escalation latency — median time from trigger to new Accountable role being engaged. Target under 24 hours for incidents; under 5 business days for risk-appetite breaches.
- Gate throughput — number of AI systems passing each gate per quarter, split by time-in-gate. Rising time-in-gate signals bottleneck at a named role.
- Decision traceability — percentage of gate decisions with documented Consulted-role input and Accountable-role sign-off. Target 100%.
- Tabletop findings closure rate — percentage of tabletop-identified gaps closed within agreed timeframe.
- Training compliance — percentage of named roles current on AI governance training. Target 100%.
- Audit findings on role clarity — number of internal or certification audit findings citing unclear roles. Target zero.
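Most of these measures reduce to simple computations over the RACI document and the escalation log. A sketch of two of them; the input shapes are illustrative, not a standard schema.

```python
from statistics import median

def raci_coverage(activities):
    """Share of in-scope activities with at least one R and exactly one A.

    `activities` is a list of dicts like {"R": [...], "A": [...]}.
    """
    ok = sum(1 for a in activities if a["R"] and len(a["A"]) == 1)
    return ok / len(activities)

def escalation_latency_hours(log):
    """Median hours from trigger firing to the new Accountable being engaged.

    `log` is a list of (fired_at, engaged_at) datetime pairs.
    """
    return median((engaged - fired).total_seconds() / 3600 for fired, engaged in log)

activities = [
    {"R": ["CoE"], "A": ["CAIO"]},
    {"R": ["ML"], "A": ["BU"]},
    {"R": [], "A": ["BU"]},  # gap: no Responsible named
]
assert round(raci_coverage(activities), 2) == 0.67
```

Wiring these into the governance dashboard makes the “Target 100%” lines above verifiable rather than aspirational.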
Risks if skipped {#risks}
Organizations that skip the RACI step routinely encounter:
- Accountability gaps. An incident occurs, and no one is clearly accountable. The investigation stalls while roles are debated. Regulators and customers interpret the delay as obfuscation.
- Duplicated decision rights. Two roles believe they have the final say (for example, the CIO and the CAIO, or the DPO and the GC). Decisions loop or are made twice with different outcomes.
- Hidden single points of failure. The ML Engineering Lead is the de facto R on everything, including decisions they should not make. When that person leaves, the program stalls.
- Audit findings. ISO 42001 clause 5.3 and NIST AI RMF GOVERN 2.1 both require documented, communicated, enforced role assignments. A missing or stale RACI is a standard audit finding.
- Regulatory exposure. The EU AI Act Article 17 requires named accountable persons in the quality management system. A vague RACI creates personal legal exposure for senior executives and enterprise exposure under the Act.
- Slow escalation. Without thresholds, an incident that should escalate to the board in 24 hours instead loops through email threads for a week. The organization loses both the window to act and the trust of customers.
- Board dissatisfaction. The board cannot discharge its oversight duty if it does not know which decisions it owns. An AI RACI that names Board-accountable activities clarifies the board’s agenda.
References {#references}
- ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system — iso.org/standard/81230.html. Clause 5.3 (Roles, responsibilities, and authorities) and Annex A.3.2.
- NIST AI Risk Management Framework 1.0 — nist.gov/itl/ai-risk-management-framework. GOVERN 2 (Roles, responsibilities, and communications).
- EU AI Act (Regulation 2024/1689) — eur-lex.europa.eu. Article 17 (Quality management system) and Article 26 (obligations of deployers).
- OECD AI Principles — oecd.org/going-digital/ai/principles. Accountability principle as the basis for named-role governance.
- COBIT 2019 — Responsibility Assignment Matrix guidance — ISACA. General RACI construction methodology, adapted here for AI specifics.
- Singapore Model AI Governance Framework (PDPC) — pdpc.gov.sg. Section on internal governance structures.
Related COMPEL articles
- The COMPEL Operating Model: Roles and Decision Rights
- The AI Center of Excellence: Structure, Charter, and Operating Model
- AI Operating Model Blueprint: From Strategy to Executable Structure
How to cite
COMPEL FlowRidge Team. (2026). “AI Governance RACI Matrix for Enterprises: Decision Rights Across 30 Activities and 12 Roles.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-b1-ai-governance-raci-matrix-for-enterprises/