AITM M1.4-Art05 · v1.0 · Reviewed 2026-04-06 · Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Decision Rights, Accountability, and Separation of Duties



COMPEL Specialization — AITM-OMR: AI Operating Model Associate · Article 5 of 10


Operating-model design is decision-rights design. The archetype, the capability map, and the CoE produce structure; the decision-rights architecture determines whether the structure will actually function. The regulator who later audits the organization’s AI systems will ask, for a specific decision, who made it, under what authority, with what evidence, and who could have blocked it. A specialist who cannot answer has shipped an operating model with a compliance fracture. This article walks the learner through three named decision-rights frameworks, the separation-of-duties requirement that anchors to the NIST AI Risk Management Framework’s GOVERN function, and the documentation discipline that lets the design survive the inevitable staff turnover.

Three frameworks, three emphases

Decision-rights design is a well-established discipline inherited from broader organizational theory. Three named frameworks cover almost every practical design. They are not competitors. They are three instruments with different emphases, and a mature operating-model designer uses the one that fits the decision being designed.

The three are RACI, RAPID, and DACI. RACI — responsible, accountable, consulted, informed — emerged from project-management practice and is by far the most widely used in enterprise contexts. Its strength is clarity about the four named roles; its weakness is that “accountable” and “responsible” are routinely confused and often merged in practice, and the framework offers no explicit mechanism for resolving disputes between the two.

RAPID — recommend, agree, perform, input, decide — was developed by Bain & Company and published in Paul Rogers and Marcia Blenko’s 2006 Harvard Business Review article on decision-making.1 Its strength is that it distinguishes the recommender (who formulates the proposal) from the agreers (whose sign-off is required) from the decider (who makes the call) — three roles that RACI merges into “accountable”. RAPID is the instrument of choice for complex decisions with multiple stakeholders and regulatory implications, where the recommender-agreer-decider separation matters.

DACI — driver, approver, contributor, informed — was popularized by Atlassian in product-engineering contexts. Its strength is the clarity of the driver role, which names who will actually move the decision to completion. DACI fits product and engineering decisions where progress requires a single accountable driver.

The specialist does not need to pick one of the three as the organization’s standard. A mature operating model uses RAPID for high-risk, multi-stakeholder AI decisions (model risk acceptance, regulatory notification, agentic-system kill-switch triggers), DACI for product and engineering decisions (platform roadmap, feature prioritization), and RACI for operational and routine decisions where clarity about consulted and informed parties matters more than separating recommender from decider.
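
A routing table of this kind can be made explicit rather than tribal. The sketch below encodes the pairing as data; the decision categories and the mapping itself are illustrative assumptions, not a COMPEL-mandated taxonomy.

```python
# Illustrative routing table: decision category -> decision-rights framework.
# Categories and assignments are examples, not a mandated taxonomy.
FRAMEWORK_BY_DECISION = {
    "model_risk_acceptance":   "RAPID",  # high-risk, multi-stakeholder
    "regulatory_notification": "RAPID",
    "agentic_kill_switch":     "RAPID",
    "platform_roadmap":        "DACI",   # product/engineering, needs a driver
    "feature_prioritization":  "DACI",
    "routine_model_refresh":   "RACI",   # operational; consulted/informed clarity
}

def framework_for(decision_category: str) -> str:
    """Return the decision-rights framework for a decision category."""
    return FRAMEWORK_BY_DECISION.get(decision_category, "RAPID")
```

Defaulting an unknown category to RAPID is deliberate: an unclassified decision should fail toward the heaviest treatment, not the lightest.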

The NIST AI RMF GOVERN anchor

The decision-rights design for AI work is not free-form. The NIST AI Risk Management Framework 1.0 GOVERN function, published in January 2023, specifies the organizational accountability structure expected of any mature AI management practice.2 GOVERN Subcategory 2.1 requires that “roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization”. GOVERN Subcategory 2.2 requires that “the organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements”. The subcategories continue through the assignment of accountable owners for AI risks, controls, and outcomes, and GOVERN 6 extends the discipline to risks arising from third-party software, data, and other supply-chain relationships.

A specialist designing decision rights for an AI operating model anchors the work to GOVERN. The accountability matrix — the artifact that maps named accountable owners to the AI systems, risks, controls, and outcomes in the organization — is the concrete product of the design. Without a published accountability matrix, the organization cannot demonstrate to a regulator that GOVERN has been implemented. The EU AI Act’s Article 26 deployer obligations layer on additional internal-governance duties for deployers of high-risk AI systems, making the documented accountability matrix a regulatory artifact rather than an optional best practice.
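
To make the artifact concrete: one row of an accountability matrix can be represented as a small record naming the subject, the accountable role, and the person currently holding it. A minimal sketch; the field names and example values are illustrative, not a prescribed COMPEL schema.

```python
from dataclasses import dataclass

@dataclass
class AccountabilityEntry:
    """One row of an accountability matrix (illustrative fields only)."""
    subject_type: str        # "system" | "risk" | "control" | "outcome"
    subject_id: str          # e.g. the AI system's inventory identifier
    accountable_role: str    # the role, which survives staff turnover
    accountable_person: str  # the named individual currently in the role
    next_review: str         # next scheduled review date, ISO 8601

# Hypothetical entries: one system and one of its controls.
matrix = [
    AccountabilityEntry("system", "AIS-0042", "Deployment Sign-off Authority",
                        "J. Rivera", "2026-07-01"),
    AccountabilityEntry("control", "CTL-0042-BIAS", "Model Risk Owner",
                        "K. Osei", "2026-07-01"),
]
```

Keeping both the role and the named person in every row is what lets the matrix answer a regulator’s question by name while still surviving staff turnover by role.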

Separation of duties

The hardest decision-rights discipline to design for, and the one most often absent from first-pass designs, is separation of duties. In regulated contexts, the principle requires that no single individual (or single role) simultaneously build, operate, govern, and sign off on an AI system. The separation is not punitive; it exists because consolidated authority in any one role reliably produces the conflicts of interest that cause downstream failures.

A workable separation typically defines four domains. The builder domain covers the team that designs, trains, and validates the model — data scientists, ML engineers, and their leadership. The operator domain covers the team that runs the model in production — MLOps engineers, platform operators, site reliability staff. The risk owner domain covers the independent risk function that classifies the system, sets control requirements, and monitors compliance with them. The sign-off authority covers the named business or governance executive who authorizes production deployment and who holds residual accountability for outcomes.

The four domains cannot all sit in the same reporting line or be populated by the same set of people wearing different hats on different Wednesdays. A specialist who designs a separation that collapses under the organization’s real staffing has designed on paper. The corrective is to walk the proposed separation through three or four realistic scenarios — a model that misbehaves in production, a regulator inquiry about a specific decision, a staff departure that leaves a key role vacant — and check whether the separation holds under each stress. Separation that holds under stress is the artifact the organization actually has; separation that needs the stars to align is aspirational.
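
Part of the stress test can be mechanized. A minimal sketch, assuming staffing for one system is expressed as a mapping from domain to named people: flag anyone who appears in more than one domain. The staffing data is hypothetical.

```python
# Hypothetical staffing for one AI system, keyed by separation domain.
proposed = {
    "builder":    {"data_scientist_a", "ml_engineer_b"},
    "operator":   {"mlops_engineer_c"},
    "risk_owner": {"risk_analyst_d"},
    "sign_off":   {"risk_analyst_d"},  # collapse: one person in two domains
}

def separation_violations(staffing: dict[str, set[str]]) -> set[str]:
    """Return every person who appears in more than one domain."""
    domain_of: dict[str, str] = {}
    violations: set[str] = set()
    for domain, people in staffing.items():
        for person in people:
            if person in domain_of and domain_of[person] != domain:
                violations.add(person)
            domain_of[person] = domain
    return violations

assert separation_violations(proposed) == {"risk_analyst_d"}
```

The staff-departure scenario is the same check run on the post-departure staffing map, plus a test that no domain is left empty.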

The Boeing 737 MAX MCAS case from 2018-2020, documented in the US House Committee on Transportation and Infrastructure’s September 2020 report, is the canonical teaching anchor for separation-of-duties failure.3 The report’s core finding, repeated across its 245 pages, was that Boeing had collapsed the builder and sign-off domains (Boeing employees exercised certification authority the FAA had delegated to the company) in ways that eliminated the checks the system was supposed to carry. The result was two fatal crashes, the 737 MAX grounding, and a regulatory-approval overhaul. The case is not an AI case, but the structural failure it documents — compressed decision authority in the same organizational stack — is the separation-of-duties failure AI operating-model designers must prevent. The lesson travels.

[DIAGRAM: OrganizationalMappingBridge — decision-rights-four-domains — four horizontal lanes labelled “Build”, “Operate”, “Govern/Risk”, “Sign-off”; within each lane boxes show the named roles with reporting lines; lines running vertically across the lanes show a specific decision (e.g., model promotion to production) flowing through all four domains with explicit role names at each hand-off; primitive makes the four-domain separation and its hand-off points visible in a single view]

Decision-rights design by tier

Not every AI decision needs the full four-domain separation. An operating model that treats low-risk decisions with the same decision-rights overhead as high-risk ones becomes unworkable and creates incentives to route around its own governance. The specialist designs decision-rights by risk tier.

The standard approach pairs the EU AI Act’s risk tiers (unacceptable, high, limited, minimal; Article 6 sets the classification rules for the high-risk tier) with internal risk-tier classification from the organization’s risk-management function. For each tier the operating model specifies the required decision-rights framework. High-risk decisions — systems in the EU AI Act Annex III scope, systems with material business impact, systems involving personal data with consequential decisions — require RAPID-style decision rights with full separation of duties and a named independent risk-owner role. Medium-risk decisions — internal productivity systems, systems affecting employees rather than customers or regulated subjects — may use lighter RACI with documented sign-off. Low-risk decisions — experimentation environments, internal research projects, shadow deployments with no production reach — may use the lightest documented authority, with risk-escalation triggers if the experiment shows potential production relevance.

The tiering is not a loophole. Systems cannot be classified low-risk to avoid the overhead of high-risk decision-rights. The classification is audited and the decision to classify is itself a decision-rights moment. A specialist who designs the decision-rights framework without designing the classification decision has left the most important door unlocked.
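
The two-axis selection rule the diagram below describes can be written down directly. A minimal sketch, under the simplifying assumption that risk and reversibility each reduce to a binary:

```python
def select_framework(high_risk: bool, easily_reversible: bool) -> str:
    """Pick a decision-rights framework from risk and reversibility.

    Framework weight rises as risk rises and reversibility falls,
    mirroring the 2x2 in the diagram below.
    """
    if high_risk and not easily_reversible:
        return "full four-domain separation"
    if high_risk:
        return "RAPID with advisory"
    if easily_reversible:
        return "RACI"
    return "DACI"

assert select_framework(high_risk=True, easily_reversible=False) \
    == "full four-domain separation"
assert select_framework(high_risk=False, easily_reversible=True) == "RACI"
```

A real scheme would classify both axes on graded scales rather than booleans, but the monotonic rule — heavier framework as risk rises and reversibility falls — is the design invariant.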

[DIAGRAM: Matrix — decision-rights-by-tier — 2x2 with vertical axis “Decision reversibility (low to high)” and horizontal axis “Risk tier (low to high)”; quadrants suggest RACI (low risk, high reversibility), DACI (low risk, low reversibility), RAPID with advisory (high risk, high reversibility), full four-domain separation (high risk, low reversibility); primitive shows that decision-rights framework choice depends on both risk and reversibility, not risk alone]

Documenting for turnover

Decision-rights designs that live in one person’s head do not survive. A design that the specialist carries as expert knowledge will not function the first time the specialist leaves. The documentation discipline is what makes the design durable.

Three artifacts compose the documentation package. The accountability matrix maps every AI system in the inventory, every named risk, every named control, and every named outcome to a single accountable owner by name and by role. The matrix is updated when systems change, when roles change, and on a defined review cadence (typically quarterly). The decision register captures high-risk decisions as they are made, with the recommender, agreers, decider, evidence, and rationale recorded. The register supports the regulator-inquiry scenario by producing retrospective auditability. The role descriptions for each decision-rights role (model-risk owner, architecture review chair, deployment sign-off authority) are maintained as part of the organization’s HR record so that the role survives the person.
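
A decision-register entry, in machine-readable form, might look like the sketch below. The fields mirror the RAPID roles the register is meant to capture; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One entry in a decision register (illustrative fields only)."""
    decision: str        # what was decided
    recommender: str     # who formulated the proposal (RAPID "Recommend")
    agreers: list[str]   # whose sign-off was required (RAPID "Agree")
    decider: str         # who made the call (RAPID "Decide")
    evidence: list[str]  # the material the decision rested on
    rationale: str       # why, in one or two sentences
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    decision="Promote model AIS-0042 v3 to production",
    recommender="lead_ml_engineer",
    agreers=["model_risk_owner", "data_protection_officer"],
    decider="deployment_signoff_authority",
    evidence=["validation-report-v3.pdf", "bias-audit-2026-03.pdf"],
    rationale="Validation metrics within tolerance; bias audit clean.",
)
```

An append-only store of such records is what produces the retrospective auditability the regulator-inquiry scenario demands.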

The documentation is not busywork. In the prominent regulatory-enforcement cases involving AI systems that reached public visibility through 2024 — including the Italian Garante’s ChatGPT investigation and multiple FTC settlements — the ability to produce retrospective documentation of who decided what, when, and on what basis has been central to the outcome. Organizations with the documentation in place have been able to defend their decisions on the merits; organizations without it have negotiated from a position of weakness.

The shadow decision-rights problem

A subtle failure mode deserves specific attention. Formal decision-rights designs rarely fail catastrophically; they are usually well-documented, compliant, and thorough on paper. They fail because shadow decision-rights emerge alongside them — informal structures where real decisions are made outside the documented framework. A regulator who later audits the organization’s AI work finds a gap between the formal decision-rights documentation (which describes the intended process) and the actual decision trail (which shows decisions being made informally by different people in different ways).

Shadow decision-rights appear for predictable reasons. Formal frameworks are slower than informal ones; a team facing a delivery deadline routes around the formal process. The formal process requires roles that do not exist in practice (the risk owner was named but the role was never staffed). The formal process uses vocabulary the organization does not actually use, and people default to the vocabulary they already have. In each case the shadow is not malicious; it is the practical response to a formal process that does not fit the organization’s real operating rhythm.

The specialist’s corrective is to design the formal framework to fit the organization’s real rhythm, not to fight it. A formal process that takes two weeks to approve a routine model change will produce shadow decisions; a formal process that takes two days will not. A framework that names roles the organization cannot actually staff produces shadows; a framework that names staffable roles does not. A framework that uses the organization’s existing governance vocabulary is adopted; a framework that introduces new vocabulary is routed around. The specialist who designs for real adoption rather than for documentation elegance produces frameworks that are followed; the specialist who designs for elegance produces shadows.

Decision-rights for agentic systems

Agentic AI systems present a specific decision-rights challenge that warrants explicit discussion. An agentic system is one that takes actions in the world on behalf of users — executing transactions, calling APIs, generating artifacts, interacting with external services — with varying degrees of autonomy. The decision-rights framework that fits a traditional predictive model (scoring, classification, recommendation) does not fit an agentic system without adjustment.

Three adjustments are common. First, the decision-rights framework must name the level of autonomy at which the agent operates, and must be tier-gated to autonomy level. An agent that proposes actions for human approval operates under a different decision-rights framework than one that takes actions with post-hoc review, which operates under a different framework than one that takes actions with no human review at all. The COMPEL autonomy classification from the AITF curriculum provides a working tiering. Second, the framework must include named kill-switch authority — specific roles empowered to stop the agent’s operation immediately, with defined triggers (incident thresholds, regulatory events, unexpected behaviour patterns). The kill-switch is not a failsafe; it is a decision-rights structure that makes the failsafe usable. Third, the framework must include audit-trail requirements that survive the agent’s operation, because agentic-system decisions unfold over time and produce lengthy traces that retrospective review requires.
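
The first two adjustments can be expressed together: kill-switch authority widens as autonomy rises. A minimal sketch; the autonomy tiers, role names, and authority mapping below are illustrative assumptions, not the COMPEL classification itself.

```python
from enum import Enum

class Autonomy(Enum):
    """Illustrative autonomy tiers; the COMPEL classification may differ."""
    PROPOSE_ONLY = 1     # agent proposes, human approves before action
    ACT_WITH_REVIEW = 2  # agent acts, humans review post hoc
    ACT_UNATTENDED = 3   # agent acts with no routine human review

# More autonomy means more roles empowered to stop the agent immediately.
KILL_SWITCH_AUTHORITY = {
    Autonomy.PROPOSE_ONLY:    {"product_owner"},
    Autonomy.ACT_WITH_REVIEW: {"product_owner", "model_risk_owner"},
    Autonomy.ACT_UNATTENDED:  {"product_owner", "model_risk_owner",
                               "on_call_platform_operator"},
}

def may_kill(role: str, autonomy: Autonomy) -> bool:
    """Check whether a role may trigger the agent's kill-switch."""
    return role in KILL_SWITCH_AUTHORITY[autonomy]

assert may_kill("on_call_platform_operator", Autonomy.ACT_UNATTENDED)
assert not may_kill("on_call_platform_operator", Autonomy.PROPOSE_ONLY)
```

Widening the authority set at higher autonomy tiers reflects the design point in the paragraph above: the kill-switch is only usable if the people closest to the incident are already empowered to pull it.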

The agentic-system decision-rights discipline will deepen as agentic deployments mature. The specialist working on an operating model that includes significant agentic capability should explicitly name the agentic decision-rights in the framework rather than absorbing them into the general decision-rights design.

Summary

Decision-rights design is the connective tissue of the operating model. RACI, RAPID, and DACI are three complementary instruments with different emphases; mature designs use each where it fits. The separation of duties between builder, operator, risk owner, and sign-off authority anchors to the NIST AI RMF GOVERN function and to the EU AI Act’s deployer obligations. Decision-rights frameworks are tiered by risk and by reversibility. Documentation — accountability matrix, decision register, role descriptions — is what makes the design survive turnover and scrutiny. Article 6 moves from decision rights to the dimension that most often distorts them: the funding model that shapes incentives across the whole operating model.


Cross-references to the COMPEL Core Stream:

  • EATF-Level-1/M1.2-Art15-The-COMPEL-Operating-Model-Roles-and-Decision-Rights.md — Core Stream primary article on COMPEL operating-model roles and decision rights


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. Rogers, P. and Blenko, M., “Who Has the D?: How Clear Decision Roles Enhance Organizational Performance”, Harvard Business Review (January 2006), https://hbr.org/2006/01/who-has-the-d-how-clear-decision-roles-enhance-organizational-performance (accessed 2026-04-19).

  2. National Institute of Standards and Technology, “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, NIST AI 100-1 (January 2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (accessed 2026-04-19).

  3. US House Committee on Transportation and Infrastructure, “The Design, Development and Certification of the Boeing 737 MAX”, Final Committee Report (September 2020), https://transportation.house.gov/imo/media/doc/2020.09.15%20FINAL%20737%20MAX%20Report%20for%20Public%20Release.pdf (accessed 2026-04-19).