COMPEL Specialization — AITM-AAG: Agentic AI Governance Associate. Article 4 of 14
Definition. Delegation in agentic governance is the assignment of authority from a human principal to an agent, directly or via intermediary humans, so that the agent may act on behalf of the principal within a specified scope. An authority chain is the traceable lineage from the organisational body that holds a decision right (a board, executive, or role) through any delegating humans to the agent that executes the action. The chain is what allows an organisation, after an agent acts, to answer three questions: who authorised this action? was the action within the delegated scope? and who is accountable if the action caused harm?
Delegation is not a metaphor. Principal-agent analysis has a century of legal and organisational literature behind it, and the transfer of those concepts to AI agents is not free of friction. Courts and tribunals have already ruled on who is responsible when a chatbot commits a company to a position. The specialist's job is to construct authority chains that survive that scrutiny in advance rather than to improvise them during litigation.
Why authority chains matter
An enterprise agent that sends an email, places an order, books a refund, or commits the company to a position is not acting on behalf of itself. It is acting on behalf of whichever human or organisational authority put it there. When the action is legitimate, no one asks. When the action is wrong, a chain of questions follows: who deployed this agent, under what authority, with what scope, with what review, and what did they know about its limits? Governance prepares the answers to those questions at design time, not during discovery.
Authority chains also discipline deployment decisions. An engineering team that cannot name the authority under which a new agent is going into production is being asked, in effect, to invent the authority. That invitation is the root of most governance failure patterns in AI.
The chain’s components
A complete authority chain has five components. Each appears in the governance pack.
- Organisational decision right. The body that holds the original right to act in the relevant domain. For a refund, that is typically a finance function; for a hire, a talent function; for a production write, a data-custodian role. The right usually predates AI and is documented in the organisation’s policy register.
- Delegating principal. The human role to which the decision right has been delegated in the normal organisational hierarchy. The delegation may be implicit (a customer-service team lead can approve refunds up to X) or explicit (a policy names the role).
- Deployment authority. The decision to place an agent into the delegation chain is itself an exercise of authority and must be named. It is frequently the same role as the delegating principal but need not be.
- Agent identity. The agent has a stable identity — a name, a version, a deployment environment — so that the action can be traced to a specific configuration. A single “agent” at the level of organisational vocabulary is often three or four distinct agent identities at the level of code.
- Action scope. The scope within which the agent is authorised to act. The scope is bounded by tools (Article 6), by memory (Article 7), by autonomy level (Article 3), by context (the kinds of inputs for which the agent is authorised), and by consequence cap (a monetary or severity ceiling).
The five components together produce a single authority sentence: The [agent identity] is authorised, by delegation from [delegating principal] exercising [organisational decision right] via [deployment authority], to act within [action scope]. If the sentence cannot be written, the agent should not be in production.
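The authority sentence can be treated as a structured record rather than free text, which makes the "cannot be written" test mechanical. A minimal sketch follows; the class name, field names, and example values are all illustrative, not a prescribed schema.

```python
from dataclasses import dataclass


@dataclass
class AuthorityChain:
    """One record per deployed agent. All field names are illustrative."""
    decision_right: str        # organisational decision right (from the policy register)
    delegating_principal: str  # human role, not an individual
    deployment_authority: str  # role that placed the agent in the chain
    agent_identity: str        # name + version + deployment environment
    action_scope: str          # tools, memory, autonomy level, context, consequence cap

    def sentence(self) -> str:
        # If any component is missing, the sentence cannot be written
        # and the agent should not be in production.
        for field_name, value in vars(self).items():
            if not value:
                raise ValueError(f"authority chain incomplete: {field_name} is empty")
        return (
            f"The {self.agent_identity} is authorised, by delegation from "
            f"{self.delegating_principal} exercising {self.decision_right} "
            f"via {self.deployment_authority}, to act within {self.action_scope}."
        )


# Hypothetical refund agent, following the worked example in this article.
chain = AuthorityChain(
    decision_right="the finance refund policy",
    delegating_principal="the customer-service manager",
    deployment_authority="the head of service operations",
    agent_identity="refund-agent v2.1 (production)",
    action_scope="refunds up to EUR 500 per ticket via the refund API",
)
print(chain.sentence())
```

An empty field raises instead of producing a plausible-looking but broken sentence, which mirrors the governance rule: a chain with a gap is not a chain.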
Where delegation breaks down
Four recurrent failure patterns account for most of the real-world incidents.
Pattern 1 — delegation to an unauthorised role
The team that builds the agent is not the team that holds the underlying decision right, and the decision right was never delegated to them. An engineering team builds a refund bot because it seemed like a useful automation, but finance policy never authorised anyone on that team to issue refunds. The agent issues a refund and a finance auditor asks, on whose authority? The honest answer is: no one’s. The chain is broken at the top.
Pattern 2 — scope exceeded at runtime
The agent is authorised to act within a scope, but at runtime it exceeds the scope because the scope was not technically enforced. The classic case is the agent authorised to “draft messages” that in fact sends them because the tool-use layer wasn’t scoped. Scope enforcement is both a technical control (Article 6) and a documentation control; it fails when either half is absent.
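The technical half of scope enforcement can be as simple as a gate that every tool call passes through before execution. The sketch below assumes a hypothetical tool allowlist and consequence cap; the tool names and the EUR 500 figure are illustrative.

```python
# Minimal runtime scope gate: the documented scope, expressed as data.
# "send_message" is deliberately absent, matching the draft-only delegation.
ALLOWED_TOOLS = {"draft_message"}
CONSEQUENCE_CAP_EUR = 500.0


def authorise_tool_call(tool: str, amount_eur: float = 0.0) -> bool:
    """Return True only if the call falls inside the documented scope."""
    if tool not in ALLOWED_TOOLS:
        return False  # scope exceeded: tool was never delegated
    if amount_eur > CONSEQUENCE_CAP_EUR:
        return False  # scope exceeded: consequence cap breached
    return True


assert authorise_tool_call("draft_message")
assert not authorise_tool_call("send_message")  # the classic failure, blocked here
assert not authorise_tool_call("draft_message", amount_eur=750.0)
```

The point of the sketch is that the allowlist is derived from the same action-scope text that appears in the authority-chain memo, so the documentation control and the technical control cannot silently diverge.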
Pattern 3 — orphan agents
An agent is running, but no one currently in role is its owner. The engineer who built it has left; the team that commissioned it was reorganised; the service that invokes it is maintained by a different team that treats it as a black box. Orphan agents eventually cause incidents, and they are hard to shut down in a hurry because no one owns the kill switch.
Pattern 4 — multi-agent authority confusion
In a multi-agent system, each agent has its own authority chain. Authority does not transfer transitively: when agent A delegates to agent B, agent B does not inherit agent A’s authority unless the delegation is explicit. Without explicit per-agent chains, the post-incident question “on whose authority did agent B do this?” yields no answer. Article 8 of this credential addresses multi-agent authority in detail.
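The non-transitivity rule can be checked mechanically: each agent carries its own chain, and an A-to-B delegation counts only if it is recorded. A sketch, with wholly hypothetical agent names and scopes:

```python
# Per-agent authority records. Delegation between agents must be explicit;
# nothing is inferred from who happens to call whom at runtime.
chains = {
    "planner-agent": {"scope": "plan refund workflows", "delegates_to": {"refund-agent"}},
    "refund-agent":  {"scope": "issue refunds up to EUR 500", "delegates_to": set()},
    "email-agent":   {"scope": "draft messages", "delegates_to": set()},
}


def delegation_is_explicit(caller: str, callee: str) -> bool:
    """A callee may act on a caller's request only if the delegation is recorded."""
    return callee in chains.get(caller, {}).get("delegates_to", set())


assert delegation_is_explicit("planner-agent", "refund-agent")
# No recorded delegation, so "on whose authority?" has no answer:
assert not delegation_is_explicit("planner-agent", "email-agent")
```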
The principal-agent framing
The principal-agent literature from organisational economics and agency law provides vocabulary for the governance analyst. A principal delegates to an agent. The agent acts on behalf of the principal but with a potentially different information set and potentially different incentives. The agency problem is the risk that the agent’s actions will diverge from the principal’s intent.
Applied to AI agents, the agency problem acquires an extra dimension. Traditional (human) agents have their own incentives; AI agents do not, but they have optimisation objectives that the principal may not have understood or specified completely. The practical effect is similar: the agent’s actions diverge from principal intent. Goal mis-specification and reward hacking (Article 9) are the AI-specific mechanisms of agency-problem failure.
The governance analyst uses principal-agent framing deliberately in documentation. A policy that says “the refund agent is delegated by the customer-service manager to issue refunds up to €500 per ticket, subject to manager override within seven days” is legible to lawyers, auditors, and external regulators in a way that a technical specification is not.
Legal implications — lessons from Moffatt v. Air Canada
The Moffatt v. Air Canada decision (2024 BCCRT 149) is the clearest public ruling to date on the legal status of a deployed AI agent. A grieving traveller used Air Canada’s chatbot on the airline’s website, which told him he could apply for a bereavement fare after the travel date. The airline’s actual policy did not permit retroactive application. Air Canada argued that the chatbot was “a separate legal entity that is responsible for its own actions.” The British Columbia Civil Resolution Tribunal rejected that argument directly. The tribunal held that Air Canada is responsible for all information on its website — including information provided by its chatbot — and ordered the airline to compensate the traveller. Source: https://decisions.civilresolutionbc.ca/crt/sc/en/item/525448/index.do.
The case yields several governance lessons. Agents are not distinct legal persons; the deployer owns their outputs. The fact that a tribunal faced the question at all indicates that the deployer-owner reasoning was not obvious to the deployer in advance. The information asymmetry that led to the incident — the chatbot told the customer something different from the policy — is a governance failure at the design stage: there was no mechanism to ensure the agent’s knowledge matched the published policy. The incident is agentic even though the agent in question was a relatively simple question-answering bot; the principle generalises to every more-autonomous agent the organisation deploys.
A second, lighter-weight example is the Chevrolet of Watsonville dealership chatbot incident of December 2023, in which a customer induced the dealership’s website chatbot to commit to selling a Chevrolet Tahoe for one dollar. The dealership declined to honour the commitment; the public discussion that followed centred on whether the commitment was legally enforceable. The incident documented the risk; it did not settle the legal question. Source: https://www.businessinsider.com/car-dealership-chatgpt-goes-rogue-2023-12.
Both cases share a pattern: the agent’s scope was implicit rather than explicit, the consequence of scope-exceeded action was material, and the deployer inherited both the technical failure and the legal exposure. Authority-chain documentation, action-scope enforcement (Article 6), and published human-oversight regimes (Article 5) are the preventive controls.
Documenting the chain
A single-page authority-chain memo per agent is the artifact. The memo should include:
- The five chain components in sentence form.
- The tools the agent may call, the parameter constraints on each, and any consequence caps.
- The human roles at each delegation step, by role name (not by individual — individuals change; roles persist).
- The reversibility status of each possible action.
- The escalation path if the agent’s output falls outside scope or if human oversight detects a deviation.
- The review cadence for the chain itself.
The memo feeds Article 5 (oversight design) and Article 14 (the governance pack).
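The memo fields above are structured enough to render from data, which keeps every agent's memo in the same shape for audit. A sketch, assuming hypothetical field names and values that follow the bullet list:

```python
# One-page authority-chain memo rendered from structured fields.
# Field names mirror the memo checklist; values are illustrative.
memo = {
    "authority_sentence": (
        "The refund-agent v2.1 (production) is authorised, by delegation from "
        "the customer-service manager exercising the finance refund policy via "
        "the head of service operations, to act within refunds up to EUR 500 "
        "per ticket via the refund API."
    ),
    "tools": [{"name": "refund_api", "constraint": "amount <= EUR 500 per ticket"}],
    "roles": ["customer-service manager", "head of service operations"],  # roles, not individuals
    "reversibility": {"issue_refund": "reversible within 7 days via manager override"},
    "escalation_path": "out-of-scope output -> team lead -> finance duty officer",
    "review_cadence": "quarterly, or on any agent version change",
}


def render_memo(m: dict) -> str:
    """Render the memo as a short markdown page, one line per field."""
    lines = ["# Authority-chain memo"]
    for key, value in m.items():
        lines.append(f"- **{key.replace('_', ' ')}**: {value}")
    return "\n".join(lines)


print(render_memo(memo))
```

Keeping the memo as data rather than prose also makes the review cadence enforceable: a pipeline can flag any agent whose memo has not been touched within its stated review window.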
Learning outcomes
A specialist who completes this article should be able to:
- Write a complete authority-chain sentence for a described agent, identifying the five components.
- Diagnose which of the four breakdown patterns applies to a described incident.
- Reason about legal implications of agent action, using Moffatt v. Air Canada as the anchor precedent.
- Produce a one-page authority-chain memo for a described agent.
Cross-references
- EATE-Level-3/M3.4-Art11-Agentic-AI-Governance-Architecture-Delegation-Authority-and-Accountability.md — expert-level delegation and accountability architecture.
- Article 3 of this credential — autonomy classification.
- Article 5 of this credential — human oversight design.
- Article 6 of this credential — tool-use governance and scope enforcement.
Diagrams
- OrganizationalMappingBridge — authority chain from board → executive → operational manager → agent, with decision-rights markers at each node.
- MatrixDiagram — 2×2 of decision reversibility × consequence severity mapping authority-chain depth required.