AITM M1.6-Art03 v1.0 Reviewed 2026-04-06 Open Access
M1.6 People, Change, and Organizational Readiness
AITF · Foundations

Autonomy Classification

Autonomy Classification — Organizational Change & Culture — Applied depth — COMPEL Body of Knowledge.

10 min read · Article 3 of 18

COMPEL Specialization — AITM-AAG: Agentic AI Governance Associate · Article 3 of 14


Definition. The autonomy classification of an agent is the formal placement of that agent on a defined spectrum, together with the supervision and approval controls that apply at that level. Classification is a governance artifact, not a technical specification. It binds the engineering facts about the agent (tools, memory, control loop, human-in-the-loop cadence) to the governance decisions the organisation has made about what latitude the agent is permitted in production.

A classified agent can be governed. An unclassified agent is, in practice, governed by whoever built it and whatever they thought was reasonable on the day they deployed it. The classification is what allows a compliance officer, an internal auditor, or a regulator to read an agent’s operating profile off a single line in the register.

The Level 0–5 rubric

Article 1 of this credential sketched a six-level spectrum. This article expands it into a rubric a specialist can use to classify any agent. The six levels are named by the supervisory regime they imply, not by capability claims about the underlying model. A GPT-class or Claude-class or Llama-class model can run at any of Levels 0 through 4 depending on how it is deployed; the model does not determine the level.

| Level | Name | Human in the loop | Example | Typical controls |
|---|---|---|---|---|
| 0 | Assisted | Every turn | Single-turn chat assistant | Prompt/response log |
| 1 | Advisor | Every consequential output | Multi-turn chat, recommendation only | Advisory-only labelling, no action tools |
| 2 | Bounded executor | Approval per action | Code assistant with test runner | Per-action approval, tool allow-list, sandbox |
| 3 | Supervised executor | Approval per session or plan | Research assistant with multi-step web search | Plan approval, step log, post-session review |
| 4 | Autonomous executor | Guardrail-defined | Back-office workflow agent running overnight | Budget + step caps, hard kill-switch, dashboard |
| 5 | Self-directing | Exceptional | Research frontier, not enterprise-ready | Capability-level controls, deliberate isolation |

Level 5 is outside normal enterprise production. Any classification that places a commercial deployment at Level 5 should be challenged by the Methodology Lead. Operator-side frameworks including the Anthropic Responsible Scaling Policy (2024) and the OpenAI Preparedness Framework (December 2023, updated 2024) treat capability levels that approach Level 5 as requiring deliberate, pre-announced safety gates — the governance analyst treats a Level 5 assignment the same way. Sources: https://www.anthropic.com/responsible-scaling-policy ; https://openai.com/safety/preparedness-framework/.
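The rubric above can be encoded as data so that the level-to-control binding is machine-readable rather than buried in prose. A minimal sketch, assuming a Python register implementation; the `AutonomyLevel` class and `RUBRIC` mapping are illustrative names, with the contents taken from the table above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyLevel:
    """One row of the Level 0-5 rubric: a supervisory regime, not a capability claim."""
    level: int
    name: str
    human_in_the_loop: str
    typical_controls: tuple[str, ...]

# Contents follow the rubric table; the encoding itself is an assumption.
RUBRIC = {
    0: AutonomyLevel(0, "Assisted", "Every turn",
                     ("Prompt/response log",)),
    1: AutonomyLevel(1, "Advisor", "Every consequential output",
                     ("Advisory-only labelling", "No action tools")),
    2: AutonomyLevel(2, "Bounded executor", "Approval per action",
                     ("Per-action approval", "Tool allow-list", "Sandbox")),
    3: AutonomyLevel(3, "Supervised executor", "Approval per session or plan",
                     ("Plan approval", "Step log", "Post-session review")),
    4: AutonomyLevel(4, "Autonomous executor", "Guardrail-defined",
                     ("Budget + step caps", "Hard kill-switch", "Dashboard")),
    5: AutonomyLevel(5, "Self-directing", "Exceptional",
                     ("Capability-level controls", "Deliberate isolation")),
}
```

Encoding the rubric this way lets the register validate that every agent row names a level that exists and carries that level's control bundle.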

Classification criteria — how to place an agent on the rubric

The rubric looks simple in the table above, but applying it is not, because the level a system actually operates at often differs from the level its designers intended. Four criteria govern the placement.

Criterion 1 — human-in-the-loop cadence

What is the longest stretch of execution the agent performs without a human approval or review gate? A chat assistant is at most Level 1 because every response is returned to a human. A research agent that runs for ten minutes unattended before producing a draft is Level 3 or Level 4 depending on whether a plan was pre-approved. An overnight workflow agent that completes a 200-step sequence with no human intervention is Level 4.

The cadence is measured in actions, not wall-clock time. A five-second agent that calls twenty tools in sequence is further along the spectrum than a thirty-second agent that calls one tool.
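Measuring cadence in actions can be sketched directly: given an ordered event trace, count the longest run of agent actions between human gates. The event labels (`tool_call`, `human_approval`) are hypothetical log values, not a prescribed schema:

```python
def longest_unattended_run(events: list[str]) -> int:
    """Longest run of consecutive agent actions with no human gate between them."""
    longest = current = 0
    for event in events:
        if event == "human_approval":
            current = 0          # a human gate resets the unattended counter
        else:                    # any agent action, e.g. "tool_call"
            current += 1
            longest = max(longest, current)
    return longest

# A five-second agent calling twenty tools in sequence scores 20 unattended
# actions; a thirty-second agent calling one tool scores 1.
trace = ["tool_call"] * 20 + ["human_approval"] + ["tool_call"]
print(longest_unattended_run(trace))  # 20
```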

Criterion 2 — reversibility of actions

An agent that only reads data is classified lower than an agent of the same cadence that writes data. An agent that writes reversibly (e.g., drafts a document for review) is classified lower than one that writes irreversibly (e.g., sends an email, executes a trade, commits to a purchase). The reversibility criterion is the reason a read-only RAG chatbot rarely exceeds Level 1 regardless of architectural complexity.

Criterion 3 — scope of tool surface

An agent with access to three narrow tools operates at a lower effective autonomy than an agent with access to thirty or to a general-purpose shell. Browser-use agents — Anthropic’s Computer Use (October 2024 release) and OpenAI’s Operator (January 2025 release) are named public examples, both documented by their vendors — have a general-purpose tool surface (the whole web, reached through a browser). They start at Level 3 and often reach Level 4 depending on how they are supervised. A bounded-tool agent with the same underlying model may sit at Level 2.

Criterion 4 — consequence severity

An agent whose worst-case failure is a misformatted document is classified lower than an agent whose worst-case failure is a financial or safety consequence. A customer-service chatbot in retail sits lower than a customer-service chatbot in banking even if their technical architectures are identical. The Moffatt v. Air Canada decision (2024) — covered in depth in Article 4 of this credential — is the emblematic precedent for why consequence severity matters: a chatbot at what the airline presumably classified as Level 1 or 2 was found by the British Columbia Civil Resolution Tribunal to have committed the airline to a policy the airline disputed, and the tribunal rejected the argument that the chatbot was a separate legal entity. Source: https://decisions.civilresolutionbc.ca/crt/sc/en/item/525448/index.do.
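The four criteria can be combined into a first-pass placement heuristic whose output a human classifier then reviews. Everything here is an assumption for illustration: the thresholds, the tool-count cut-off, and the rule that severity can only raise a proposed level are not part of the COMPEL rubric:

```python
def propose_level(max_unattended_actions: int,
                  writes_irreversibly: bool,
                  tool_count: int,
                  high_consequence: bool) -> int:
    """Propose a Level 0-4 placement from the four criteria (heuristic sketch)."""
    # Criterion 1: human-in-the-loop cadence, measured in actions.
    if max_unattended_actions == 0:
        level = 0   # every turn gated: Assisted
    elif max_unattended_actions == 1:
        level = 2   # per-action approval: Bounded executor
    elif max_unattended_actions <= 50:
        level = 3   # per-plan approval: Supervised executor
    else:
        level = 4   # guardrail-defined: Autonomous executor
    # Criteria 2 and 3: irreversible writes or a broad tool surface push
    # the effective autonomy up the spectrum (threshold of 10 is assumed).
    if writes_irreversibly or tool_count > 10:
        level = min(level + 1, 4)
    # Criterion 4: consequence severity tightens, never loosens, the regime.
    if high_consequence:
        level = min(level + 1, 4)
    return level

# An overnight workflow agent: 200 unattended steps -> Level 4.
print(propose_level(200, writes_irreversibly=False,
                    tool_count=3, high_consequence=False))  # 4
```

Note the deliberate ceiling at Level 4: per the rubric, a Level 5 proposal should never come out of a routine heuristic and must be challenged by the Methodology Lead.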

The classification artifact

The classification for each agent is recorded in a single row. The row is short by design so that the register can be scanned at a glance.

| Field | Example |
|---|---|
| Agent ID | finance-research-agent-v3 |
| Level | 3 — Supervised executor |
| Architectural pattern | Plan-act-observe |
| Primary tools | Web search, EDGAR reader, internal CRM read, draft-note write |
| Memory scope | Session only, flushed at end of task |
| Human-in-the-loop | Plan approval required before execution |
| Kill-switch | Coordinator revoke-token, verified weekly |
| Owner | Named individual |
| Review cadence | Quarterly |
| Last reclassification | 2026-01-15, triggered by tool addition |

The organisation may add columns. The fields above are the minimum. Any agent without every field populated fails the readiness check for production.
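As a sketch, the minimum row can be represented as a record with a mechanical completeness check. Field names mirror the table above, but the class is an illustrative assumption, not a COMPEL schema:

```python
from dataclasses import dataclass, fields

@dataclass
class ClassificationRecord:
    """One register row: the minimum fields for a production-ready agent."""
    agent_id: str
    level: str
    architectural_pattern: str
    primary_tools: str
    memory_scope: str
    human_in_the_loop: str
    kill_switch: str
    owner: str
    review_cadence: str
    last_reclassification: str

def passes_readiness_check(record: ClassificationRecord) -> bool:
    """An agent without every field populated fails the production check."""
    return all(str(getattr(record, f.name)).strip() for f in fields(record))

row = ClassificationRecord(
    agent_id="finance-research-agent-v3",
    level="3 — Supervised executor",
    architectural_pattern="Plan-act-observe",
    primary_tools="Web search, EDGAR reader, internal CRM read, draft-note write",
    memory_scope="Session only, flushed at end of task",
    human_in_the_loop="Plan approval required before execution",
    kill_switch="Coordinator revoke-token, verified weekly",
    owner="Named individual",
    review_cadence="Quarterly",
    last_reclassification="2026-01-15, triggered by tool addition",
)
print(passes_readiness_check(row))  # True
```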

Reclassification triggers

A classification is stale the moment the agent’s configuration or behaviour changes in ways the level did not anticipate. Seven triggers should be standing rules:

  1. Model change. The engineering team swaps the underlying model (e.g., from an Anthropic to an OpenAI model, or from a managed API to a self-hosted Llama 3 or Mistral model). Reclassification confirms behaviour is equivalent.
  2. Tool addition or removal. Any change to the tool registry triggers review.
  3. Memory-scope change. From ephemeral to persistent, or from per-user to shared, triggers review.
  4. Human-in-the-loop cadence change. Moving from per-action to per-session approval is almost always an autonomy-level increase.
  5. Environment change. Promoting an agent from staging to production; from one region to another; from a sandbox tenant to a production tenant.
  6. Incident. Any agentic incident triggers reclassification as part of the post-incident review.
  7. Regulatory change. Publication of new EU AI Office guidance, a NIST AI RMF update, or a relevant national law change that alters the control expectations.

Triggered reclassifications are logged. The inventory’s change history is itself a governance artifact.
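Triggers 1 through 5 are configuration diffs and can be detected mechanically by comparing snapshots of an agent's registered configuration (incidents and regulatory changes arrive from outside the config and need their own feeds). The field names below are hypothetical:

```python
# Map of watched configuration fields to the standing-rule trigger each fires.
WATCHED_FIELDS = {
    "model": "Model change",
    "tools": "Tool addition or removal",
    "memory_scope": "Memory-scope change",
    "hitl_cadence": "Human-in-the-loop cadence change",
    "environment": "Environment change",
}

def reclassification_triggers(before: dict, after: dict) -> list[str]:
    """Return the standing-rule triggers fired by a configuration change."""
    return [label for field, label in WATCHED_FIELDS.items()
            if before.get(field) != after.get(field)]

fired = reclassification_triggers(
    {"model": "model-a", "tools": ["search"], "environment": "staging"},
    {"model": "model-a", "tools": ["search", "shell"], "environment": "prod"},
)
print(fired)  # ['Tool addition or removal', 'Environment change']
```

Logging each fired trigger alongside the diff is what makes the inventory's change history a governance artifact rather than an engineering changelog.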

Drift risk — when the lived autonomy diverges from the classified autonomy

The most insidious failure of autonomy classification is drift: the classification says Level 2 but the agent’s actual operation has moved to Level 3 because engineers quietly extended its tool surface, extended its session length, or reduced the human-in-the-loop cadence without triggering a formal reclassification. Drift is why review cadence exists.

The drift-risk indicators to monitor:

  • Step budget utilisation. If an agent routinely runs near its step cap, the cap is probably masking that it has moved up the spectrum.
  • Approval override rate. If human reviewers are rubber-stamping plans (high approval rate with low modification rate), the effective supervisory regime has degraded, even if the nominal regime is unchanged.
  • Tool-use breadth. A steady expansion of which tools are actually being called, even within a stable registry, is a behaviour drift.
  • Memory growth. Persistent stores that grow without a retention cycle are a silent expansion of state scope.

Observability (Article 10) is what makes drift detection possible.
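Two of the indicators above can be computed directly from observability data. The function names and the example figures are illustrative assumptions, not prescribed metrics:

```python
def step_budget_utilisation(steps_used: list[int], step_cap: int) -> float:
    """Mean fraction of the step cap consumed per session.
    Values persistently near 1.0 suggest the cap is masking an autonomy increase."""
    return sum(steps_used) / (len(steps_used) * step_cap)

def approval_override_rate(approved_unmodified: int, modified: int) -> float:
    """Share of plans approved without modification.
    A high rate suggests rubber-stamping: the effective regime has degraded."""
    total = approved_unmodified + modified
    return approved_unmodified / total if total else 0.0

# An agent routinely near its cap, whose reviewers approve 98% of plans
# unmodified, is a drift-review candidate even if its config is unchanged.
print(step_budget_utilisation([95, 98, 97], step_cap=100))        # ~0.967
print(approval_override_rate(approved_unmodified=98, modified=2)) # 0.98
```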

Two real-world anchors for classification

Anthropic ASL framework — public autonomy-classification analogue

Anthropic’s Responsible Scaling Policy, refreshed in 2024, defines AI Safety Levels (ASL) tied to capability thresholds and required safety measures. The policy is not identical to enterprise agent-governance classification, but the underlying logic is the same: place systems on a defined spectrum, and attach named controls to each position on the spectrum. The governance analyst studies ASL as a model of how to write the spectrum-to-control binding in a form that survives external scrutiny. Source: https://www.anthropic.com/responsible-scaling-policy.

OpenAI Preparedness Framework — parallel operator-side example

OpenAI’s Preparedness Framework, first published in December 2023 and updated in 2024, defines preparedness levels for frontier capabilities. The two frameworks (Anthropic’s ASL and OpenAI’s Preparedness) are independent but structurally similar. Google DeepMind’s Frontier Safety Framework (2024) is a third parallel. The governance analyst should be aware that multiple operators maintain such frameworks, cite each by name when contextualising agent classification, and privilege none as definitive. Source: https://openai.com/safety/preparedness-framework/.

Classification, not capability claim

The most common mistake in early classification work is conflating autonomy level with capability claim. A Level 4 classification does not mean the agent is more capable than a Level 2 agent; it means the agent is operated with less supervisory friction. A Level 2 agent on an advanced model can be deliberately restricted; a Level 4 agent on a simpler model may have been carefully engineered to justify its reduced supervision. The rubric is about operating regime, not about model horsepower. Models appear once in the classification — as a note — because the supervisory regime is what the organisation actually controls and what an external auditor will actually review.

Learning outcomes — confirm

A specialist who completes this article should be able to:

  • Recite the six-level autonomy rubric and name the control bundle associated with each level.
  • Apply the four classification criteria to place three described agents on the rubric.
  • Design a reclassification trigger policy for a described agent portfolio.
  • Evaluate a classification record for drift risk and name the indicators to monitor.

Cross-references

  • EATF-Level-1/M1.2-Art20-Agent-Autonomy-Classification.md — Core article on agent autonomy classification.
  • Article 1 of this credential — what agentic AI is.
  • Article 2 of this credential — architecture patterns and inventory.
  • Article 10 of this credential — agent observability.

Diagrams

  • ConcentricRingsDiagram — autonomy levels as concentric rings with supervision controls per ring.
  • StageGateFlow — classification lifecycle: initial → quarterly review → triggered reclassification → archive.

Quality rubric — self-assessment

| Dimension | Self-score (of 10) |
|---|---|
| Technical accuracy (level-control mapping traceable to public frameworks) | 9 |
| Technology neutrality (Anthropic ASL, OpenAI Preparedness, DeepMind FSF all named; Llama, Mistral called out) | 10 |
| Real-world examples ≥2, public sources | 10 |
| AI-fingerprint patterns | 9 |
| Cross-reference fidelity | 10 |
| Word count (target 2,500 ± 10%) | 10 |
| Weighted total | 91 / 100 |