COMPEL Specialization — AITM-AAG: Agentic AI Governance Associate — Article 12 of 14
Definition. Regulatory obligations for agentic AI systems are the subset of applicable law and recognised standards that impose specific, enforceable expectations on the design, operation, oversight, and reporting of agentic deployments. The three frameworks carrying most of the weight in 2026 are the EU AI Act (Regulation (EU) 2024/1689), the NIST AI RMF (NIST AI 100-1 and the AI 600-1 Generative AI Profile), and ISO/IEC 42001:2023. Each is covered in greater depth elsewhere: the EU AI Act in the AITB-RCM credential, the others in the Core-stream governance articles. This article focuses specifically on how each applies to agentic systems.
Why the existing regulation applies to agents
A common misreading in early agentic discussions was that regulation “did not yet” address agentic systems. The reading was wrong. EU AI Act Articles 6 and 26 already apply to agentic systems that meet the high-risk criteria, Article 14 already governs their oversight, Article 15 already governs their accuracy and robustness, Article 50 already governs transparency for users interacting with them, and EU AI Act Article 5 already prohibits manipulative techniques deployed through them. The NIST AI RMF was designed to apply across AI modalities and applies to agents; ISO 42001’s management-system approach applies to agent operations the same as to any other AI operations.
What is new is the application of existing obligations to agentic-specific failure modes. The specialist’s job is to translate each general obligation into a specific control that addresses an agentic failure mode catalogued in Article 9 of this credential.
EU AI Act — obligations relevant to agents
Article 6 — high-risk classification
Article 6(2) points to Annex III, the list of use cases that place a system in the high-risk class unless a derogation under Article 6(3) applies. Agentic systems in Annex III categories — employment, access to essential services, law enforcement, administration of justice, migration, critical infrastructure — are high-risk. Many agents fall outside Annex III and are therefore not high-risk under the Act, but the governance posture for non-high-risk agents is not “nothing required”: the Article 50 transparency duties and the EU AI Act Article 5 prohibitions still apply, and voluntary codes of conduct and good-practice benchmarks still matter.
The AITB-RCM credential teaches Article 6 classification in depth. The specialist on the agentic-governance side needs enough classification literacy to spot when an agent moves into high-risk territory — for example, when a customer-service agent starts being used in an employment context.
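The re-classification trigger described above can be made operational. The sketch below is illustrative only: the category labels are shorthand for Annex III headings, the function is hypothetical, and an actual classification decision requires legal review under Article 6, not a set lookup.

```python
# Illustrative sketch: flag when an agent's declared use contexts drift into
# a (non-exhaustive, paraphrased) Annex III high-risk category.
ANNEX_III_CONTEXTS = {
    "employment", "essential_services", "law_enforcement",
    "administration_of_justice", "migration", "critical_infrastructure",
}

def flags_high_risk_review(use_contexts: set[str]) -> bool:
    """Return True when any declared use context matches an Annex III
    category, signalling that Article 6 classification must be re-run."""
    return bool(use_contexts & ANNEX_III_CONTEXTS)

# The customer-service agent from the example, drifting into hiring:
assert flags_high_risk_review({"customer_service", "employment"})
assert not flags_high_risk_review({"customer_service"})
```

A check like this belongs in the deployment-change pipeline, so that scope changes surface a classification review before they reach production.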
Article 14 — human oversight
Covered in Article 5 of this credential. For agentic systems the design burden is high because of their autonomy and speed of execution.
Article 15 — accuracy, robustness, cybersecurity
Article 15 requires that high-risk systems achieve an appropriate level of accuracy, robustness, and cybersecurity throughout their lifecycle. Applied to agents:
- Accuracy is measured at the outcome level, not only at the model level. A helpful-and-correct-looking output that causes a downstream wrong decision is not accurate. The Moffatt v. Air Canada incident (where a British Columbia tribunal held the airline liable for its chatbot’s incorrect bereavement-fare advice) is an accuracy failure by this reading.
- Robustness includes resilience to adversarial input, which for agents includes indirect prompt injection (Article 7), tool-response poisoning (Article 6), and multi-agent manipulation (Article 8).
- Cybersecurity includes the protection of the agent’s identity, its tools’ credentials, its memory stores, and its audit records.
The OWASP Top 10 for Agentic AI and MITRE ATLAS technique catalogues are the practical substrate for Article 15 cybersecurity compliance in agentic deployments. Sources: https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/ ; https://atlas.mitre.org/.
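One practical way to use those catalogues for Article 15 evidence is a coverage check: every in-scope threat category must map to at least one implemented control. The threat labels below are illustrative paraphrases, not the official OWASP Agentic AI or MITRE ATLAS identifiers, and the function is a hypothetical sketch.

```python
# Minimal coverage-check sketch for Article 15 cybersecurity: each agentic
# threat category in scope maps to the controls that address it.
threat_controls = {
    "indirect_prompt_injection": ["input_provenance_filtering", "tool_allowlist"],
    "tool_response_poisoning": ["response_schema_validation"],
    "memory_poisoning": [],  # gap: no control implemented yet
    "credential_theft": ["scoped_short_lived_tokens"],
}

def coverage_gaps(mapping: dict[str, list[str]]) -> list[str]:
    """Threat categories with no implemented control: each is an
    Article 15 robustness/cybersecurity gap to remediate or accept."""
    return sorted(t for t, controls in mapping.items() if not controls)

assert coverage_gaps(threat_controls) == ["memory_poisoning"]
```

The value of the structure is the empty list: a gap is a recorded, reviewable decision rather than an unexamined omission.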
Article 26 — deployer obligations
Article 26 places specific obligations on deployers of high-risk AI systems. Relevant to agents:
- Use in accordance with instructions (deployer cannot repurpose an agent outside the scope the provider documented).
- Human oversight by persons with appropriate skills (Article 5 of this credential).
- Input-data relevance and representativeness (for agents, this includes the documents and external content the agent retrieves).
- Log-keeping where under the deployer’s control (the audit-record obligation, Article 10 of this credential).
- Cooperation with authorities.
- Incident reporting.
A fundamental-rights impact assessment (Article 27) applies for certain Annex III public-sector and private-sector uses. Where it applies, an agentic system’s autonomy is a relevant factor in the assessment.
Article 50 — transparency duty
Article 50 requires that providers design systems so that natural persons interacting with AI systems are informed that they are interacting with AI, unless this is obvious. Applied to agents:
- A customer-service agent must disclose that it is AI.
- A voice agent must inform the caller.
- The disclosure is not a one-line legal footer; it is a usability-design question.
Article 50 also covers labelling of deepfakes and of AI-generated content in contexts where disclosure is required. For agents that produce user-facing content, the labelling obligation applies.
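As a usability-design question, the disclosure duty translates into where and when the disclosure is rendered, not just whether it exists somewhere. The sketch below is one minimal pattern, assuming a turn-indexed chat session; the function name and message text are illustrative, not Act language, and a real deployment would also cover voice channels and content labelling.

```python
# Sketch: deliver the AI disclosure at the start of the interaction,
# rather than burying it in a legal footer.
DISCLOSURE = "You are chatting with an AI assistant."

def render_reply(reply: str, turn_index: int) -> str:
    """Prefix the first turn of a session with the AI disclosure."""
    if turn_index == 0:
        return f"{DISCLOSURE}\n\n{reply}"
    return reply

assert render_reply("How can I help?", 0).startswith(DISCLOSURE)
assert render_reply("Done.", 3) == "Done."
```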
EU AI Act Article 5 — manipulation and emotion-recognition prohibitions
EU AI Act Article 5 prohibits, with narrow exceptions, AI systems that deploy subliminal or purposefully manipulative techniques that materially distort behaviour in ways likely to cause significant harm. It also prohibits emotion-recognition systems in workplace and education settings and biometric categorisation systems that infer sensitive attributes. (Earlier drafts of the Act carried different numbering; in the final Regulation the prohibitions sit in Article 5 and the transparency duties in Article 50.) Agentic systems that incorporate emotion recognition or that adapt persuasive behaviour based on inferred user state should be reviewed against EU AI Act Article 5 before deployment.
NIST AI RMF — agentic translation
The NIST AI RMF (NIST AI 100-1, January 2023) is structured as four functions — GOVERN, MAP, MEASURE, MANAGE — with categories and subcategories within each. The framework is voluntary in the United States but is widely adopted, and in U.S. federal contexts it is increasingly treated as a de facto baseline.
GOVERN — agentic translations
- GOVERN 2.1 (roles, responsibilities, lines of communication) — the authority chain of Article 4 of this credential.
- GOVERN 4 (culture and AI risk management) — the governance posture toward agent deployment decisions.
MAP — agentic translations
- MAP 1 (context) — the agent’s intended use, users, and environment.
- MAP 2 (categorisation) — the autonomy classification of Article 3.
- MAP 4 (risk identification) — the agentic risk taxonomy of Article 9.
MEASURE — agentic translations
- MEASURE 2.6 (safety) — the containment and kill-switch design of Article 11.
- MEASURE 2.8 (accountability) — the audit-record stack of Article 10.
MANAGE — agentic translations
- MANAGE 1.4 (incident response) — the incident-response playbooks of Article 11.
- MANAGE 4 (post-deployment monitoring) — the observability of Article 10.
NIST AI 600-1 — the GenAI Profile
The July 2024 Generative AI Profile catalogues twelve risks (§§2.1–2.12) with associated actions for each RMF function. Several are directly agentic:
- §2.9 Information Security — cybersecurity of agent deployments.
- §2.10 Intellectual Property — IP exposure in agent outputs.
- §2.12 Value Chain and Component Integration — the supply-chain risks of Article 13 of this credential.
Source: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf.
ISO/IEC 42001 — management-system obligations
ISO/IEC 42001:2023 is the international management-system standard for AI. It is auditable — an organisation can certify against it, and a certificate is a signal to counterparties and regulators that the organisation has a working management system.
Clauses relevant to agent operations:
- Clause 6.1.2 (AI risk assessment) — the risk taxonomy of Article 9 applied through the 42001 risk-assessment process.
- Clause 6.1.3 (AI risk treatment) — the controls of Articles 5, 6, 7, 10, 11 of this credential.
- Clause 8.1 (operational planning and control) — how agent deployments are planned and controlled in operation.
- Clause 9.1 (monitoring, measurement, analysis, and evaluation) — the observability of Article 10 and the review cadence of Article 2.
Source: https://www.iso.org/standard/81230.html.
Italian Garante Replika decision — an early European regulatory action
In February 2023, Italy’s Garante (the national data-protection authority) imposed an emergency limitation on Replika, a conversational AI chatbot, restricting its processing of Italian users’ personal data. The Garante cited the system’s treatment of emotional interaction and the absence of age verification to keep children out. The case predates the EU AI Act’s full application but illustrates the kinds of concerns a national supervisor brings. Source: https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9852214.
The Garante Replika action is a useful anchor for discussions of Article 50 transparency and EU AI Act Article 5 manipulation concerns applied to agentic systems. It demonstrates that regulators are prepared to act against conversational AI deployments even under pre-AI-Act authorities, and it previews the enforcement posture likely to emerge under the Act itself.
The control-to-regulation mapping artifact
For each high-risk agent, the specialist produces a mapping artifact with one row per regulatory clause and one column per control. The mapping names where the control is implemented, who owns it, and how it is evidenced. Regulators reviewing a deployment will ask for such a mapping; assembling it after the ask is harder than building it during design.
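The artifact described above can be kept as plain data long before it lives in a GRC tool. The sketch below is one possible row shape under illustrative field names; the clause strings, roles, and evidence locations are hypothetical examples, not prescribed values.

```python
# Hedged sketch of the control-to-regulation mapping artifact: one row per
# (clause, control) pair, recording the owner and where the evidence lives.
from dataclasses import dataclass

@dataclass
class MappingRow:
    clause: str    # e.g. "EU AI Act Art. 15"
    control: str   # the implemented control
    owner: str     # accountable role
    evidence: str  # evidence location; "" means not yet evidenced

rows = [
    MappingRow("EU AI Act Art. 14", "approval gate on irreversible actions",
               "ops-lead", "runbook#approvals"),
    MappingRow("EU AI Act Art. 15", "tool-response validation",
               "platform-eng", ""),
    MappingRow("ISO/IEC 42001 9.1", "agent observability dashboard",
               "sre", "dashboards/agents"),
]

def unevidenced(rows: list[MappingRow]) -> list[str]:
    """Clauses whose mapped control has no evidence yet: the list a
    regulator's document request would surface."""
    return [r.clause for r in rows if not r.evidence]

assert unevidenced(rows) == ["EU AI Act Art. 15"]
```

Querying the artifact for unevidenced rows during design is exactly the work that becomes painful when it is first attempted after a regulator asks.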
A note on regulatory movement
The agentic regulatory environment is the fastest-moving area of AI regulation. Between the publication of this article (April 2026) and any subsequent re-read, expect:
- Delegated acts and guidance from the EU AI Office that address agentic systems specifically.
- Possible publication of a NIST agentic AI profile companion to AI 600-1.
- Possible ISO agentic-specific addenda or sector-specific standards.
- National-level rules (U.S. state, U.K. sectoral regulators, other jurisdictions) that adapt general frameworks to agentic specifics.
The specialist should schedule a quarterly refresh of the regulatory-mapping artifact against the latest guidance. Static compliance in a dynamic regulatory environment is a failure mode of its own.
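The quarterly-refresh discipline can be enforced mechanically by flagging mapping rows whose last review predates the refresh window. In the sketch below the 90-day window, clause labels, and dates are illustrative policy choices, not regulatory requirements.

```python
# Sketch: flag mapping rows overdue for re-review against latest guidance.
from datetime import date, timedelta

REFRESH_WINDOW = timedelta(days=90)

def stale_rows(last_reviewed: dict[str, date], today: date) -> list[str]:
    """Clauses whose mapping row is overdue for re-review."""
    return sorted(c for c, d in last_reviewed.items()
                  if today - d > REFRESH_WINDOW)

reviews = {
    "EU AI Act Art. 15": date(2026, 1, 10),
    "NIST AI 600-1 Information Security": date(2026, 4, 1),
}
assert stale_rows(reviews, date(2026, 4, 20)) == ["EU AI Act Art. 15"]
```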
Learning outcomes — confirm
A specialist who completes this article should be able to:
- Name the EU AI Act articles applicable to agents and translate each into concrete controls.
- Map the NIST AI RMF functions to agentic controls covered elsewhere in this credential.
- Name the ISO/IEC 42001 clauses relevant to agent operations.
- Produce a regulatory-mapping artifact for a described high-risk agent.
Cross-references
- EATF-Level-1/M1.5-Art03-Building-an-AI-Governance-Framework.md — Core article on governance framework.
- AITB-RCM credential — EU AI Act Risk Classification Specialist (classification depth).
- AITB-LAG credential — LLM Risk and Governance Specialist.
- Article 5 of this credential — EU AI Act Article 14 oversight design.
- Article 13 of this credential — cross-organisational and supply-chain agents.
Diagrams
- BridgeDiagram — regulation clauses on one side, concrete agentic controls on the other, with mapping beams.
- MatrixDiagram — 2×2 matrix of EU-deployed (yes/no) × high-risk-classified (yes/no), with each cell mapped to its obligation bundle.
Quality rubric — self-assessment
| Dimension | Self-score (of 10) |
|---|---|
| Technical accuracy (every article and clause cite verifiable) | 10 |
| Technology neutrality (regulatory anchors, not vendor frameworks) | 10 |
| Real-world examples ≥2 (EU AI Act text, NIST AI 600-1, ISO 42001, Garante Replika) | 10 |
| AI-fingerprint patterns | 9 |
| Cross-reference fidelity | 10 |
| Word count (target 2,500 ± 10%) | 10 |
| Weighted total | 92 / 100 |