Cross-Cutting Enabler
Agent Governance
Autonomy, safety, and control for agentic AI systems.
Agent Governance addresses the unique risks and controls required for agentic AI — systems that plan, act, and use tools with varying degrees of autonomy. It covers autonomy classification, tool access controls, human-in-the-loop (HITL) design, multi-agent orchestration, and the guardrails that keep agentic systems safe in production. In COMPEL, agent governance is a first-class cross-cutting enabler alongside value realization and operational readiness.
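To make the interplay between autonomy classification and tool access controls concrete, here is a minimal sketch. The autonomy levels, tool names, and policy table are all hypothetical illustrations, not part of the COMPEL framework itself; the idea is simply that each tool declares the highest autonomy level at which it may run without a human in the loop.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative autonomy classification (hypothetical levels)."""
    SUGGEST = 1      # agent proposes; a human executes
    APPROVE = 2      # agent acts only after explicit human approval
    SUPERVISED = 3   # agent acts; a human monitors and can interrupt
    AUTONOMOUS = 4   # agent acts freely within preset guardrails

# Hypothetical tool policy: the ceiling is the highest autonomy level
# at which the tool may be invoked without HITL review.
TOOL_POLICY = {
    "search_docs": AutonomyLevel.AUTONOMOUS,   # low-risk, read-only
    "send_email": AutonomyLevel.APPROVE,       # external side effects
    "delete_records": AutonomyLevel.SUGGEST,   # destructive action
}

def requires_human(tool: str, agent_level: AutonomyLevel) -> bool:
    """True when invoking `tool` at `agent_level` exceeds the tool's ceiling."""
    # Unknown tools default to the most restrictive ceiling.
    ceiling = TOOL_POLICY.get(tool, AutonomyLevel.SUGGEST)
    return agent_level > ceiling
```

In this sketch an agent running at `SUPERVISED` can call `search_docs` directly but is routed to human review before `send_email` or `delete_records`; the articles below cover how such controls extend to multi-agent orchestration and audit trails.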
Core articles
- M1.2 Evaluating Agentic AI: Goal Achievement and Behavioral Assessment
- M1.2 Agent Learning, Memory, and Adaptation: Governance Implications
- M1.4 Agentic AI Architecture Patterns and the Autonomy Spectrum
- M1.4 Tool Use and Function Calling in Autonomous AI Systems
- M1.5 Grounding, Retrieval, and Factual Integrity for AI Agents
- M1.5 Safety Boundaries and Containment for Autonomous AI
- M2.2 Agentic AI Maturity Assessment: Extending the 18-Domain Model
- M2.4 Human-Agent Collaboration Patterns and Oversight Design
- M2.4 Operational Resilience for Agentic AI: Failure Modes and Recovery
- M2.5 Designing Measurement Frameworks for Agentic AI Systems
- M2.5 Audit Trails and Decision Provenance in Multi-Agent Systems
- M2.5 Agentic AI Cost Modeling: Token Economics, Compute Budgets, and ROI
- M3.3 Enterprise Agentic AI Platform Strategy and Multi-Agent Orchestration
- M3.4 Agentic AI Governance Architecture: Delegation, Authority, and Accountability
- M3.4 Agentic AI Risk Taxonomy and Enterprise Risk Framework Extension
- M4.3 Cross-Organizational Agentic AI Governance and Policy Frameworks
- M4.5 Industry Standards for Agentic AI: ISO, NIST, and Emerging Frameworks