COMPEL Specialization — AITM-AAG: Agentic AI Governance Associate Article 13 of 14
Definition. A cross-organisational agent is an agentic system that, in the course of its ordinary operation, acts on behalf of one organisation against systems, people, or agents of another. A supply-chain agent is a specific cross-organisational sub-case in which the agent is part of an end-to-end value chain — the agent of organisation A interacting with the agent or system of organisation B as part of a contracted workflow. The governance obligations do not end at the organisation’s perimeter; they extend across it, reciprocally.
Four sub-patterns appear in 2025-era enterprise deployments. The specialist learns to recognise each because they carry different governance needs.
| Sub-pattern | Example |
|---|---|
| Vendor-to-customer agent | Customer-service agent of Company A acting on the account of User B, a customer |
| Customer-to-vendor agent | Browser-use agent of User B interacting with Company A’s website |
| Organisation-to-organisation A2A | Procurement agent of Company A negotiating with sales agent of Company C |
| Agent marketplace | Task-broker agent routing to third-party agent providers |
The EU AI Act Articles 16 (provider obligations), 23 (importer obligations), 24 (distributor obligations), 25 (substantial modification), 26 (deployer obligations), and Annex IV (technical documentation) all place duties that interact with cross-organisational deployments. Supply-chain risk is explicitly catalogued in NIST AI 600-1 §2.11 Value Chain and Component Integration. Source: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf.
The governance axis: who controls the agent, who is the counterparty?
The governance analyst starts with a two-question screen.
- Is the agent ours or the counterparty’s?
- Is the counterparty ours (an internal affiliate) or a third party?
| Agent | Counterparty | Governance burden |
|---|---|---|
| Ours | Internal | Normal intra-organisational controls |
| Ours | Third party | Added contractual controls; reciprocal audit |
| Counterparty | Internal | The agent’s operator must meet our expectations |
| Counterparty | Third party | We have duties as a deployer of the counterparty’s agent |
The table crystallises the governance analyst’s attention. An “our agent, their system” deployment requires contractual controls on what our agent may do. A “their agent, our system” deployment requires our exposure controls — what the counterparty’s agent may touch in our environment and how we detect misbehaviour.
Contractual controls
A cross-organisational agent relationship should be governed by a written contract that names the following, at minimum:
- Identity and authentication. How each side proves its agent’s identity to the other. Mutual-TLS, signed tokens, or equivalent.
- Authorised scope. What actions the agent is authorised to perform on the counterparty’s systems. The scope is explicit; anything not explicit is prohibited.
- Rate limits. Maximum request rate, maximum concurrent sessions.
- Data handling. What data the agent may receive, transmit, retain; encryption in transit and at rest; purpose limitation.
- Audit reciprocity. Each side provides the other with audit records relevant to the joint activity, within a defined latency.
- Incident reporting. Each side notifies the other of any incident involving the joint workflow within a defined window (e.g., 24 hours for material incidents; less for security-critical ones).
- Change notification. Either side notifying the other of material changes to the agent’s configuration — model swap, tool-registry change, autonomy-level change.
- Liability and indemnification. Who bears the financial consequence of what kind of failure.
- Termination and wind-down. How the relationship ends cleanly, including decommissioning of identities and removal of shared memory or knowledge.
The contract is not boilerplate. An agent-specific schedule attaches to the commercial agreement.
Authentication and authorisation across the boundary
Classical service-to-service authentication (API keys, mutual-TLS, OAuth) applies. The addition for agentic communication is that the sender agent must be authenticated in addition to the sending organisation. A single organisational credential does not distinguish between a benign request from a sales agent and a compromised request from an exploited marketing agent. Per-agent credentials, scoped narrowly, allow the receiver to revoke one agent’s access without disabling the whole integration.
The OWASP Top 10 for LLM Applications and OWASP Agentic AI Threats and Mitigations both catalogue the classes of attack that exploit under-scoped authentication. Source: https://genai.owasp.org/resource/agentic-ai-threats-and-mitigations/. The governance analyst uses the OWASP catalogue as the threat-model checklist for cross-organisational integrations.
Payload sanitisation and trust boundaries
Content flowing between organisations is treated as untrusted. The trust-boundary review asks: what content reaches the agent’s reasoning context from the other organisation, and what sanitisation applies?
Specific defences:
- Content-provenance labels preserved across the boundary.
- Instruction-pattern detection applied to received content (indirect-prompt-injection defence).
- Schema validation on all structured payloads.
- Content-length and size caps to defend against resource exhaustion.
The Greshake et al. USENIX Security 2023 paper on indirect prompt injection documented the breadth of the attack class; cross-organisational deployments are particularly exposed because the content entering the agent’s context originates outside the controlled environment. Source: https://arxiv.org/abs/2302.12173.
Browser-use and computer-use agents
A special class of cross-organisational agent is the browser-use or computer-use agent — a system that operates a general-purpose client (a browser, a desktop) to interact with third-party systems as if a human user were doing so. The user organisation is in effect deploying an agent that acts across the open web, often against systems whose operators have not consented to automated-agent interaction.
Anthropic’s Computer Use (October 2024 release) and OpenAI’s Operator (January 2025 release) are named public examples of the pattern. Both vendors published documentation of containment approaches, user-in-the-loop regimes, and acceptable-use constraints. Sources: https://www.anthropic.com/news/3-5-models-and-computer-use ; https://openai.com/index/introducing-operator/. Google, Meta, and open-source builders on Llama or Mistral models with LangGraph, CrewAI, or custom code build equivalent capabilities.
The governance implications of browser-use agents include:
- Rate and behaviour that resembles scraping or automated interaction. Some counterparty sites prohibit automation in their terms of service; the agent’s use may be a contractual breach whether or not it is technically detectable.
- Credential handling. The agent uses the user’s authenticated session to act on third-party sites. The scope must be constrained (what the agent is authorised to do with each site), and credentials must not leak into the agent’s memory.
- Action scoping. Browser-use agents have access to every action a browser allows, which is effectively unbounded. Application-specific scopes are the engineering task.
- Audit and visibility. The user (the principal) needs to be able to see what the agent did and undo where possible.
Supply-chain agents
A supply-chain agent is one link in a longer chain. An agent of Organisation A calls a tool provided by Organisation B; Organisation B’s tool may itself be backed by an agent of Organisation B that calls systems of Organisation C. The chain can be long, and the governance question is where visibility ends.
Three governance principles apply.
Principle 1 — Chain transparency
Each link in the chain is named in the agent’s configuration. The agent’s governance pack (Article 14) records the external dependencies by organisation, service, and version.
Principle 2 — Chain-upward notification
A failure or change anywhere in the chain must propagate. When Organisation B’s tool changes its behaviour, Organisation A’s deployment is affected, and the notification arrives through the contractual change-notification clause. The reverse direction is also contractually obligated: Organisation A notifies Organisation B of changes in how Organisation A uses Organisation B’s tool.
Principle 3 — Chain-level risk assessment
The agent’s risk register (Article 9) includes supply-chain risks as a category. What is the worst-case failure if any chain component misbehaves? What is the failure-mode coverage of the contractual and technical controls across the chain?
Cross-organisational incident response
When an incident involves a cross-organisational agent, both sides participate in response. The contract’s incident-reporting clause drives the mechanics: who notifies whom, within what window, with what minimum content. The post-incident review is joint, because the lessons span both sides.
A one-sided review after a cross-organisational incident is a governance failure. The failure is often political — “we do not want to share details with the counterparty” — but the consequence is that the joint failure mode is not fully understood on either side and is likely to recur.
Learning outcomes — confirm
A specialist who completes this article should be able to:
- Name the four cross-organisational sub-patterns and the governance emphasis for each.
- Draft the agent-specific schedule to attach to a commercial agreement.
- Evaluate a browser-use or computer-use deployment against the governance implications above.
- Design a supply-chain governance map for an agent with three external dependencies.
Cross-references
EATP-Level-2/M2.4-Art11-Human-Agent-Collaboration-Patterns-and-Oversight-Design.md— practitioner depth on oversight, including cross-organisational operator roles.- Article 4 of this credential — delegation and authority chains (cross-boundary chains).
- Article 8 of this credential — multi-agent systems (A2A protocols across boundaries).
- Article 12 of this credential — regulatory obligations (Articles 16, 23, 24, 25, 26).
Diagrams
- OrganizationalMappingBridge — cross-organisational authority chain from principal organisation → agent platform → third-party destination with named controls.
- MatrixDiagram — 2×2 of “agent is ours” × “counterparty is ours” mapping governance burden.
Quality rubric — self-assessment
| Dimension | Self-score (of 10) |
|---|---|
| Technical accuracy (contractual and protocol descriptions sound) | 10 |
| Technology neutrality (Anthropic Computer Use, OpenAI Operator, Google, Llama/Mistral, multiple frameworks named) | 10 |
| Real-world examples ≥2 (Anthropic Computer Use, OpenAI Operator, Greshake et al.) | 10 |
| AI-fingerprint patterns | 9 |
| Cross-reference fidelity | 10 |
| Word count (target 2,500 ± 10%) | 10 |
| Weighted total | 92 / 100 |