This article provides governance professionals with the design patterns, structural options, and implementation guidance for building a multi-jurisdictional operating model.
The Operating Model Challenge
An organisation deploying AI across the European Union, United States, United Kingdom, Singapore, India, and the UAE faces a fundamental structural question: should each jurisdiction have its own governance programme, or should one global programme cover all jurisdictions?
The answer, almost universally, is neither. Fully decentralised governance creates duplication, inconsistency, and an inability to learn across jurisdictions. Fully centralised governance creates a bottleneck, lacks local expertise, and cannot respond to jurisdiction-specific requirements with adequate nuance.
The solution is a federated operating model: a strong central governance function that sets global standards, maintains the governance platform, and provides cross-jurisdictional intelligence — complemented by local governance capabilities that interpret and apply those standards within each jurisdiction’s regulatory context.
Three Operating Model Patterns
Pattern 1: Hub-and-Spoke
A central governance hub establishes the governance framework, maintains the policy library, operates the governance platform, and provides expertise centres for technical AI governance, regulatory intelligence, and stakeholder reporting. Jurisdictional spokes apply the global framework locally, manage jurisdiction-specific compliance requirements, interface with local regulators, and escalate issues that require global governance decisions.
When to use: Organisations with a dominant home jurisdiction and smaller operations in other jurisdictions. The hub is typically located in the home jurisdiction and has the deepest governance capability.
Strengths: Clear authority structure, efficient use of central expertise, consistent global standards.
Weaknesses: Can be perceived as imposing the home jurisdiction’s approach on others. Local spokes may lack authority and resources. Hub can become a bottleneck.
Pattern 2: Federated Network
Multiple regional governance centres operate with significant autonomy within a shared governance framework. A global governance board provides coordination, standard-setting, and dispute resolution, but does not dictate operational decisions to regional centres.
When to use: Global organisations with substantial operations in multiple regions, each with distinct regulatory environments. Common in organisations that operate across the EU, the Americas, and Asia-Pacific with significant scale in each.
Strengths: Respects regional regulatory and cultural differences. Distributes governance capacity closer to deployment contexts. Enables parallel operations without bottlenecks.
Weaknesses: Risk of divergence between regional approaches. Requires strong coordination mechanisms. More expensive than hub-and-spoke due to multiple capability centres.
Pattern 3: Hybrid Centre of Excellence
A central governance centre of excellence provides frameworks, tools, training, and advisory services. Local governance responsibilities are embedded within business units and regional teams. The centre of excellence does not have direct authority over local governance but influences through standards, training, and quality assurance.
When to use: Organisations where governance is embedded in operational teams rather than centralised. Common in organisations transitioning from ad hoc to structured governance, where imposing a centralised model would create resistance.
Strengths: Low organisational disruption. Governance embedded close to AI development teams. Scalable through training and tools.
Weaknesses: Weakest authority model — centre of excellence can be ignored. Inconsistency across teams. Relies heavily on organisational culture rather than structural enforcement.
Key Components of the Operating Model
1. Authority and Decision Rights
Define explicitly who makes which governance decisions:
Global decisions (held centrally): Global governance framework and policy, risk classification methodology, evidence catalogue and quality standards, governance platform standards, incident escalation thresholds, board and stakeholder reporting.
Regional decisions (delegated to jurisdictional teams): Jurisdiction-specific compliance requirements, local regulatory engagement, local stakeholder consultation, jurisdiction-specific incident response, local training and capability building.
Shared decisions (joint between global and local): Risk classification of systems operating across jurisdictions, cross-border data flow governance, multi-jurisdictional incident response, resource allocation across governance functions.
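The decision-rights allocation above is, in effect, a lookup table, and encoding it as one makes ambiguity visible. The sketch below is a minimal illustration in Python; the decision names and the escalate-to-shared default are illustrative assumptions, not part of any standard.

```python
from enum import Enum

class Authority(Enum):
    GLOBAL = "global"      # held centrally
    REGIONAL = "regional"  # delegated to jurisdictional teams
    SHARED = "shared"      # joint between global and local

# Decision-rights register mirroring the three lists above
# (decision names are illustrative placeholders).
DECISION_RIGHTS = {
    "governance_framework_and_policy": Authority.GLOBAL,
    "risk_classification_methodology": Authority.GLOBAL,
    "incident_escalation_thresholds": Authority.GLOBAL,
    "board_reporting": Authority.GLOBAL,
    "local_compliance_requirements": Authority.REGIONAL,
    "local_regulatory_engagement": Authority.REGIONAL,
    "local_incident_response": Authority.REGIONAL,
    "cross_jurisdiction_risk_classification": Authority.SHARED,
    "cross_border_data_flows": Authority.SHARED,
    "multi_jurisdiction_incident_response": Authority.SHARED,
}

def decision_owner(decision: str) -> Authority:
    """Return who holds a decision; unlisted decisions escalate to joint review."""
    return DECISION_RIGHTS.get(decision, Authority.SHARED)
```

Defaulting unlisted decisions to shared review (rather than silently treating them as local) is one way to ensure that gaps in the register surface as escalations rather than as unilateral decisions.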
2. Governance Forum Structure
Establish forums at multiple levels:
Global AI Governance Committee. Meets quarterly. Sets global policy, reviews portfolio risk, approves major governance decisions, and reports to the board. Membership: Chief AI Officer or equivalent, regional governance leads, General Counsel, CTO, CISO, and independent advisor.
Regional Governance Boards. Meet monthly. Apply global policy to regional context, manage local regulatory compliance, review regional incidents, and escalate issues to the global committee. Membership: regional governance lead, local legal counsel, regional business leaders, and local technical leads.
System-Level Governance Reviews. Conducted per system at key lifecycle gates. Apply the governance framework to individual AI systems. Membership: system owner, governance reviewer, technical lead, and domain expert.
3. Cross-Jurisdictional Intelligence Sharing
The operating model must enable learning across jurisdictions:
Regulatory intelligence hub. A central repository of regulatory developments across all operating jurisdictions, curated by the global governance function and accessible to all regional teams.
Incident learning network. Incidents and near-misses from one jurisdiction should be shared (appropriately anonymised) across the network so that all jurisdictions can learn from them.
Best practice exchange. When one jurisdiction develops a particularly effective approach to a governance challenge (e.g., a streamlined stakeholder consultation process, an efficient evidence collection method), it should be captured and shared across the network.
4. Talent and Capability Model
Define the governance capability requirements at each level:
Global team capabilities: Framework design and evolution, regulatory horizon scanning across all jurisdictions, governance platform management, board and stakeholder reporting, cross-jurisdictional coordination, and quality assurance.
Regional team capabilities: Local regulatory interpretation and compliance, local stakeholder engagement, jurisdiction-specific risk assessment, local incident management, and regulatory relationship management.
Embedded capabilities (within AI development teams): Governance awareness and first-line compliance, evidence creation (model cards, impact assessments, fairness evaluations), and issue escalation to governance teams.
5. Technology and Platform
The governance platform must support multi-jurisdictional operations:
Jurisdiction-aware system registry. Each AI system is tagged with its operating jurisdictions, enabling automatic identification of applicable requirements.
Configurable requirement sets. The platform can apply different requirement sets based on jurisdiction, risk tier, and lifecycle stage — without requiring separate platform instances per jurisdiction.
Multi-language support. Governance artefacts, reports, and interfaces must be available in the languages of all operating jurisdictions.
Audit trail with jurisdictional context. Every governance action is logged with the jurisdiction context, enabling jurisdiction-specific audit response.
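The three platform capabilities above — jurisdiction tagging, configurable requirement sets, and jurisdiction-aware audit logging — can be sketched together in a few lines. This is a minimal illustration, not a platform implementation; the jurisdiction codes, risk tiers, and requirement names are assumed examples.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Requirement sets keyed by (jurisdiction, risk tier) — illustrative entries only.
REQUIREMENTS = {
    ("EU", "high"): ["conformity_assessment", "fundamental_rights_impact_assessment"],
    ("EU", "limited"): ["transparency_notice"],
    ("UK", "high"): ["dpia", "model_card"],
    ("SG", "high"): ["model_card", "fairness_evaluation"],
}

@dataclass
class AISystem:
    name: str
    risk_tier: str
    jurisdictions: list  # operating jurisdictions the system is tagged with

@dataclass
class Registry:
    audit_log: list = field(default_factory=list)

    def applicable_requirements(self, system: AISystem) -> dict:
        """Resolve the requirement set per jurisdiction and log each lookup."""
        result = {}
        for j in system.jurisdictions:
            reqs = REQUIREMENTS.get((j, system.risk_tier), [])
            result[j] = reqs
            # Every action carries jurisdictional context for audit response.
            self.audit_log.append({
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "system": system.name,
                "jurisdiction": j,
                "action": "requirements_resolved",
                "requirements": reqs,
            })
        return result
```

For example, registering a high-risk system tagged with `["EU", "UK"]` and calling `applicable_requirements` yields a per-jurisdiction requirement map and two audit entries, one per jurisdiction — a single registry instance serving all jurisdictions, rather than a platform instance per jurisdiction.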
Implementation Roadmap
Phase 1: Assessment (Months 1–2)
Map the current state: In how many jurisdictions does the organisation deploy AI? Which operating model pattern best fits the organisation's structure? What governance capabilities exist at global and local levels? What are the critical gaps?
Phase 2: Design (Months 2–4)
Design the operating model: select the pattern, define authority and decision rights, design the forum structure, specify capability requirements, and document the target operating model.
Phase 3: Foundation (Months 4–8)
Build the foundation: establish the global governance committee, appoint regional governance leads, deploy the governance platform with multi-jurisdictional configuration, and create the core policy library.
Phase 4: Operationalisation (Months 8–14)
Operationalise the model: conduct governance reviews for all high-risk systems under the new model, establish regulatory intelligence sharing, run cross-jurisdictional incident response exercises, and begin governance reporting at all levels.
Phase 5: Maturation (Months 14+)
Refine and mature: measure operating model effectiveness (governance throughput, compliance posture, stakeholder satisfaction), identify and address friction points, and evolve the model as the regulatory landscape and organisational AI portfolio change.
The operating model is never finished — it evolves continuously as jurisdictions add or change regulations, the organisation enters new markets, and the AI portfolio grows. Building adaptability into the model from the start is more important than perfecting the initial design.
This article is part of the COMPEL Body of Knowledge v2.5 and supports the AI Transformation Governance Professional (AITGP) certification.