AITL M9.4-Art04 v1.0 Reviewed 2026-04-06 Open Access

Multi-Jurisdictional AI Governance Strategy: Global Baseline + Regional Overlays



COMPEL Body of Knowledge — Cross-Border and Regulatory Strategy Series (Cluster D): Strategic Operating Model for Multi-Jurisdictional AI


Why multi-jurisdictional strategy {#why}

Ten years ago, an AI governance program could be built to the home-country regulator’s expectations and exported on the assumption that the rest of the world would converge. That assumption has collapsed. The global AI regulatory landscape in 2026 is fragmented, extraterritorial, and moving.

Fragmented because every major jurisdiction now has at least one AI-specific instrument in force or imminent: the EU AI Act, US Executive Order 14110 and OMB memoranda, a rising count of US state laws, the UK’s principles-based pro-innovation approach with AI Assurance guidance, Singapore’s MGF and AI Verify, Canada’s AIDA under Bill C-27, Brazil’s PL 2338/23, and China’s Interim Measures for Generative AI plus the Deep Synthesis rules. These encode different philosophies — rights-based, risk-tiered, principles-based, registration-first — and demand different evidence in different forms.

Extraterritorial because most regimes reach beyond their borders. The EU AI Act applies to providers outside the EU if their output is used in the EU. GDPR applies to any processor of EU personal data regardless of location. The Singapore MGF is voluntary but Verify-aligned testing is increasingly a procurement precondition. The Chinese GenAI Measures apply to any generative service offered to users in mainland China. A US operator is a global operator whose AI systems touch every economy where its products, data, and users reside.

Moving because enforcement is arriving. The EU AI Office has begun supervisory correspondence. US state attorneys general are pursuing cases under existing unfair-practice statutes. The Cyberspace Administration of China has deregistered non-compliant generative models. The assumption that AI regulation is “still coming” has been overtaken.

Trying to stand up N parallel programs — one per jurisdiction — produces a combinatorial explosion of controls, evidence, audits, and exceptions. What is needed instead is a single operating model with one global baseline, lightweight regional overlays, and a disciplined regulatory-intelligence process.

Global-baseline + regional-overlay pattern {#pattern}

The design pattern is simple to state and demanding to execute. One baseline, many overlays, one evidence fabric, one council, many committees.

Single management system — ISO 42001. The enterprise operates one AI Management System (AIMS) grounded in ISO/IEC 42001:2023, specifying the policies, roles, risk process, operational controls, and continual-improvement loop for AI across the whole organisation. It is not jurisdiction-specific. It is the backbone.

Single risk catalogue — NIST AI RMF. Risks are enumerated and scored using NIST AI RMF 1.0 and the Generative AI Profile. The four functions — Govern, Map, Measure, Manage — provide the common vocabulary. Every AI use case maps into the same risk catalogue, so the Global AI Council reviews one risk register, not seven.

Regional overlay packs. Each in-scope jurisdiction has a small, versioned overlay pack specifying: (1) additional controls required there, (2) evidence format and language, (3) filings or registrations, (4) data residency expectations, (5) local escalation contacts, and (6) enforcement trigger signals. An overlay pack is lean by design — only the delta from the baseline. This keeps regional overhead proportionate to the actual regulatory difference.
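The pack lends itself to a small typed structure. A minimal Python sketch, where the field names and sample values are illustrative rather than drawn from any standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OverlayPack:
    """Versioned regional overlay: only the delta from the global baseline."""
    jurisdiction: str                # e.g. "EU", "SG"
    version: str                     # e.g. "2026.1"
    triggers: list                   # conditions that bring a system in scope
    delta_controls: list             # controls beyond the baseline
    evidence_format: dict            # artifact -> required format/language
    filings: list                    # registrations and publications
    enforcement_signals: list        # signals that fire the emergency path

# Illustrative EU pack, lean by design: only the delta is recorded.
eu = OverlayPack(
    jurisdiction="EU",
    version="2026.1",
    triggers=["placed on EU market", "output used in EU", "EU personal data"],
    delta_controls=["Annex III classification test", "Art. 9 risk-management system"],
    evidence_format={"technical_documentation": "official EU language"},
    filings=["EU high-risk database registration"],
    enforcement_signals=["AI Office supervisory inquiry"],
)
```

Keeping the pack `frozen` forces every change through a new version, which is what the quarterly refresh cadence assumes.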

One evidence fabric, regionally sharded. Model cards, system cards, DPIAs, conformity assessments, and audit logs live in a single platform, sharded so that jurisdiction-tied artifacts are stored in-region where law requires it. Cross-regional metadata lives in the global index.

One global council, many regional committees, business-unit liaisons. Governance is federated. The Global AI Council owns the baseline; Regional AI Committees own their overlays and approve in-region deployments; Business-unit AI Liaisons translate rules into day-to-day decisions. A new AI system is assessed through a repeatable flow: the baseline applies everywhere; the decision tree identifies which overlays attach; each overlay adds only a minimal delta; and the whole package is visible in one risk register.

Jurisdiction decision tree {#decision-tree}

Every new AI system passes through a four-question decision tree whose answers determine which overlays activate.

Q1 — Where is the system deployed and accessible? Every jurisdiction where the system is offered to users, integrated into business processes, or produces outputs consumed locally. Usually two to six for an enterprise product.

Q2 — Whose personal data does the system process? Data-subject jurisdiction drives GDPR, UK GDPR, LGPD, PIPL, and PIPEDA applicability independently of deployment location — training data, inference inputs, telemetry, user-generated content.

Q3 — Who are the users, and do any belong to protected or sensitive categories? Employees (employment law), minors (child-online-safety), healthcare patients (HIPAA, EU MDR), or protected classes subject to anti-discrimination overlays such as NYC Local Law 144.

Q4 — What is the use case and risk category? Classify against EU AI Act risk tiers (prohibited, high-risk Annex III, limited-risk, minimal), NIST RMF taxonomy, and each overlay’s local categorisation. A credit-scoring model in the EU is Annex III high-risk with full conformity-assessment obligations; in Singapore it triggers MGF testing; in the US it triggers ECOA and state fairness rules.

The output is an applicability matrix — one row per jurisdiction, one column per control family, populated with baseline-only, baseline-plus-overlay, or overlay-prohibited. This matrix is the compliance plan for that AI system and feeds the evidence requirements and filings calendar.
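A hedged sketch of how the four answers could populate such a matrix. The trigger fields and status labels are illustrative, not a normative schema:

```python
def build_applicability_matrix(deployment_regions, data_subject_regions,
                               user_categories, risk_tier, overlay_triggers):
    """Return {jurisdiction: status}, where status is one of
    'baseline-only', 'baseline-plus-overlay', or 'overlay-prohibited'."""
    # Q1 + Q2: union of deployment and data-subject jurisdictions sets scope.
    in_scope = set(deployment_regions) | set(data_subject_regions)
    matrix = {}
    for jur in sorted(in_scope):
        triggers = overlay_triggers.get(jur, {})
        # Q4: some tiers are flatly prohibited in an overlay.
        if risk_tier in triggers.get("prohibited_tiers", []):
            matrix[jur] = "overlay-prohibited"
        # Q3 + Q4: risk tier or a sensitive user category activates the overlay.
        elif (risk_tier in triggers.get("overlay_tiers", [])
              or triggers.get("sensitive_users", set()) & set(user_categories)):
            matrix[jur] = "baseline-plus-overlay"
        else:
            matrix[jur] = "baseline-only"
    return matrix

# Hypothetical trigger tables per overlay pack.
triggers = {
    "EU": {"overlay_tiers": ["high-risk"], "prohibited_tiers": ["prohibited"]},
    "SG": {"overlay_tiers": ["high-risk"], "sensitive_users": {"minors"}},
    "US": {"overlay_tiers": ["high-risk"], "sensitive_users": {"employees"}},
}
matrix = build_applicability_matrix(
    deployment_regions=["EU", "US"], data_subject_regions=["SG"],
    user_categories=["employees"], risk_tier="high-risk",
    overlay_triggers=triggers)
# A high-risk system activates all three overlays.
low = build_applicability_matrix(["EU"], [], [], "minimal", triggers)
# A minimal-risk EU deployment stays baseline-only.
```

The returned dictionary is one row set of the applicability matrix; the control-family columns would hang off each status in a fuller model.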

Regional overlay packs {#overlays}

Each overlay pack is a versioned document with five sections: (1) in-scope triggers, (2) delta controls, (3) evidence format, (4) filings, (5) enforcement signals. The sketches below are summaries; the living packs are maintained by the regional committees.

European Union overlay

Triggers. System placed on EU market, used in EU, output consumed in EU, or processes EU personal data.

Delta controls. EU AI Act Annex III classification test for high-risk; if high-risk, full conformity-assessment package including risk-management system (Art. 9), data governance (Art. 10), technical documentation (Art. 11), record-keeping (Art. 12), transparency (Art. 13), human oversight (Art. 14), accuracy/robustness/cybersecurity (Art. 15), post-market monitoring (Art. 72), serious-incident reporting (Art. 73). GDPR Art. 35 DPIA for personal-data processing. Digital Services Act overlays when operating as platform, intermediary, or very-large online platform.

Evidence format. Technical documentation in an official EU language accepted by the national competent authority; Declaration of Conformity; CE-marking readiness for high-risk AI; GDPR Art. 30 records of processing.

Filings. Registration in EU database for high-risk AI systems (Art. 71); DPIA consultations with national DPA for residual high risk; notified-body engagement for specific Annex III categories.

Enforcement signals. AI Office supervisory inquiry; national market-surveillance authority request; DPA Art. 58 audit notice.

United States overlay — federal and state

Triggers. System used by or sold to the United States federal government; system deployed in states with active AI laws; employment, credit, housing, insurance, or healthcare use cases in the United States.

Federal delta controls. Executive Order 14110 safety and security requirements for dual-use foundation models; OMB Memorandum M-24-10 risk management for federal AI (covers rights-impacting and safety-impacting use cases) plus the companion acquisition memorandum; NIST AI RMF adoption; for certain compute thresholds, reporting obligations to the Department of Commerce; sector-specific rules from FTC, CFPB, EEOC, FDA, HHS.

State delta controls. California — SB 1001 bot-disclosure, AB 2013 training-data transparency, AB 1008 privacy extension, and proposed CPPA automated-decision-making rules. Colorado — the Colorado AI Act (SB 24-205) with obligations for developers and deployers of high-risk AI and consumer notice. New York City — Local Law 144 bias audits for automated employment decision tools. Illinois — BIPA for biometric data, AI interview notice rules. Texas and Utah — consumer-protection AI disclosures.

Evidence format. For federal use, artifacts aligned to NIST AI RMF profiles and OMB reporting templates. For states, bias-audit reports signed by independent auditors (NYC LL 144), consumer disclosures, algorithmic impact assessments (Colorado).

Filings. NYC LL 144 annual bias audit publication; Colorado consumer notices; federal agency AI use-case inventory (for federal use).

Enforcement signals. FTC enforcement action; state attorney general inquiry; federal-agency AI impact assessment request.

United Kingdom overlay

Triggers. AI system used in the UK, processing UK personal data, or sold to UK public sector.

Delta controls. Regulator-led principles from the AI Regulation White Paper applied via sector regulators (ICO, FCA, CMA, MHRA, Ofcom). Algorithmic Transparency Recording Standard (ATRS) for public-sector systems. DPIA under UK GDPR for high-risk processing. AI Assurance guidance published by CDEI and the AI Safety Institute for frontier models.

Evidence format. ATRS records published to the public registry for public-sector use; ICO DPIA documentation; sector-specific evidence matching the regulator’s guidance (for example FCA Consumer Duty evidence for financial-services AI).

Filings. ATRS entry for in-scope public-sector systems; voluntary AI Standards Hub alignments.

Enforcement signals. ICO enforcement notice; sector regulator consumer-duty or fair-treatment inquiry.

Singapore overlay

Triggers. AI deployed to Singapore users; procurement preconditions requiring MGF alignment; public-sector AI in Singapore.

Delta controls. Alignment with the Model AI Governance Framework — second edition plus Generative AI addendum — including internal governance structures, human-AI decision-making model, operations management, stakeholder communication. AI Verify testing for in-scope systems. PDPA compliance for personal data.

Evidence format. AI Verify test reports; MGF self-assessment; PDPA records.

Filings. No mandatory registration today; AI Verify reports are shared with procurement counterparties.

Enforcement signals. PDPC enforcement action on data; IMDA guidance updates; procurement-driven testing requests.

Brazil overlay

Triggers. AI offered in Brazil or processing Brazilian personal data.

Delta controls. PL 2338/23 once in force — risk classification, impact assessments, transparency, human oversight, governance obligations proportional to risk. LGPD Art. 20 automated-decision rights; DPO appointment; LGPD records of processing.

Evidence format. Algorithmic impact assessments (AIIA) in Portuguese; LGPD records.

Filings. ANPD registrations for data transfers; forthcoming PL 2338 registrations for high-risk systems.

Enforcement signals. ANPD inspection; consumer-protection (Senacon) inquiry.

Canada overlay

Triggers. AI deployed to Canadian users, federally regulated sectors, or processing Canadian personal data.

Delta controls. AIDA (Bill C-27) — obligations for high-impact systems including measures to mitigate risks of harm and biased output, record-keeping, public plain-language descriptions, and designated-AI-Commissioner oversight. PIPEDA and provincial privacy laws. Federal Directive on Automated Decision-Making (Treasury Board) for federal-government AI with Algorithmic Impact Assessment levels I–IV.

Evidence format. Treasury Board AIA questionnaire for federal use; AIDA general records; PIPEDA privacy assessments.

Filings. AIDA public plain-language descriptions once in force; AIA publication for federal systems.

Enforcement signals. AI and Data Commissioner inquiry; OPC privacy investigation; Treasury Board AIA review.

China overlay

Triggers. Generative AI services offered in mainland China; deep-synthesis services; algorithmic recommendation services; AI processing data of individuals in China.

Delta controls. Interim Measures for Generative AI Services — content safety, training-data legitimacy, security assessments, user identity verification, content labelling. Deep Synthesis Provisions — labelling, consent for face/voice, provider registration. Algorithmic Recommendation Provisions — filings in the CAC algorithm registry, user opt-out, age-appropriate design. PIPL for personal data with cross-border transfer limits. Data Security Law and national-security review for important data.

Evidence format. Security self-assessment reports submitted to CAC; algorithm filing documentation in Simplified Chinese; training-data inventories; labelling mechanisms.

Filings. CAC algorithm registry entries; security-assessment filings; generative-AI filings; cross-border data-transfer security assessment where thresholds are met.

Enforcement signals. CAC take-down notice; model deregistration; Ministry of Industry and Information Technology guidance.

Data residency and evidence localisation {#data-residency}

The where of evidence matters as much as the what. Three residency archetypes cover most enterprise cases.

Hard residency. Artifacts must be stored and processed inside the jurisdiction. China training data for registered generative models is hard-resident; EU AI Act technical documentation for high-risk systems must be retrievable from the EU and producible in an official EU language (operationally hard-resident).

Copy-in-region residency. A usable copy must reside in-region while the authoritative artifact lives elsewhere. GDPR Art. 30 records, most DPIA documentation, and audit logs follow this pattern where the law permits transfers under an adequacy decision, Standard Contractual Clauses, or Binding Corporate Rules.

Access residency. Artifacts can be stored globally as long as they are rapidly accessible to local authorities in the local language. Singapore AI Verify reports, UK ATRS entries, and Canada AIA publications follow this pattern.

The evidence platform encodes the rule in metadata — every artifact carries a residency tag (hard/copy/access), a region tag, and a language-availability flag. The platform refuses writes that would violate the tag and flags language gaps before regulator deadlines. Storage is sharded regionally: the EU shard in EU cloud regions with EU-resident operators; the China shard on an in-country provider with local entity; US and global shards cover the remainder. Cross-region metadata is the thin global layer that lets the Global AI Council see the whole estate in one place.
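A toy illustration of the tag-enforcement behaviour described above; the class, method, and tag names are invented for this sketch:

```python
RESIDENCY_TAGS = {"hard", "copy", "access"}

class EvidenceShard:
    """Shard-aware store that refuses writes violating a residency tag."""

    def __init__(self, shard_region):
        self.shard_region = shard_region
        self.artifacts = {}

    def put(self, artifact_id, residency, region, languages):
        if residency not in RESIDENCY_TAGS:
            raise ValueError(f"unknown residency tag: {residency}")
        # Hard-resident artifacts may only land on their own region's shard.
        if residency == "hard" and region != self.shard_region:
            raise PermissionError(
                f"{artifact_id} is hard-resident in {region}; "
                f"this shard serves {self.shard_region}")
        self.artifacts[artifact_id] = {
            "residency": residency, "region": region, "languages": languages}

    def language_gaps(self, required_language):
        """Flag artifacts missing a regulator's language before deadlines."""
        return [aid for aid, meta in self.artifacts.items()
                if required_language not in meta["languages"]]

eu_shard = EvidenceShard("EU")
eu_shard.put("techdoc-001", residency="hard", region="EU", languages=["en"])
gaps = eu_shard.language_gaps("de")  # techdoc-001 has no German version yet
```

Copy- and access-resident artifacts pass the write check on any shard; only the hard tag binds storage location, matching the three archetypes above.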

Governance body model {#governance}

Federated governance works only when responsibility is unambiguous at each layer.

Global AI Council. Monthly; chaired by the Chief AI Officer or Chief Risk Officer; members include Legal, Privacy, Security, Ethics, HR, and business-unit representation. Owns the baseline (ISO 42001 AIMS, NIST AI RMF risk register), approves material policy changes, arbitrates cross-regional conflicts, and owns the board-level reporting line.

Regional AI Committees. One per in-scope jurisdiction or cluster (for example an EU committee, a North America committee for US+Canada). Monthly; includes regional Legal, Privacy, and business leadership plus the local DPO where required. Owns the overlay pack, approves in-region deployments, signs off on regional filings, maintains regulator relationships, and escalates unresolvable conflicts upward.

Business-unit AI Liaisons. One per business unit; named individual; operational role. Translates baseline and overlay rules into day-to-day decisions, maintains the unit-level AI use-case inventory, and routes applicability questions to the relevant Committee or Council.

Line-of-defence overlay. These bodies sit inside a three-lines model. First line is product and engineering. Second line is the AI governance function (Council + Committees + Liaisons). Third line is Internal Audit, empowered to audit baseline and any overlay. External assurance (ISO 42001 certification, AI Verify testing, notified-body conformity assessment) is the fourth layer where applicable.

A common failure mode: setting up regional committees without empowering them. If the Global Council second-guesses every regional approval, the model collapses into a bottleneck. The design rule is Council owns baseline; Committees own regions; conflicts ladder cleanly.

Change-detection for regulatory updates {#change-detection}

Regulation is moving; the program must move with it. The enterprise runs a regulatory-radar process on three cadences.

Weekly scan. A structured sweep of primary regulator feeds — EU AI Office, national DPAs, FTC, CFPB, state AGs, ICO, CMA, IMDA, PDPC, CAC, ANPD, Canada’s AI and Data Commissioner. Semi-automated with a regulator-feed aggregator and AI-summariser; human review for materiality. Legal intelligence feeds — Bloomberg Law, Westlaw, Covington Regulatory Watch, Latham Global AI Tracker — supplement the primary scan.

Monthly synthesis. The Regulatory Intelligence Lead produces a briefing for the Global Council: what changed, where, with what likely impact on which AI systems. Items are rated informational, monitor, or action-required. Action-required items are assigned to a Regional Committee with a remediation date.

Quarterly overlay refresh. Each Regional Committee formally refreshes its overlay pack — new rules incorporated, superseded rules retired, applicability-matrix template updated. Business units are notified of material changes and in-flight AI systems are rescreened.

Emergency path. When a regulator issues a final rule, a precedent-setting enforcement action, or binding guidance that changes risk posture, an out-of-cycle overlay update fires within five business days. The Global Council is briefed within ten. Emergency updates are rare (six to ten per year), but they are the events that create or prevent a breach.

The program tracks one headline metric: regulatory latency — median days from regulator publication to overlay update. Best-in-class programs hold to thirty days.
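The latency metric is a straightforward median over publication-to-update intervals; a minimal sketch with illustrative dates:

```python
from datetime import date
from statistics import median

def regulatory_latency_days(events):
    """Median days from regulator publication to overlay-pack update.

    events: iterable of (published, overlay_updated) date pairs.
    """
    return median((updated - published).days for published, updated in events)

# Three hypothetical rule changes and their overlay-update dates.
events = [
    (date(2026, 1, 5), date(2026, 1, 28)),   # 23 days
    (date(2026, 2, 2), date(2026, 3, 10)),   # 36 days
    (date(2026, 3, 1), date(2026, 3, 19)),   # 18 days
]
latency = regulatory_latency_days(events)    # median of 18, 23, 36 -> 23
```

A median of 23 days sits inside the 30-day best-in-class target even though one individual update overran it, which is exactly why the metric uses the median rather than the worst case.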

Conflict resolution {#conflicts}

Incompatibilities between jurisdictions are predictable. A three-step ladder handles them.

Step one — technical interpretation. Most apparent conflicts dissolve under careful reading. GDPR Art. 22 automated-decision rules and US employment laws often look incompatible but turn out to demand similar human-review mechanics. Send the perceived conflict to a combined legal-technical working group before declaring it real. Document the interpretation in the evidence fabric.

Step two — regional segmentation. If the conflict is real, segment the AI system by region — different model versions, data pipelines, feature flags, retention windows. A recommendation engine can serve one content-filter configuration to EU users and a different one to US users. A generative model can refuse categories of prompts in China that it answers elsewhere. Segmentation is an engineering cost, but it is finite.

Step three — risk-accepted exception. Where segmentation is infeasible, the conflict becomes strategic: which regulator’s expectation do we meet, and what is the residual risk of non-compliance with the other? A Global AI Council decision with board visibility, time-bounded, logged in the exception register, reviewed quarterly. Many enterprises set an explicit exception budget at board level.

Governance hygiene. Every conflict-resolution decision is documented: the perceived conflict, the technical read, the segmentation design (or the reason it is infeasible), the residual risk, and the approval. Over time this record becomes the program’s most valuable artifact — the difference between an audit that ends with clarification and one that ends with enforcement.
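The ladder can be encoded as a simple routing function whose output doubles as the ledger record; a sketch with illustrative field names:

```python
def resolve_conflict(conflict):
    """Walk the three-step ladder and return the ledger record.

    `conflict` carries the outcome of each step's review as booleans
    (hypothetical fields, populated by the legal-technical working group).
    """
    record = {"perceived_conflict": conflict["description"]}
    # Step one: most apparent conflicts dissolve under careful reading.
    if not conflict["real_after_technical_read"]:
        record["resolution"] = "technical-interpretation"
    # Step two: a real conflict is segmented by region where feasible.
    elif conflict["segmentation_feasible"]:
        record["resolution"] = "regional-segmentation"
    # Step three: otherwise it becomes a risk-accepted exception.
    else:
        record["resolution"] = "risk-accepted-exception"
        record["requires"] = ["Global AI Council approval", "board visibility",
                              "time bound", "quarterly review"]
    return record

rec = resolve_conflict({
    "description": "GDPR Art. 22 vs US employment-law review mechanics",
    "real_after_technical_read": False,
    "segmentation_feasible": True,
})
# Resolved at step one: the perceived conflict dissolves on a technical read.
```

Because every path emits a record, the hygiene requirement above falls out of the design: no conflict can be resolved without leaving a ledger entry.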

COMPEL stage mapping {#compel-mapping}

COMPEL stage — Multi-jurisdictional activity

  • Calibrate — Inventory in-scope jurisdictions · identify applicable regulators · baseline regulatory-readiness assessment
  • Organize — Stand up Global AI Council, Regional Committees, BU Liaisons · commission overlay packs · assign Regulatory Intelligence Lead
  • Model — Apply decision tree to each use case · build applicability matrix · design regional segmentation where needed
  • Produce — Build with residency-aware evidence fabric · regional sharding · language-ready documentation templates
  • Evaluate — Run AI Verify, notified-body, and bias-audit testing per overlay · internal audit across regions
  • Learn — Quarterly overlay refresh · enforcement-signal review · conflict-ledger post-mortems

Evidence artifacts {#evidence}

  • Overlay pack per jurisdiction (versioned)
  • Applicability matrix per AI system
  • Global AI risk register (NIST AI RMF taxonomy)
  • ISO 42001 AIMS manual and statement of applicability
  • Residency-tagged evidence fabric inventory
  • Regulator filings register (EU database, CAC registry, NYC LL 144 audits, ATRS entries, AIA records)
  • Conflict-resolution decision ledger
  • Regulatory-radar weekly scan and monthly synthesis archive
  • Exception register with board-visible counters
  • Cross-border data-transfer basis register (adequacy, SCCs, BCRs, security assessments)
  • Language-availability matrix of technical documentation

Metrics {#metrics}

  • Regulatory latency — median days from regulator publication to overlay-pack update. Target < 30 days for material rules, < 7 for emergency.
  • Applicability-matrix completeness — percentage of in-production AI systems with a current matrix. Target 100% for material systems.
  • Overlay-pack freshness — days since last quarterly refresh per region. Target < 95 days.
  • Cross-regional evidence coverage — percentage of in-scope artifacts satisfying residency and language requirements. Target 100% for hard-resident.
  • Exception register size — count and aging of live risk-accepted exceptions. Reviewed quarterly by the Council.
  • Enforcement-signal lead time — hours from a regulator enforcement action in-region to the start of internal review. Target < 48 hours.
  • Regional-committee throughput — median days from deployment request to regional approval. Target < 10 business days for baseline-only use cases.

Risks if skipped {#risks}

Enterprises that attempt to run AI globally without a multi-jurisdictional operating model encounter a predictable sequence of failure modes:

  • Combinatorial explosion. N parallel programs mean N audit cycles, N evidence platforms, N vocabulary sets.
  • Extraterritorial surprise. A US-shipped product triggers EU AI Act obligations no one owned; a generative feature is deemed a generative service under the Chinese Interim Measures with no filing in place.
  • Evidence mismatch. Artifacts exist but not in the required form, language, or location when a regulator asks. Work done but not provable.
  • Conflict paralysis. A regional incompatibility halts a launch for months because no conflict-resolution ladder exists.
  • Board exposure. Regulators increasingly call the board. Without a global council view, directors cannot answer basic questions about the AI estate’s regulatory posture.
  • Loss of market access. Some jurisdictions now condition market access on filings or testing — once the door closes, opening it again is months of work.

The fix is neither exotic nor expensive: one baseline, thin overlays, clear councils, disciplined radar. Leaders that install it in 2026 will operate AI globally in 2028 with confidence; laggards will spend 2028 unwinding incidents.

How to cite

COMPEL FlowRidge Team. (2026). “Multi-Jurisdictional AI Governance Strategy: Global Baseline + Regional Overlays.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-d4-multi-jurisdictional-ai-governance-strategy/

Frequently Asked Questions

Why not just comply with the strictest jurisdiction and call it a day?
Strict-plus is a useful floor, but it fails on three counts. First, some requirements are not strictness-ordered — China mandates algorithm filings that EU law does not, and the EU bans practices that Singapore permits. Second, strict-plus creates unnecessary cost in low-risk regions. Third, evidence must still be produced in the form each regulator expects, in the language each regulator reads. A baseline-plus-overlay model is cheaper and more defensible.
How many overlays does an enterprise actually need?
Most multinationals need five to eight active overlays — EU, United States (federal plus a short list of states), United Kingdom, Singapore, Canada, Brazil, and China cover about ninety percent of the global AI exposure for Fortune 500 operators. Australia, Japan, South Korea, UAE, and India are the most common next additions.
Who owns the baseline and who owns the overlays?
The Global AI Council owns the baseline — one AI management system (AIMS) grounded in ISO 42001 and NIST AI RMF. Each Regional AI Committee owns its overlay pack — the delta controls, evidence formats, languages, and filings specific to that jurisdiction. Business-unit AI Liaisons consume both layers and route exceptions upward.
What happens when two jurisdictions have incompatible requirements?
Route the conflict through the conflict-resolution ladder. Start with a narrow technical read — often the conflict dissolves under careful interpretation. If the conflict is real, segment the deployment by region (different model versions, different data flows). If segmentation is impossible, escalate to the Global AI Council for a risk-accepted exception with board visibility.
How fast does the regulatory radar need to be?
Weekly scan, monthly synthesis, quarterly overlay refresh. Emergency updates when a regulator publishes a final rule or when enforcement action signals a new interpretation. Most breach-inducing gaps come not from missing the rule but from missing the interpretive guidance that follows it six to twelve months later.
Where must evidence physically reside?
EU personal data and AI Act technical documentation for EU-deployed systems must be retrievable from within the EU and producible to competent authorities in the local language. China requires in-country storage of training data for generative models filed under the Interim GenAI Measures. Canada, Brazil, and Singapore have softer residency expectations that are satisfied by accessible copies plus documented cross-border transfer basis.