COMPEL Body of Knowledge — Regulatory Bridge Series Cluster A Flagship Article — Multi-Jurisdictional Harmonization
Why harmonization matters {#why}
An enterprise that ships AI-enabled products or services across more than two jurisdictions is already operating inside a regulatory patchwork. A consumer-lending model deployed in the EU triggers the EU AI Act's Annex III high-risk obligations. A hiring tool offered to NYC employers triggers Local Law 144's AEDT bias-audit and candidate-notice rules; the same tool scoring Colorado employees triggers Colorado SB 205's reasonable-care and impact-assessment obligations. A model retrained on EU data and shipped to Brazil triggers PL 2338/23 registration. And the foundation model underneath all of these triggers the EU AI Act Article 55 GPAI obligations, the US EO 14110 reporting threshold, and China's Interim GenAI Measures if any output reaches users in China.
Running ten parallel compliance programs produces five predictable failures:
- Evidence duplication at unsustainable cost. A single impact assessment is written ten different ways, once per regulator. Documentation teams spend 40 to 60 percent of cycles reformatting rather than improving controls.
- Contradictory implementation. Regulation A says “disclose bias audit results publicly.” Regulation B says “protect audit results as confidential business information.” Teams resolve these with ad-hoc workarounds that neither auditor accepts.
- Operating-model fragmentation. Risk registers, incident logs, and model inventories fork per jurisdiction. The board sees ten different AI risk views and cannot form a single picture.
- Stale controls. Every regulation evolves. Without a shared baseline, every update triggers a ten-program change cycle.
- Market-access failure. Missed deadlines block revenue. Enforcement fines compound (EU AI Act Article 99 alone carries fines up to EUR 35M or 7% global turnover).
Harmonization solves these failures by designing the compliance operating model from first principles around a single control library, a single evidence portfolio, and a single governance operating model. Regulator-specific deltas become overlays, not forks.
Jurisdiction comparison matrix {#jurisdictions}
The table below maps ten major jurisdictions against eight obligation types. Read it horizontally to see what a given regulator expects; read it vertically to see how one obligation varies across regulators. The matrix is the starting point for designing the control library in the next section.
| Jurisdiction / Regulation | Risk classification | Transparency to users | Technical documentation | Human oversight | Post-market monitoring | Data governance | Evidence retention | Maximum fines |
|---|---|---|---|---|---|---|---|---|
| EU AI Act (Reg 2024/1689) | 4-tier: unacceptable / high (Annex III) / limited / minimal + separate GPAI tier | Art. 50: AI-interaction disclosure, deepfake labelling, emotion-recognition notice | Art. 11 + Annex IV: full technical file, 9 mandated sections | Art. 14: design for effective oversight, right to intervene, clear interface | Art. 72: post-market monitoring plan; Art. 73: serious-incident reporting within 15 days | Art. 10: training / validation / testing datasets, bias mitigation, documented provenance | Art. 18: 10 years after last placement on market | EUR 35M or 7% global turnover (prohibited practices); EUR 15M or 3% (high-risk breaches) |
| US Federal — EO 14110 + OMB M-24-10 | Purpose-based: rights-impacting / safety-impacting per M-24-10; 10^26 FLOPs GPAI threshold | M-24-10: public inventory of federal AI use cases, notices to affected individuals | Dual-use model reports to Commerce; AI impact assessments per M-24-10 | M-24-10: human in loop for rights-impacting uses in federal agencies | M-24-10: ongoing monitoring with documented metrics | EO 14110 §10.1(b): data provenance for synthetic content | Agency records schedules (typically 3 to 7 years); indefinite for safety-critical | No direct civil penalty (pre-federal-legislation); procurement disqualification and OIG findings |
| California — AB 2013 + SB 1047 residuals | AB 2013: GenAI training-data disclosure for any model; SB 942 / successor: provenance watermarking | SB 942: AI-provenance disclosures for generative output; AB 2013: training-data summaries | AB 2013: high-level training-data documentation (sources, licensing, PII handling) | Sector-specific (insurance, credit, healthcare) | Under development in CPPA ADMT regulations | AB 2013: dataset provenance and PII summaries mandatory | 5 years minimum; CPPA ADMT proposes 7 | Civil penalties up to USD 25,000 per violation (AB 2013); higher under sectoral laws |
| Colorado AI Act (SB 205, eff. Feb 2026) | Consequential decisions: employment, education, finance, housing, essential services, government, healthcare, insurance, legal | Pre-decision notice to consumers; post-decision explanation and appeal path | Annual impact assessment per high-risk AI; developer documentation to deployers | Reasonable care duty; documented review of adverse decisions | Impact assessment updated within 90 days of material change | Bias-risk assessment across protected classes | 3 years minimum for impact assessments | Attorney General enforcement; CUPA penalties up to USD 20,000 per violation |
| New York City — LL 144 (AEDT) | Scope: automated employment decision tools used for hiring or promotion | Candidate notice 10 business days before use; data-type disclosure | Published summary of independent bias audit | Not mandated directly (focus is bias, not oversight design) | Bias audit must be re-run at least annually | Categories of data used must be disclosed | Bias audit results posted publicly for 6 months minimum | Civil penalty USD 500 first violation, up to USD 1,500 subsequent; per-candidate per-day |
| UK — AI Regulation White Paper + AI Bill (draft) | Context-based, 5 cross-sector principles (safety, transparency, fairness, accountability, contestability); regulator-led | Principle 2: transparency appropriate to context; ICO / Ofcom / FCA issue sector rules | Principle 4: documented accountability; sector regulators set artifact requirements | Principle 5: contestability and redress; human review for high-impact automated decisions | Sector-regulator driven; AISI testing for frontier models | Principle 3: fairness including dataset bias assessment | Sector-specific (FCA 5 years, ICO 6 years for DPIAs) | Currently via existing regulators: ICO up to GBP 17.5M or 4% turnover; draft AI Bill proposes dedicated regime |
| Singapore — Model AI Governance Framework (MGF) + GenAI Framework | Voluntary, risk-based matrix: severity × probability across 4 tiers | MGF: explicit disclosure when AI is material to decision; veracity labels for GenAI | AI Verify Foundation toolkit: documented testing across 11 trustworthy-AI dimensions | MGF §3: human-in / on / out-of-loop spectrum based on risk tier | MGF §4: ongoing monitoring; IMDA GenAI eval sandboxes | MGF §2.2: dataset curation and quality controls | 5 years recommended; PDPC sector rules apply | No statutory AI penalties; PDPA fines up to SGD 1M or 10% annual turnover; sector-specific penalties |
| Brazil — PL 2338/23 (AI Bill, Senate-approved 2024) | 4-tier: prohibited / high / significant / low; sector-specific for public authorities | Right to information about AI use; explanation of automated decisions affecting rights | Algorithmic impact assessment (AIA) for high-risk, registered with ANPD / SIA | Meaningful human review for high-risk decisions affecting rights | Continuous monitoring and incident reporting to regulator | Non-discrimination and data quality obligations; LGPD integration | 5 years; 10 for high-risk | Up to 2% of Brazilian revenue, max BRL 50M per violation; daily fines up to BRL 1M |
| Canada — AIDA (Bill C-27, in parliamentary process) | High-impact systems: scope defined by regulation; general-purpose AI tier under amendments | Plain-language description of high-impact system publicly available | Documentation of design, training data, performance testing | Required measures to prevent biased output and monitor harms | Ongoing monitoring; notification of material harm | Anonymized or de-identified data use requirements | Retention per federal records guidance | Administrative penalties up to CAD 10M or 3% global turnover; criminal offences up to CAD 25M |
| China — Interim GenAI Measures (Aug 2023) + Algorithm Registry | Generative AI: public-facing services regulated; deep-synthesis and algorithmic recommendation covered separately | Explicit labelling of AI-generated content; clear identification to users | Security assessment filing with CAC prior to launch; algorithm filing under Algorithm Registry | Developer accountability for content; takedown on unlawful content | Mandatory reporting of security incidents; content moderation logs | Training data legality, representativeness, and IP compliance | 6 months of user logs minimum; 3 years of safety assessments | Warning, rectification orders, service suspension, fines under Cybersecurity Law and Data Security Law up to RMB 10M or 5% revenue |
Three structural truths emerge:
- Documentation artifacts overlap by 70 to 85 percent. Model cards, data sheets, impact assessments, and monitoring plans satisfy most obligations across jurisdictions with local sections appended.
- Transparency formats are the biggest divergence. Disclosure content is similar; form, audience, timing, and retention differ sharply (NYC LL 144 requires public web posting; EU AI Act Article 50 requires interactive disclosure at the point of use).
- Enforcement posture drives prioritization. Hard-fine jurisdictions (EU, Canada, Brazil, China) dictate baseline rigor. Voluntary frameworks (Singapore MGF, UK White Paper) inform trustworthy-AI dimensions.
Harmonization principles {#principles}
Six principles keep the framework coherent:
1. Highest-common-denominator baseline, never lowest. The baseline satisfies the most demanding obligation per dimension — not the average. EU AI Act Article 11 sets documentation depth; Colorado SB 205 sets consumer-facing explanation paths; the baseline covers both.
2. Overlays, never forks. Jurisdiction-unique obligations (e.g., NYC LL 144’s public bias-audit posting) become overlays on top of the baseline — never replacements.
3. One artifact, many audiences. A single AI system impact assessment is structured so EU AI Act Annex IV reviewers find sections 1 to 9, Colorado reviewers find the impact-assessment section, and Brazil reviewers find the AIA equivalent.
4. Obligation-to-control-to-evidence traceability. Every obligation maps to at least one control; every control produces at least one artifact. Traceability is the auditable backbone.
5. Explicit conflict resolution. Genuine conflicts (e.g., GDPR data minimization vs. LL 144 demographic data collection for bias audit) are documented with precedence-per-jurisdiction rationale defensible to both regulators.
6. Regulatory horizon scanning as a first-class discipline. Draft bills, implementing acts, and enforcement guidance are monitored continuously; updates follow a predictable cadence.
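Principle 4's traceability backbone reduces to two lookup tables plus a gap check. A minimal sketch follows; the obligation IDs, control IDs, and artifact names are illustrative assumptions, not entries from a real register:

```python
# Illustrative traceability data: every obligation maps to >= 1 control,
# every referenced control produces >= 1 evidence artifact.
OBLIGATION_TO_CONTROLS = {
    "EU-AIA-Art10-data-governance": ["GB-04", "GB-05"],
    "CO-SB205-impact-assessment": ["GB-03"],
    "NYC-LL144-bias-audit": ["GB-05"],
}

CONTROL_TO_ARTIFACTS = {
    "GB-03": ["ai_impact_assessment.md"],
    "GB-04": ["data_sheet.md"],
    "GB-05": ["fairness_evaluation_report.md"],
}

def trace_gaps(obligations, artifacts):
    """Return obligations with no control and controls with no artifact."""
    unmapped_obligations = [o for o, cs in obligations.items() if not cs]
    referenced = {c for cs in obligations.values() for c in cs}
    controls_without_evidence = [c for c in referenced if not artifacts.get(c)]
    return unmapped_obligations, controls_without_evidence

missing_o, missing_c = trace_gaps(OBLIGATION_TO_CONTROLS, CONTROL_TO_ARTIFACTS)
assert not missing_o and not missing_c  # backbone is complete for this toy data
```

Running the same check whenever an obligation or control is added is what makes traceability auditable rather than aspirational.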
Control library design pattern {#control-library}
The harmonized control library is built in three tiers: a global baseline that every AI system must satisfy, regional overlays keyed to jurisdiction, and tier overlays keyed to risk classification (high-risk, GPAI, etc.).
Tier 1: Global baseline controls
Applied to every AI system in scope, derived from the intersection of NIST AI RMF, ISO/IEC 42001, and the common denominators across the jurisdiction matrix.
| Control ID | Control name | Satisfies (partial list) | Evidence artifact |
|---|---|---|---|
| GB-01 | AI system inventory | EU AI Act Art. 49 registration; M-24-10 inventory; AIDA registry; China algorithm filing | Central AI system registry |
| GB-02 | Purpose specification and intended-use statement | EU AI Act Art. 13; Colorado SB 205; Brazil AIA; Singapore MGF | Model card “intended purpose” section |
| GB-03 | AI system impact assessment (AIIA) | EU AI Act Art. 27 FRIA; Colorado SB 205 impact assessment; Brazil AIA; AIDA assessment | AI impact assessment template |
| GB-04 | Training data provenance and quality | EU AI Act Art. 10; AB 2013; China GenAI Measures; Canada AIDA | Data sheet with provenance, licensing, PII summary |
| GB-05 | Bias and fairness evaluation | NYC LL 144; Colorado SB 205; EU AI Act Art. 10; AIDA; Singapore MGF | Fairness-evaluation report |
| GB-06 | Robustness and accuracy testing | EU AI Act Art. 15; Singapore MGF; UK AISI; Brazil PL 2338 | Testing plan and results |
| GB-07 | Human oversight design | EU AI Act Art. 14; M-24-10; Colorado SB 205; Brazil PL 2338; Singapore MGF | Oversight design specification |
| GB-08 | Transparency and user disclosure | EU AI Act Art. 50; SB 942; Colorado SB 205; China GenAI; Canada AIDA | Disclosure UX specs + notice text |
| GB-09 | Post-market monitoring plan | EU AI Act Art. 72; Brazil PL 2338; AIDA; MGF | Monitoring plan with KPIs and thresholds |
| GB-10 | Incident detection and reporting | EU AI Act Art. 73; AIDA; China GenAI | Incident register + reporting SOP |
| GB-11 | Change management and re-assessment | EU AI Act Art. 43(4); Colorado SB 205 (90-day re-assessment); MGF | Change log with re-assessment triggers |
| GB-12 | Supplier and third-party AI governance | EU AI Act Art. 25 (distributors); AIDA; sector rules | Supplier AI assessment and agreement |
| GB-13 | Evidence retention and audit trail | EU AI Act Art. 18 (10y); Brazil (5-10y); AIDA; PDPA | Retention schedule and WORM storage |
| GB-14 | AI literacy and role competency | EU AI Act Art. 4; ISO 42001 Cl. 7.2 | Training records and competency matrix |
Tier 2: Regional overlay controls
Overlays add only what the baseline does not already satisfy.
| Overlay | Added control | Why needed |
|---|---|---|
| EU | EU-01 Technical file per Annex IV; EU-02 Conformity assessment path selection (self-assessment vs notified body); EU-03 EU database registration (Art. 71); EU-04 Authorized representative for non-EU providers; EU-05 GPAI tier controls (Art. 55) | EU AI Act specifics not in baseline |
| US Federal | US-01 Dual-use model report to Commerce (>10^26 FLOPs); US-02 Federal-use-case inventory; US-03 AI impact assessment per M-24-10 | EO 14110 + M-24-10 specifics |
| California | CA-01 AB 2013 training-data disclosure; CA-02 SB 942 AI provenance watermarking; CA-03 CPPA ADMT alignment | State-specific artifact formats |
| Colorado | CO-01 Pre-decision notice; CO-02 Adverse-decision explanation and appeal; CO-03 Duty-to-accommodate review | SB 205 consumer-facing specifics |
| NYC | NYC-01 Independent bias-audit engagement; NYC-02 Public bias-audit posting; NYC-03 Candidate 10-day notice | LL 144 format and timing |
| UK | UK-01 Sector-regulator engagement log (ICO, FCA, Ofcom, MHRA); UK-02 Frontier-model AISI testing | Cross-sector principle operationalization |
| Singapore | SG-01 AI Verify testing report; SG-02 IMDA sandbox participation for GenAI | MGF toolkit alignment |
| Brazil | BR-01 ANPD / SIA algorithmic impact assessment registration; BR-02 LGPD data-protection integration | PL 2338 registry specifics |
| Canada | CAN-01 Public plain-language description; CAN-02 Harms notification procedure | AIDA specifics |
| China | CN-01 CAC security assessment filing; CN-02 Algorithm registry filing; CN-03 AI content labelling; CN-04 Content moderation SOP | Multi-statute stack specifics |
Tier 3: Risk-tier overlays
| Overlay | Added controls |
|---|---|
| High-risk | Conformity assessment, FRIA, notified-body engagement, enhanced monitoring, 10-year retention |
| GPAI / foundation-model | Systemic-risk evaluation (Art. 55), compute tracking, red-team program, GPAI model card, dual-use report |
| Consumer GenAI | Output labelling, content-moderation pipeline, misuse reporting channel |
| Federal / government use | M-24-10 compliance, public inventory, procurement flow-down |
Every AI system is tagged by jurisdiction-set and risk-tier and inherits baseline + applicable overlays automatically. A model serving EU consumers and NYC employers inherits GB-01 to GB-14 + EU overlay + NYC overlay + high-risk overlay. The operating model — not the team — computes the applicable control set.
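The inheritance rule above can be sketched in a few lines. The GB-, EU-, and NYC- control IDs come from the tables in this article; the tier-overlay IDs (`HR-*`, `GP-*`) and the function itself are illustrative assumptions, not a product API:

```python
# Baseline controls GB-01 .. GB-14 (Tier 1 table above).
BASELINE = [f"GB-{i:02d}" for i in range(1, 15)]

# Regional overlays keyed by jurisdiction tag (Tier 2 table, abridged).
REGIONAL_OVERLAYS = {
    "EU": ["EU-01", "EU-02", "EU-03", "EU-04", "EU-05"],
    "NYC": ["NYC-01", "NYC-02", "NYC-03"],
    "Brazil": ["BR-01", "BR-02"],
}

# Risk-tier overlays (Tier 3 table; IDs here are hypothetical labels).
TIER_OVERLAYS = {
    "high-risk": ["HR-conformity", "HR-fria", "HR-10y-retention"],
    "gpai": ["GP-systemic-risk", "GP-compute-tracking", "GP-red-team"],
}

def applicable_controls(jurisdictions, risk_tiers):
    """Baseline plus every overlay implied by the system's tags."""
    controls = list(BASELINE)
    for j in jurisdictions:
        controls += REGIONAL_OVERLAYS.get(j, [])
    for t in risk_tiers:
        controls += TIER_OVERLAYS.get(t, [])
    return controls

# The example from the text: EU consumers + NYC employers, high-risk tier.
cs = applicable_controls(["EU", "NYC"], ["high-risk"])
```

In practice this computation would run inside the AI system registry (GB-01) whenever a system's jurisdiction or risk tags change, so the applicable control set is always derived, never hand-maintained.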
Operating model impact {#operating-model}
A harmonized framework requires a two-layer governance operating model.
Global AI Council (baseline owner). Chaired by the Chief AI Officer, with legal, privacy, security, risk, engineering, product, and HR representation. Owns the baseline control library, evidence-portfolio architecture, horizon-scan function, and annual framework release. Meets monthly; reports quarterly to the board AI or risk committee.
Regional AI Committees (overlay owners). One per major regulatory cluster (EU, US federal, US state cluster, UK, APAC, LATAM, China). Chaired by a regional compliance or legal lead. Owns local regulator relationships, overlay controls, and local evidence formats. Escalates conflicts and emerging obligations to the Global Council.
Supporting structure:
- Regulatory horizon-scan team (2 to 4 FTE or external counsel network) — tracks draft bills, enforcement actions, implementing acts; publishes monthly intelligence brief.
- Evidence-portfolio office — maintains templates, the artifact registry, and traceability between obligations, controls, and evidence.
- Model-risk and assurance team — runs technical controls (bias testing, red-teaming, monitoring) whose outputs become evidence.
- AI ethics advisory (independent) — reviews the framework annually and any contested high-impact system before deployment.
A regional regulator inquiry is handled by the Regional Committee, with escalation to Global only if a baseline change is implied. Baseline policy changes are decided by Global with consultation from all regions. This discipline prevents fragmentation while keeping regulatory responsiveness local.
COMPEL stage mapping {#compel-mapping}
Harmonization maps naturally to COMPEL’s six stages:
| COMPEL stage | Harmonization activity |
|---|---|
| Calibrate | Inventory AI systems; classify per jurisdiction + risk tier; baseline-and-overlay applicability matrix |
| Organize | Stand up Global Council and Regional Committees; publish policy; deploy baseline controls; staff horizon-scan and evidence-portfolio office |
| Model | Design control library, control-to-obligation traceability, evidence-artifact templates; configure model cards, data sheets, impact assessments |
| Produce | Execute controls on each AI system; generate evidence; register in EU database, ANPD, CAC, algorithm registries |
| Evaluate | Run fairness, robustness, monitoring evaluations; conduct internal audits; engage notified bodies and independent bias auditors |
| Learn | Update framework from enforcement actions, audit findings, regulator feedback; issue quarterly release; retrain teams |
Evidence artifacts {#evidence}
A harmonized evidence portfolio includes, at minimum:
- AI system registry (tenant-scoped, jurisdiction- and risk-tier-tagged)
- AI policy (global, board-approved) and control library with traceability matrix
- Model card and data sheet per system (with jurisdiction-specific sections)
- AI system impact assessment per system (serves EU FRIA, Colorado IA, Brazil AIA, AIDA)
- Fairness / bias evaluation report (with LL 144 independent-audit section where applicable)
- Robustness, accuracy, and security testing report
- Human oversight design specification
- Transparency / disclosure UX specs and notice texts
- Post-market monitoring plan with KPIs and thresholds
- Incident register with per-jurisdiction reporting timelines
- Change log with re-assessment triggers
- Supplier AI assessments and agreements
- Registry submissions (EU database, ANPD / SIA, CAC filing, Algorithm Registry, NYC posting)
- Retention schedule and WORM-stored audit trail
- Horizon-scan brief (monthly) and framework release notes (quarterly)
- Training and competency records
Every artifact is structured so reviewers from any in-scope regulator find the sections they expect.
Metrics {#metrics}
A harmonized program reports on these core metrics:
- Coverage: percentage of in-scope AI systems with complete baseline + applicable overlay evidence
- Controls in green: percentage of applicable controls passing most recent assessment
- Obligation trace completeness: percentage of in-scope obligations mapped to at least one control
- Framework currency lag: median days from regulatory event to baseline or overlay update
- Incident reporting latency per jurisdiction: median time from detection to regulator notification, measured against each jurisdiction's window (EU AI Act: 15 days for serious incidents)
- Audit-finding cycle time: median days to close findings from notified body, CAC, ANPD, or independent auditor
- Evidence reuse rate: regulator audits satisfied per artifact (target > 3)
- Cost per governed AI system: total program cost divided by number of governed systems, trended quarterly
Targets are set at launch and revisited annually during framework release.
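Two of these metrics, coverage and obligation trace completeness, reduce to simple ratios over the system registry and the traceability matrix. The records below are toy assumptions for illustration:

```python
# Toy registry: which systems have complete baseline + overlay evidence.
systems = [
    {"id": "credit-model", "evidence_complete": True},
    {"id": "hiring-tool", "evidence_complete": False},
    {"id": "support-bot", "evidence_complete": True},
]

# Toy traceability matrix: obligations mapped to controls (or not yet).
obligation_to_controls = {
    "EU-Art50-disclosure": ["GB-08"],
    "NYC-LL144-audit": ["GB-05", "NYC-01"],
    "CO-SB205-notice": [],  # gap: no control mapped yet
}

# Coverage: share of in-scope systems with complete evidence.
coverage = sum(s["evidence_complete"] for s in systems) / len(systems)

# Trace completeness: share of obligations mapped to >= 1 control.
mapped = sum(bool(cs) for cs in obligation_to_controls.values())
trace_completeness = mapped / len(obligation_to_controls)

print(f"coverage: {coverage:.0%}")                      # 67%
print(f"trace completeness: {trace_completeness:.0%}")  # 67%
```

Both numbers trend toward 100% as overlays are deployed and the traceability matrix is filled; the quarterly framework release is the natural checkpoint for re-baselining them.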
Risks if skipped {#risks}
Enterprises that pursue multi-jurisdictional compliance without a harmonization framework consistently experience:
- Parallel-program tax: 2.5 to 4x the cost of a harmonized program per in-scope AI system, driven by artifact duplication and rework
- Enforcement surprises: fines compound across jurisdictions (EU AI Act Art. 99 up to 7% global turnover; Canada AIDA up to 3%; Brazil PL 2338 up to 2% Brazilian revenue)
- Control drift: a finding closed in one jurisdiction opens a gap in another
- Board-reporting incoherence: ten AI risk views equal no view; the board cannot form a defensible position
- Market lockout: missed EU database registration or NYC posting blocks deployment until cured
- Reputational concentration: a single public enforcement action becomes an all-markets reputational event
- Talent attrition: compliance engineers burn out in fragmented programs; institutional knowledge walks out
A harmonization framework is not optional at enterprise scale. It is the operating model that makes multi-jurisdictional AI compliance economically and organizationally sustainable.
Related standards and references {#references}
- EU AI Act (Regulation 2024/1689) — eur-lex.europa.eu. Articles 6, 10, 11, 13, 14, 15, 18, 27, 49, 50, 55, 71, 72, 73, 99.
- US Executive Order 14110 — whitehouse.gov. §4 dual-use model reporting.
- OMB Memorandum M-24-10 — whitehouse.gov/omb. Federal-agency AI governance.
- California AB 2013 — leginfo.legislature.ca.gov. Generative AI training-data transparency.
- California SB 942 — AI Transparency Act (provenance).
- Colorado AI Act (SB 24-205) — leg.colorado.gov. Consequential-decision obligations, effective Feb 2026.
- NYC Local Law 144 — rules.cityofnewyork.us. AEDT bias-audit rule.
- UK AI Regulation White Paper (2023) — gov.uk. Five cross-sector principles.
- Singapore Model AI Governance Framework — pdpc.gov.sg. Plus 2024 Model AI Governance Framework for GenAI.
- Brazil PL 2338/23 — senado.leg.br. AI Bill passed by Senate in December 2024.
- Canada AIDA (Bill C-27) — parl.ca. Artificial Intelligence and Data Act.
- China Interim Measures for GenAI Services (Aug 2023) — cac.gov.cn. Plus Algorithm Registry and Deep Synthesis Provisions.
- NIST AI RMF 1.0 — nist.gov/itl/ai-risk-management-framework.
- ISO/IEC 42001:2023 — iso.org/standard/81230.html.
- OECD AI Principles (2019 / 2024 update) — oecd.org/going-digital/ai/principles. Reference vocabulary across most regulators.
Related COMPEL articles
- Building a Multi-Jurisdictional AI Governance Operating Model
- The Geopolitical Landscape of AI Governance
- Building EU AI Act Evidence Portfolios
- NIST AI RMF to ISO 42001 Crosswalk
- Enterprise Multi-Framework Compliance Strategy
How to cite
COMPEL FlowRidge Team. (2026). “AI Regulatory Harmonization Framework: One Control Library, Many Jurisdictions.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-a3-ai-regulatory-harmonization-framework/