AITGP M9.1-Art03 v1.0 Reviewed 2026-04-06 Open Access
AITGP · Governance Professional

AI Regulatory Harmonization Framework: One Control Library, Many Jurisdictions



COMPEL Body of Knowledge — Regulatory Bridge Series Cluster A Flagship Article — Multi-Jurisdictional Harmonization


Why harmonization matters {#why}

An enterprise that ships AI-enabled products or services across more than two jurisdictions is already operating inside a regulatory patchwork. A consumer-lending model deployed in the EU triggers the EU AI Act's Annex III high-risk obligations. A candidate-screening variant of the same system offered to NYC employers triggers Local Law 144's AEDT bias-audit and candidate-notice rules; scored against Colorado employees, it triggers Colorado SB 205 duty-to-accommodate and impact-assessment obligations. Retrained on EU data and shipped to Brazil, it triggers PL 2338/23 registration. The foundation model underneath all of these triggers the EU AI Act Article 55 GPAI obligations, the US EO 14110 reporting threshold, and China's Interim GenAI Measures if any output reaches users in China.

Running ten parallel compliance programs produces five predictable failures:

  1. Evidence duplication at unsustainable cost. A single impact assessment is written ten different ways, once per regulator. Documentation teams spend 40 to 60 percent of cycles reformatting rather than improving controls.
  2. Contradictory implementation. Regulation A says “disclose bias audit results publicly.” Regulation B says “protect audit results as confidential business information.” Teams resolve these with ad-hoc workarounds that neither auditor accepts.
  3. Operating-model fragmentation. Risk registers, incident logs, and model inventories fork per jurisdiction. The board sees ten different AI risk views and cannot form a single picture.
  4. Stale controls. Every regulation evolves. Without a shared baseline, every update triggers a ten-program change cycle.
  5. Market-access failure. Missed deadlines block revenue. Enforcement fines compound (EU AI Act Article 99 alone carries fines up to EUR 35M or 7% global turnover).

Harmonization solves these failures by designing the compliance operating model from first principles around a single control library, a single evidence portfolio, and a single governance operating model. Regulator-specific deltas become overlays, not forks.

Jurisdiction comparison matrix {#jurisdictions}

The table below maps ten major jurisdictions against eight obligation types. Read it horizontally to see what a given regulator expects; read it vertically to see how one obligation varies across regulators. The matrix is the starting point for designing the control library in the next section.

| Jurisdiction / Regulation | Risk classification | Transparency to users | Technical documentation | Human oversight | Post-market monitoring | Data governance | Evidence retention | Maximum fines |
|---|---|---|---|---|---|---|---|---|
| EU AI Act (Reg 2024/1689) | 4-tier: unacceptable / high (Annex III) / limited / minimal + separate GPAI tier | Art. 50: AI-interaction disclosure, deepfake labelling, emotion-recognition notice | Art. 11 + Annex IV: full technical file, 13 mandated sections | Art. 14: design for effective oversight, right to intervene, clear interface | Art. 72: post-market monitoring plan, serious incident reporting within 15 days | Art. 10: training / validation / testing datasets, bias mitigation, documented provenance | Art. 18: 10 years after last placement on market | EUR 35M or 7% global turnover (prohibited practices); EUR 15M or 3% (high-risk breaches) |
| US Federal — EO 14110 + OMB M-24-10 | Purpose-based: rights-impacting / safety-impacting per M-24-10; 10^26 FLOPs GPAI threshold | M-24-10: public inventory of federal AI use cases, notices to affected individuals | Dual-use model reports to Commerce; AI impact assessments per M-24-10 | M-24-10: human in loop for rights-impacting uses in federal agencies | M-24-10: ongoing monitoring with documented metrics | EO 14110 §10.1(b): data provenance for synthetic content | Agency records schedules (typically 3 to 7 years); indefinite for safety-critical | No direct civil penalty (pre-federal-legislation); procurement disqualification and OIG findings |
| California — AB 2013 + SB 1047 residuals | AB 2013: GenAI training-data disclosure for any model; SB 942 / successor: provenance watermarking | SB 942: AI-provenance disclosures for generative output; AB 2013: training-data summaries | AB 2013: high-level training-data documentation (sources, licensing, PII handling) | Sector-specific (insurance, credit, healthcare) | Under development in CPPA ADMT regulations | AB 2013: dataset provenance and PII summaries mandatory | 5 years minimum; CPPA ADMT proposes 7 | Civil penalties up to USD 25,000 per violation (AB 2013); higher under sectoral laws |
| Colorado AI Act (SB 205, eff. Feb 2026) | Consequential decisions: employment, education, finance, housing, essential services, government, healthcare, insurance, legal | Pre-decision notice to consumers; post-decision explanation and appeal path | Annual impact assessment per high-risk AI; developer documentation to deployers | Reasonable care duty; documented review of adverse decisions | Impact assessment updated within 90 days of material change | Bias-risk assessment across protected classes | 3 years minimum for impact assessments | Attorney General enforcement; CUPA penalties up to USD 20,000 per violation |
| New York City — LL 144 (AEDT) | Scope: automated employment decision tools used for hiring or promotion | Candidate notice 10 business days before use; data-type disclosure | Published summary of independent bias audit | Not mandated directly (focus is bias, not oversight design) | Bias audit must be re-run at least annually | Categories of data used must be disclosed | Bias audit results posted publicly for 6 months minimum | Civil penalty USD 500 first violation, up to USD 1,500 subsequent; per-candidate per-day |
| UK — AI Regulation White Paper + AI Bill (draft) | Context-based, 5 cross-sector principles (safety, transparency, fairness, accountability, contestability); regulator-led | Principle 2: transparency appropriate to context; ICO / Ofcom / FCA issue sector rules | Principle 4: documented accountability; sector regulators set artifact requirements | Principle 5: contestability and redress; human review for high-impact automated decisions | Sector-regulator driven; AISI testing for frontier models | Principle 3: fairness including dataset bias assessment | Sector-specific (FCA 5 years, ICO 6 years for DPIAs) | Currently via existing regulators: ICO up to GBP 17.5M or 4% turnover; draft AI Bill proposes dedicated regime |
| Singapore — Model AI Governance Framework (MGF) + GenAI Framework | Voluntary, risk-based matrix: severity × probability across 4 tiers | MGF: explicit disclosure when AI is material to decision; veracity labels for GenAI | AI Verify Foundation toolkit: documented testing across 11 trustworthy-AI dimensions | MGF §3: human-in / on / out-of-loop spectrum based on risk tier | MGF §4: ongoing monitoring; IMDA GenAI eval sandboxes | MGF §2.2: dataset curation and quality controls | 5 years recommended; PDPC sector rules apply | No statutory AI penalties; PDPA fines up to SGD 1M or 10% annual turnover; sector-specific penalties |
| Brazil — PL 2338/23 (AI Bill, Senate-approved 2024) | 4-tier: prohibited / high / significant / low; sector-specific for public authorities | Right to information about AI use; explanation of automated decisions affecting rights | Algorithmic impact assessment (AIA) for high-risk, registered with ANPD / SIA | Meaningful human review for high-risk decisions affecting rights | Continuous monitoring and incident reporting to regulator | Non-discrimination and data quality obligations; LGPD integration | 5 years; 10 for high-risk | Up to 2% of Brazilian revenue, max BRL 50M per violation; daily fines up to BRL 1M |
| Canada — AIDA (Bill C-27, in parliamentary process) | High-impact systems: scope defined by regulation; general-purpose AI tier under amendments | Plain-language description of high-impact system publicly available | Documentation of design, training data, performance testing | Required measures to prevent biased output and monitor harms | Ongoing monitoring; notification of material harm | Anonymized or de-identified data use requirements | Retention per federal records guidance | Administrative penalties up to CAD 10M or 3% global turnover; criminal offences up to CAD 25M |
| China — Interim GenAI Measures (Aug 2023) + Algorithm Registry | Generative AI: public-facing services regulated; deep-synthesis and algorithmic recommendation covered separately | Explicit labelling of AI-generated content; clear identification to users | Security assessment filing with CAC prior to launch; algorithm filing under Algorithm Registry | Developer accountability for content; takedown on unlawful content | Mandatory reporting of security incidents; content moderation logs | Training data legality, representativeness, and IP compliance | 6 months of user logs minimum; 3 years of safety assessments | Warning, rectification orders, service suspension, fines under Cybersecurity Law and Data Security Law up to RMB 10M or 5% revenue |

Three structural truths emerge:

  • Documentation artifacts overlap by 70 to 85 percent. Model cards, data sheets, impact assessments, and monitoring plans satisfy most obligations across jurisdictions with local sections appended.
  • Transparency formats are the biggest divergence. Disclosure content is similar; form, audience, timing, and retention differ sharply (NYC LL 144 requires public web posting; EU AI Act Article 50 requires interactive disclosure at the point of use).
  • Enforcement posture drives prioritization. Hard-fine jurisdictions (EU, Canada, Brazil, China) dictate baseline rigor. Voluntary frameworks (Singapore MGF, UK White Paper) inform trustworthy-AI dimensions.

Harmonization principles {#principles}

Six principles keep the framework coherent:

1. Highest-common-denominator baseline, never lowest. The baseline satisfies the most demanding obligation per dimension — not the average. EU AI Act Article 11 sets documentation depth; Colorado SB 205 sets consumer-facing explanation paths; the baseline covers both.

2. Overlays, never forks. Jurisdiction-unique obligations (e.g., NYC LL 144’s public bias-audit posting) become overlays on top of the baseline — never replacements.

3. One artifact, many audiences. A single AI system impact assessment is structured so EU AI Act Annex IV reviewers find sections 1 to 13, Colorado reviewers find the impact-assessment section, and Brazil reviewers find the AIA equivalent.

4. Obligation-to-control-to-evidence traceability. Every obligation maps to at least one control; every control produces at least one artifact. Traceability is the auditable backbone.

5. Explicit conflict resolution. Genuine conflicts (e.g., GDPR data minimization vs. LL 144 demographic data collection for bias audit) are documented with precedence-per-jurisdiction rationale defensible to both regulators; a sketch of such a register entry follows this list.

6. Regulatory horizon scanning as a first-class discipline. Draft bills, implementing acts, and enforcement guidance are monitored continuously; updates follow a predictable cadence.
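
Principle 5 lends itself to a structured conflict register. The following is a minimal sketch, assuming such a register is kept as data; the ConflictEntry type, its field names, and the resolution wording are illustrative assumptions rather than prescribed content.

```python
# Hypothetical conflict-register entry (illustrative only).
from dataclasses import dataclass

@dataclass
class ConflictEntry:
    conflict_id: str
    obligations: tuple[str, str]     # the two obligations in tension
    description: str
    resolution: dict[str, str]       # precedence rationale, per jurisdiction

CR_001 = ConflictEntry(
    conflict_id="CR-001",
    obligations=(
        "GDPR Art. 5(1)(c) data minimisation",
        "NYC LL 144 demographic data collection for bias audit",
    ),
    description="Bias auditing needs protected-attribute data that minimisation discourages.",
    resolution={
        "EU":  "Collect only what the audit strictly requires; document necessity and retention limits.",
        "NYC": "Collect the audited categories; publish only aggregate impact ratios, never row-level data.",
    },
)
```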

Control library design pattern {#control-library}

The harmonized control library is built in three tiers: a global baseline that every AI system must satisfy, regional overlays keyed to jurisdiction, and tier overlays keyed to risk classification (high-risk, GPAI, etc.).

Tier 1: Global baseline controls

Applied to every AI system in scope, derived from the intersection of NIST AI RMF, ISO/IEC 42001, and the common denominators across the jurisdiction matrix.

| Control ID | Control name | Satisfies (partial list) | Evidence artifact |
|---|---|---|---|
| GB-01 | AI system inventory | EU AI Act Art. 49 registration; M-24-10 inventory; AIDA registry; China algorithm filing | Central AI system registry |
| GB-02 | Purpose specification and intended-use statement | EU AI Act Art. 13; Colorado SB 205; Brazil AIA; Singapore MGF | Model card “intended purpose” section |
| GB-03 | AI system impact assessment (AIIA) | EU AI Act Art. 27 FRIA; Colorado SB 205 impact assessment; Brazil AIA; AIDA assessment | AI impact assessment template |
| GB-04 | Training data provenance and quality | EU AI Act Art. 10; AB 2013; China GenAI Measures; Canada AIDA | Data sheet with provenance, licensing, PII summary |
| GB-05 | Bias and fairness evaluation | NYC LL 144; Colorado SB 205; EU AI Act Art. 10; AIDA; Singapore MGF | Fairness-evaluation report |
| GB-06 | Robustness and accuracy testing | EU AI Act Art. 15; Singapore MGF; UK AISI; Brazil PL 2338 | Testing plan and results |
| GB-07 | Human oversight design | EU AI Act Art. 14; M-24-10; Colorado SB 205; Brazil PL 2338; Singapore MGF | Oversight design specification |
| GB-08 | Transparency and user disclosure | EU AI Act Art. 50; SB 942; Colorado SB 205; China GenAI; Canada AIDA | Disclosure UX specs + notice text |
| GB-09 | Post-market monitoring plan | EU AI Act Art. 72; Brazil PL 2338; AIDA; MGF | Monitoring plan with KPIs and thresholds |
| GB-10 | Incident detection and reporting | EU AI Act Art. 73; AIDA; China GenAI | Incident register + reporting SOP |
| GB-11 | Change management and re-assessment | EU AI Act Art. 43(4); Colorado SB 205 (90-day re-assessment); MGF | Change log with re-assessment triggers |
| GB-12 | Supplier and third-party AI governance | EU AI Act Art. 25 (distributors); AIDA; sector rules | Supplier AI assessment and agreement |
| GB-13 | Evidence retention and audit trail | EU AI Act Art. 18 (10y); Brazil (5-10y); AIDA; PDPA | Retention schedule and WORM storage |
| GB-14 | AI literacy and role competency | EU AI Act Art. 4; ISO 42001 Cl. 7.2 | Training records and competency matrix |
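
The “Satisfies” and “Evidence artifact” columns are what make the traceability of principle 4 auditable. The sketch below shows one possible way to encode a baseline control as data so that unmapped obligations surface automatically; the Control type, its field names, and the helper function are assumptions made for illustration, not part of the framework itself.

```python
# Illustrative encoding of a global baseline control with its obligation trace.
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    control_id: str
    name: str
    satisfies: tuple[str, ...]   # obligations this control (at least partially) satisfies
    evidence_artifact: str       # the artifact the control must produce

GB_03 = Control(
    control_id="GB-03",
    name="AI system impact assessment (AIIA)",
    satisfies=(
        "EU AI Act Art. 27 FRIA",
        "Colorado SB 205 impact assessment",
        "Brazil PL 2338 AIA",
        "Canada AIDA assessment",
    ),
    evidence_artifact="AI impact assessment template",
)

def untraced_obligations(obligations: set[str], library: list[Control]) -> set[str]:
    """Return obligations not yet mapped to any control: a traceability gap (principle 4)."""
    covered = {o for control in library for o in control.satisfies}
    return obligations - covered
```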

Tier 2: Regional overlay controls

Overlays add only what the baseline does not already satisfy.

| Overlay | Added control | Why needed |
|---|---|---|
| EU | EU-01 Technical file per Annex IV; EU-02 Conformity assessment path selection (self-assessment vs notified body); EU-03 EU database registration (Art. 71); EU-04 Authorized representative for non-EU providers; EU-05 GPAI tier controls (Art. 55) | EU AI Act specifics not in baseline |
| US Federal | US-01 Dual-use model report to Commerce (>10^26 FLOPs); US-02 Federal-use-case inventory; US-03 AI impact assessment per M-24-10 | EO 14110 + M-24-10 specifics |
| California | CA-01 AB 2013 training-data disclosure; CA-02 SB 942 AI provenance watermarking; CA-03 CPPA ADMT alignment | State-specific artifact formats |
| Colorado | CO-01 Pre-decision notice; CO-02 Adverse-decision explanation and appeal; CO-03 Duty-to-accommodate review | SB 205 consumer-facing specifics |
| NYC | NYC-01 Independent bias-audit engagement; NYC-02 Public bias-audit posting; NYC-03 Candidate 10-day notice | LL 144 format and timing |
| UK | UK-01 Sector-regulator engagement log (ICO, FCA, Ofcom, MHRA); UK-02 Frontier-model AISI testing | Cross-sector principle operationalization |
| Singapore | SG-01 AI Verify testing report; SG-02 IMDA sandbox participation for GenAI | MGF toolkit alignment |
| Brazil | BR-01 ANPD / SIA algorithmic impact assessment registration; BR-02 LGPD data-protection integration | PL 2338 registry specifics |
| Canada | CAN-01 Public plain-language description; CAN-02 Harms notification procedure | AIDA specifics |
| China | CN-01 CAC security assessment filing; CN-02 Algorithm registry filing; CN-03 AI content labelling; CN-04 Content moderation SOP | Multi-statute stack specifics |

Tier 3: Risk-tier overlays

| Overlay | Added controls |
|---|---|
| High-risk | Conformity assessment, FRIA, notified-body engagement, enhanced monitoring, 10-year retention |
| GPAI / foundation-model | Systemic-risk evaluation (Art. 55), compute tracking, red-team program, GPAI model card, dual-use report |
| Consumer GenAI | Output labelling, content-moderation pipeline, misuse reporting channel |
| Federal / government use | M-24-10 compliance, public inventory, procurement flow-down |

Every AI system is tagged by jurisdiction-set and risk-tier and inherits the baseline plus its applicable overlays automatically. A model serving EU consumers and NYC employers inherits GB-01 to GB-14 + the EU overlay + the NYC overlay + the high-risk overlay. The operating model — not the team — computes the applicable control set.
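
As a concrete illustration of that inheritance rule, the sketch below computes an applicable control set from a system's jurisdiction set and risk tiers. The baseline and regional overlay IDs mirror the tables above; the risk-tier control IDs, data structures, and function names are assumptions made for the example, not a prescribed implementation.

```python
# Sketch: compute the applicable control set as baseline + regional + risk-tier overlays.
from dataclasses import dataclass, field

BASELINE = [f"GB-{n:02d}" for n in range(1, 15)]            # GB-01 .. GB-14

REGIONAL_OVERLAYS = {
    "EU":  ["EU-01", "EU-02", "EU-03", "EU-04", "EU-05"],
    "NYC": ["NYC-01", "NYC-02", "NYC-03"],
    "CO":  ["CO-01", "CO-02", "CO-03"],
}

RISK_TIER_OVERLAYS = {                                       # hypothetical IDs
    "high-risk": ["HR-CONFORMITY", "HR-FRIA", "HR-ENHANCED-MONITORING", "HR-10Y-RETENTION"],
    "gpai":      ["GP-SYSTEMIC-RISK", "GP-COMPUTE-TRACKING", "GP-RED-TEAM"],
}

@dataclass
class AISystem:
    name: str
    jurisdictions: set[str]
    risk_tiers: set[str] = field(default_factory=set)

def applicable_controls(system: AISystem) -> list[str]:
    """Baseline always applies; overlays are added, never substituted (principle 2)."""
    controls = set(BASELINE)
    for jurisdiction in system.jurisdictions:
        controls.update(REGIONAL_OVERLAYS.get(jurisdiction, []))
    for tier in system.risk_tiers:
        controls.update(RISK_TIER_OVERLAYS.get(tier, []))
    return sorted(controls)

# A high-risk model serving EU consumers and NYC employers:
screening_model = AISystem("candidate-screening-v3", {"EU", "NYC"}, {"high-risk"})
print(applicable_controls(screening_model))
```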

Operating model impact {#operating-model}

A harmonized framework requires a two-layer governance operating model.

Global AI Council (baseline owner). Chaired by the Chief AI Officer, with legal, privacy, security, risk, engineering, product, and HR representation. Owns the baseline control library, evidence-portfolio architecture, horizon-scan function, and annual framework release. Meets monthly; reports quarterly to the board's AI or risk committee.

Regional AI Committees (overlay owners). One per major regulatory cluster (EU, US federal, US state cluster, UK, APAC, LATAM, China). Chaired by a regional compliance or legal lead. Owns local regulator relationships, overlay controls, and local evidence formats. Escalates conflicts and emerging obligations to the Global Council.

Supporting structure:

  • Regulatory horizon-scan team (2 to 4 FTE or external counsel network) — tracks draft bills, enforcement actions, implementing acts; publishes monthly intelligence brief.
  • Evidence-portfolio office — maintains templates, the artifact registry, and traceability between obligations, controls, and evidence.
  • Model-risk and assurance team — runs technical controls (bias testing, red-teaming, monitoring) whose outputs become evidence.
  • AI ethics advisory (independent) — reviews the framework annually and any contested high-impact system before deployment.

A regional regulator inquiry is handled by the Regional Committee, with escalation to Global only if a baseline change is implied. Baseline policy changes are decided by Global with consultation from all regions. This discipline prevents fragmentation while keeping regulatory responsiveness local.

COMPEL stage mapping {#compel-mapping}

Harmonization maps naturally to COMPEL’s six stages:

| COMPEL stage | Harmonization activity |
|---|---|
| Calibrate | Inventory AI systems; classify per jurisdiction + risk tier; baseline-and-overlay applicability matrix |
| Organize | Stand up Global Council and Regional Committees; publish policy; deploy baseline controls; staff horizon-scan and evidence-portfolio office |
| Model | Design control library, control-to-obligation traceability, evidence-artifact templates; configure model cards, data sheets, impact assessments |
| Produce | Execute controls on each AI system; generate evidence; register in EU database, ANPD, CAC, algorithm registries |
| Evaluate | Run fairness, robustness, monitoring evaluations; conduct internal audits; engage notified bodies and independent bias auditors |
| Learn | Update framework from enforcement actions, audit findings, regulator feedback; issue quarterly release; retrain teams |

Evidence artifacts {#evidence}

A harmonized evidence portfolio includes, at minimum:

  • AI system registry (tenant-scoped, jurisdiction- and risk-tier-tagged)
  • AI policy (global, board-approved) and control library with traceability matrix
  • Model card and data sheet per system (with jurisdiction-specific sections)
  • AI system impact assessment per system (serves EU FRIA, Colorado IA, Brazil AIA, AIDA)
  • Fairness / bias evaluation report (with LL 144 independent-audit section where applicable)
  • Robustness, accuracy, and security testing report
  • Human oversight design specification
  • Transparency / disclosure UX specs and notice texts
  • Post-market monitoring plan with KPIs and thresholds
  • Incident register with per-jurisdiction reporting timelines
  • Change log with re-assessment triggers
  • Supplier AI assessments and agreements
  • Registry submissions (EU database, ANPD / SIA, CAC filing, Algorithm Registry, NYC posting)
  • Retention schedule and WORM-stored audit trail
  • Horizon-scan brief (monthly) and framework release notes (quarterly)
  • Training and competency records

Every artifact is structured so reviewers from any in-scope regulator find the sections they expect.
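
To make “one artifact, many audiences” concrete, here is a hypothetical section map for the AI system impact assessment, indicating which reviewers each section is written to serve. The section titles and the sections_for helper are illustrative assumptions; the actual template would be defined by the evidence-portfolio office.

```python
# Hypothetical AIIA section map: one artifact, routed to many reviewer audiences.
AIIA_SECTION_MAP = {
    "1. System description and intended purpose":        ["EU Annex IV", "Colorado SB 205", "Brazil AIA"],
    "2. Data governance and provenance":                 ["EU Art. 10", "California AB 2013", "China GenAI Measures"],
    "3. Fundamental rights / consumer impact analysis":  ["EU Art. 27 FRIA", "Colorado SB 205", "Canada AIDA"],
    "4. Bias and fairness evaluation summary":           ["NYC LL 144", "Colorado SB 205", "EU Art. 10"],
    "5. Human oversight design":                         ["EU Art. 14", "OMB M-24-10", "Brazil PL 2338"],
    "6. Post-market monitoring and incident handling":   ["EU Art. 72/73", "Brazil PL 2338", "Canada AIDA"],
    "Annex A. Jurisdiction-specific sections":           ["NYC public posting summary", "ANPD/SIA registration extract"],
}

def sections_for(reviewer: str) -> list[str]:
    """List the sections a given reviewer should be pointed to."""
    return [section for section, audiences in AIIA_SECTION_MAP.items()
            if any(reviewer in audience for audience in audiences)]

print(sections_for("Colorado SB 205"))
```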

Metrics {#metrics}

A harmonized program reports on these core metrics:

  • Coverage: percentage of in-scope AI systems with complete baseline + applicable overlay evidence
  • Controls in green: percentage of applicable controls passing most recent assessment
  • Obligation trace completeness: percentage of in-scope obligations mapped to at least one control
  • Framework currency lag: median days from regulatory event to baseline or overlay update
  • Incident MTTR per jurisdiction: median time from detection to regulator notification, within jurisdiction window (EU AI Act: 15 days for serious incidents)
  • Audit-finding cycle time: median days to close findings from notified body, CAC, ANPD, or independent auditor
  • Evidence reuse rate: regulator audits satisfied per artifact (target > 3)
  • Cost per governed AI system: total program cost divided by number of governed systems, trended quarterly

Targets are set at launch and revisited annually during framework release.
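
Most of these metrics reduce to simple queries over the AI system registry and the horizon-scan log. A sketch of two of them follows, under assumed record shapes (the in_scope, evidence_complete, occurred_on, and updated_on fields are hypothetical); a real implementation would query the registry of record.

```python
# Sketch of two program metrics computed over assumed registry records.
from statistics import median

def coverage(registry: list[dict]) -> float:
    """Coverage: % of in-scope AI systems with complete baseline + overlay evidence."""
    in_scope = [s for s in registry if s["in_scope"]]
    complete = [s for s in in_scope if s["evidence_complete"]]
    return 100.0 * len(complete) / len(in_scope) if in_scope else 0.0

def framework_currency_lag(events: list[dict]) -> float:
    """Median days from a regulatory event to the corresponding framework update.

    Each event record is assumed to carry datetime.date values under
    'occurred_on' and 'updated_on' (the latter absent while an update is pending).
    """
    lags = [(e["updated_on"] - e["occurred_on"]).days
            for e in events if e.get("updated_on")]
    return float(median(lags)) if lags else float("nan")
```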

Risks if skipped {#risks}

Enterprises that comply without a harmonization framework consistently experience:

  • Parallel-program tax: 2.5 to 4x the cost of a harmonized program per in-scope AI system, driven by artifact duplication and rework
  • Enforcement surprises: fines compound across jurisdictions (EU AI Act Art. 99 up to 7% global turnover; Canada AIDA up to 3%; Brazil PL 2338 up to 2% Brazilian revenue)
  • Control drift: a finding closed in one jurisdiction opens a gap in another
  • Board-reporting incoherence: ten AI risk views equal no view; the board cannot form a defensible position
  • Market lockout: missed EU database registration or NYC posting blocks deployment until cured
  • Reputational concentration: a single public enforcement action becomes an all-markets reputational event
  • Talent attrition: compliance engineers burn out in fragmented programs; institutional knowledge walks out

A harmonization framework is not optional at enterprise scale. It is the operating model that makes multi-jurisdictional AI compliance economically and organizationally sustainable.

How to cite

COMPEL FlowRidge Team. (2026). “AI Regulatory Harmonization Framework: One Control Library, Many Jurisdictions.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-a3-ai-regulatory-harmonization-framework/

Frequently Asked Questions

Why not just comply with the strictest regulation and be done with it?
Because "strictest" is dimension-specific. The EU AI Act has the heaviest documentation burden, but Colorado SB 205 has broader duty-to-accommodate obligations, New York LL 144 has the most specific bias-audit publication rule, and China's GenAI Measures have the most restrictive training-data provenance requirements. A single "highest common denominator" policy would either violate one regulation or impose unworkable cost. Harmonization uses a global baseline plus jurisdiction-specific overlays.
What is the difference between harmonization and a compliance matrix?
A compliance matrix lists obligations side by side and leaves the operating model fragmented. Harmonization goes further: it designs one control library where each global control satisfies multiple regulations simultaneously, and overlay controls handle only jurisdiction-specific deltas. One evidence artifact serves many auditors.
Does harmonization work for GPAI and foundation models?
Yes, with adjustments. GPAI obligations under EU AI Act Article 55, the US EO 14110 dual-use model reporting, and China's algorithm registry each trigger at different compute and capability thresholds. The harmonized control library includes a GPAI overlay with compute-tracking, red-team, and systemic-risk evaluation controls that satisfy all three triggers.
Who owns the harmonization framework inside the organization?
A global AI council (chaired by Chief AI Officer or equivalent) owns the baseline. Regional AI committees own jurisdiction overlays and local regulator relationships. The chief compliance officer or general counsel signs off on the framework annually and approves material changes.
How do we keep the framework current as regulations evolve?
Establish a regulatory horizon-scan function that tracks draft bills, implementing acts, and enforcement guidance across all in-scope jurisdictions. Release framework updates on a predictable cadence (typically quarterly, with out-of-cycle updates for major events like the EU AI Act Article 6 delegated acts or US federal AI legislation).