This article maps the major approaches to AI governance worldwide, introduces the concept of sovereign AI, and equips foundations-level practitioners with the awareness they need to navigate a world where AI governance is both a regulatory challenge and a geopolitical reality.
Three Philosophies of AI Governance
The global AI governance landscape can be understood through three dominant philosophical approaches, each reflecting the political and economic priorities of its origin:
The European Approach: Rights-Based Regulation
The European Union has established itself as the global standard-setter for AI regulation through the EU AI Act (Regulation 2024/1689), the world’s first comprehensive AI-specific legislation. The EU approach is grounded in the protection of fundamental rights — human dignity, non-discrimination, privacy, and democratic participation.
The EU AI Act classifies AI systems by risk level: prohibited practices (social scoring, certain biometric systems), high-risk systems (healthcare, employment, critical infrastructure, law enforcement), limited risk (transparency obligations for chatbots and deepfakes), and minimal risk (unregulated). This risk-based classification framework has become a reference model globally.
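To make the tiered structure concrete, the sketch below (in Python, and purely illustrative) shows how an organisation might encode the four tiers in an internal inventory. The keyword sets and the classify function are hypothetical simplifications: the Act defines prohibited practices in Article 5 and high-risk categories in Annex III, and actual classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the EU AI Act's risk-based structure."""
    PROHIBITED = "prohibited"   # e.g. social scoring (Article 5 practices)
    HIGH = "high"               # e.g. employment, critical infrastructure (Annex III)
    LIMITED = "limited"         # transparency obligations (chatbots, deepfakes)
    MINIMAL = "minimal"         # largely unregulated

# Hypothetical keyword sets for illustration only; the Act defines these
# categories in legal text, and real classification needs legal analysis.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "employment", "critical_infrastructure", "law_enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake"}

def classify(use_case: str) -> RiskTier:
    """Map a simplified use-case label to an illustrative risk tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment"))  # RiskTier.HIGH
```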
Key characteristics of the European approach include strong enforcement mechanisms with penalties of up to 7% of global annual revenue, mandatory conformity assessments for high-risk systems, a precautionary stance that requires proof of safety before deployment, and extraterritorial reach: the regulation applies to any AI system placed on the EU market, regardless of where the provider is established.
The EU approach has been criticised by some as potentially stifling innovation. Its supporters argue that clear rules create market certainty and that the EU is defining the terms on which the global AI market will operate — the “Brussels Effect” whereby EU regulation becomes the de facto global standard because multinational companies find it more efficient to comply globally than to maintain separate approaches.
The American Approach: Sector-Specific and Innovation-Oriented
The United States has deliberately avoided comprehensive federal AI legislation, relying instead on a patchwork of sector-specific regulation, voluntary frameworks, and executive action. The NIST AI Risk Management Framework provides voluntary guidance. Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence establishes reporting requirements for advanced AI development. Existing agencies — the FTC, EEOC, FDA, SEC, and CFPB — apply their sector-specific authority to AI within their domains.
This approach reflects the American philosophy that innovation should not be constrained by pre-emptive regulation and that existing legal frameworks (consumer protection, anti-discrimination, product liability) are adaptable to AI challenges. The result is significant regulatory flexibility but also uncertainty — companies must navigate guidance from multiple agencies, state-level legislation (Colorado AI Act, NYC Local Law 144, California’s AI training data transparency requirements), and the ever-present risk of FTC enforcement action.
The American approach is evolving rapidly. The Colorado AI Act, the first comprehensive state-level AI regulation, takes effect in June 2026 (delayed from its original February 2026 date) and may catalyse further state action. The question of whether a federal AI law will emerge remains open.
The Asian Approaches: Diverse and Pragmatic
Asia presents no single approach but rather a spectrum of governance philosophies:
China has moved aggressively to regulate specific AI applications through a series of targeted regulations: the Algorithm Recommendation Management Provisions (2022), the Deep Synthesis Management Provisions (deepfakes, 2023), and the Interim Measures for Generative AI Services (2023). China’s approach is notable for its speed of implementation, its content alignment requirements (AI outputs must align with socialist core values), and its algorithm registration system requiring disclosure of algorithmic logic to regulators.
Singapore exemplifies the voluntary, industry-partnership approach through its Model AI Governance Framework and the AI Verify testing toolkit — an open-source tool for organisations to demonstrate their AI governance practices. Singapore’s pragmatic stance positions it as a trusted jurisdiction for AI development while maintaining governance standards.
Japan has adopted a principles-based, voluntary approach centred on its Social Principles of Human-Centric AI, while playing a significant role in international AI governance: as G7 chair it launched the 2023 Hiroshima AI Process, which produced the first international code of conduct for organisations developing advanced AI systems.
India is developing its framework through the Digital Personal Data Protection Act (2023) and NITI Aayog’s Responsible AI Principles, with a focus on balancing AI adoption for economic development with governance safeguards. The RBI has been particularly active in regulating AI in financial services.
The Emergence of Sovereign AI
Sovereign AI is the concept that nations and regions should develop and control their own AI capabilities rather than depending on foreign technology providers. It is driven by three converging forces:
National security. AI capabilities are increasingly viewed as strategic national assets. The ability to develop, deploy, and control AI systems — particularly in defence, intelligence, and critical infrastructure — is considered essential for national sovereignty.
Economic competitiveness. Nations that develop domestic AI capability capture economic value, create high-skilled jobs, and reduce dependency on foreign technology platforms. The semiconductor supply chain concentration (primarily in Taiwan, South Korea, and the Netherlands) has heightened awareness of strategic dependency.
Regulatory control. Nations that rely entirely on foreign AI systems face a governance challenge: how to regulate systems they did not develop, cannot inspect, and may not understand. Sovereign AI capability provides the foundation for effective regulatory oversight.
Sovereign AI manifests in several dimensions:
Data sovereignty — ensuring that the data used for AI training and inference remains under national or organisational control, subject to domestic law and governance policies.
Compute sovereignty — reducing dependency on foreign-controlled cloud infrastructure and semiconductor supply chains for AI workloads.
Model sovereignty — the ability to develop, inspect, modify, and replace AI models without dependency on foreign proprietary systems.
Talent sovereignty — developing domestic AI expertise rather than relying on imported skills.
For enterprises, sovereign AI creates both challenges (multiple, potentially conflicting jurisdictional requirements) and opportunities (trusted partner status in markets that value sovereignty alignment).
What This Means for Practitioners
The geopolitical landscape of AI governance has practical implications for every organisation deploying AI:
Compliance complexity is growing. An organisation operating AI systems in the EU, US, China, and Singapore faces fundamentally different regulatory expectations — from mandatory conformity assessments to voluntary self-governance, from algorithm registration to innovation sandboxes. The trend is toward more regulation, not less, and toward more jurisdictions enacting AI-specific rules.
The “Brussels Effect” is real but incomplete. Many multinational organisations are adopting EU AI Act compliance as their global baseline, reasoning that the most stringent standard will satisfy less stringent jurisdictions. This approach works for many requirements but fails where jurisdictions have conflicting demands — for example, China’s content alignment requirements and algorithm filing obligations have no equivalent in EU or US law.
Data flows are the pressure point. Data localisation requirements — rules about where data can be stored and processed — directly affect AI training and inference pipelines. An AI model trained on data from multiple jurisdictions may face simultaneous requirements that the data remain in each jurisdiction. Privacy-enhancing technologies (federated learning, differential privacy, synthetic data) offer partial solutions but add complexity and cost.
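As a concrete illustration of one privacy-enhancing technology named above, the sketch below implements a minimal differentially private count using only the Python standard library. The dataset and epsilon value are hypothetical; a production system would use a vetted library and a managed privacy budget.

```python
import random

def dp_count(values, threshold, epsilon=1.0):
    """Differentially private count of values above a threshold.

    Adds Laplace noise with scale 1/epsilon, calibrated to the count
    query's sensitivity of 1, so no single record materially changes
    the released statistic."""
    true_count = sum(1 for v in values if v > threshold)
    # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical data: release an approximate count without exposing any record.
incomes = [42_000, 58_000, 61_000, 75_000, 90_000]
print(dp_count(incomes, threshold=60_000, epsilon=0.5))
```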
Regulatory horizon scanning is essential. The AI regulatory landscape is changing faster than any other technology governance domain. New regulations, enforcement actions, and judicial decisions emerge monthly. Organisations need a systematic process for tracking these changes and assessing their impact.
Geopolitical risk affects AI strategy. Export controls on semiconductor technology, sanctions regimes, and trade disputes can disrupt AI supply chains — from GPU availability to cloud service access to model provider relationships. AI strategy must account for geopolitical scenarios that could limit access to key resources.
Building a Multi-Jurisdictional Governance Posture
At the foundations level, practitioners should understand the principles of multi-jurisdictional AI governance:
Map your footprint. Know where your AI systems are developed, trained, deployed, and where the data they process originates and resides. This mapping is the prerequisite for all compliance activity.
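In practice, a footprint map can begin as a simple structured inventory. The sketch below (Python; the record fields are hypothetical) captures where one system is developed, trained, and deployed and where its data resides, so that questions such as which systems fall within EU scope become queryable.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemFootprint:
    """Illustrative inventory record; field names are hypothetical."""
    name: str
    developed_in: str      # where the system is built and maintained
    trained_in: list[str]  # where training runs and training data sit
    deployed_in: list[str] # markets where the system is placed in service
    data_residency: dict[str, str] = field(default_factory=dict)  # dataset -> jurisdiction

inventory = [
    AISystemFootprint(
        name="resume-screener",
        developed_in="US",
        trained_in=["US", "EU"],
        deployed_in=["EU", "SG"],
        data_residency={"applicant_data": "EU"},
    ),
]

# Which systems are placed on the EU market (and so within EU AI Act scope)?
eu_scope = [s.name for s in inventory if "EU" in s.deployed_in]
print(eu_scope)  # ['resume-screener']
```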
Understand the philosophies, not just the rules. Rules change; philosophies evolve more slowly. Understanding why a jurisdiction regulates AI the way it does helps practitioners anticipate future regulatory direction and design adaptable governance programmes.
Design for the highest common denominator where possible. Implementing controls that satisfy the most stringent applicable requirement reduces duplication. But be aware of genuine conflicts that prevent a one-size-fits-all approach.
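One way to operationalise this principle is to assign each jurisdiction an ordinal stringency level per control area and design to the maximum. The sketch below is hypothetical: the scores stand in for legal analysis and are not drawn from any statute.

```python
# Hypothetical ordinal stringency scores for one control area; in practice
# these come from legal analysis of each jurisdiction, not from code.
REQUIREMENTS = {
    "EU":        {"human_oversight": 3},  # mandatory, documented, auditable
    "US":        {"human_oversight": 1},  # voluntary guidance (NIST AI RMF)
    "Singapore": {"human_oversight": 2},  # recommended practice
}

def baseline(control: str, jurisdictions: list[str]) -> int:
    """Design to the strictest applicable requirement across jurisdictions."""
    return max(REQUIREMENTS[j][control] for j in jurisdictions)

print(baseline("human_oversight", ["EU", "US", "Singapore"]))  # 3: design to EU level
```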
Invest in regulatory intelligence. Whether through internal expertise, external counsel, or industry association participation, maintaining current understanding of the AI regulatory landscape across operating jurisdictions is a core governance capability.
Engage with regulators. The AI governance landscape is being shaped now. Organisations that engage constructively with regulators — through consultations, sandboxes, and industry dialogue — can contribute to practical, effective regulation while building trusted relationships.
Subsequent articles and advanced certification modules provide detailed guidance on multi-jurisdictional compliance methodology, data localisation impact assessment, sovereign AI readiness assessment, and geopolitical AI strategy for global enterprises.
This article is part of the COMPEL Body of Knowledge v2.5 and supports the AI Transformation Foundations (AITF) certification.