M1.5 Governance, Risk, and Compliance for AI

Understanding the EU AI Act: Foundations for Governance



This article provides a foundations-level introduction to the EU AI Act, explaining what the regulation covers, who it applies to, how its risk categories work, and what timeline organisations face. It is written for practitioners who may be encountering the regulation for the first time and need a clear, accurate orientation before engaging with the more detailed articles at the Practitioner and Governance Professional levels.

Why the EU AI Act Matters Beyond the EU

The significance of the EU AI Act extends far beyond the borders of the European Union. Three factors make this regulation globally relevant.

Extraterritorial Scope

The EU AI Act applies not only to providers and deployers established within the EU, but also to providers and deployers in third countries where the output of the AI system is used within the EU (Article 2(1)). This means that a company headquartered in the United States, Japan, or Singapore must comply with the regulation if the outputs of its AI systems are used within the EU. The practical implication is that any organisation with global operations or global customers needs to assess its EU AI Act exposure regardless of where it is headquartered.

The Brussels Effect

The EU has historically established de facto global standards through the sheer size and regulatory coherence of its single market. The General Data Protection Regulation (GDPR) is the most prominent example: although it technically applies only within the EU, it has become the reference standard for data protection globally. The EU AI Act is widely expected to follow the same trajectory. Organisations that build their AI governance to EU AI Act standards will find themselves well-positioned for compliance with emerging regulations in other jurisdictions, including Canada’s Artificial Intelligence and Data Act (AIDA), Brazil’s AI regulatory framework, and the evolving patchwork of US state-level AI legislation.

Signal to Boards and Investors

The existence of a comprehensive AI regulatory framework with significant penalties changes the risk calculus for boards of directors and investors. AI is no longer an unregulated frontier where governance is optional. The EU AI Act transforms AI governance from a voluntary best practice into a legal obligation with quantified financial consequences. This shift typically accelerates board-level engagement with AI governance — which is precisely the organisational dynamic that the COMPEL framework is designed to support.

What the EU AI Act Covers

The EU AI Act regulates AI systems, which it defines broadly. Under Article 3(1), an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This definition is deliberately broad. It covers:

  • Machine learning systems including deep learning, supervised, unsupervised, and reinforcement learning approaches
  • Logic and knowledge-based systems including expert systems, knowledge graphs, and rule-based reasoning
  • Statistical and Bayesian approaches including search and optimisation methods
  • Generative AI systems including large language models, image generators, and code generators
  • Multi-agent and agentic systems where AI systems orchestrate or delegate tasks autonomously

The definition intentionally avoids being technology-specific, ensuring that the regulation remains relevant as AI technologies evolve. If a system meets the functional definition — machine-based, operates with some autonomy, infers outputs from inputs — it is an AI system under the regulation.

What Is Not Covered

The EU AI Act explicitly excludes several categories from its scope (Article 2(3)-(12)):

  • AI systems developed and used exclusively for military purposes
  • AI systems used by third-country authorities for international law enforcement cooperation (under specific conditions)
  • AI systems used exclusively for scientific research and development (the research exemption)
  • Natural persons using AI systems in the course of purely personal, non-professional activity
  • Free and open-source AI systems (with important exceptions: open-source high-risk systems and open-source GPAI models with systemic risk remain covered)

Understanding these exclusions is important for accurate scoping. The research exemption, in particular, is frequently misunderstood: it applies to research conducted before any AI system is placed on the market or put into service, not to research conducted using deployed AI systems.

The Risk-Based Approach

The architectural principle of the EU AI Act is risk-based regulation. Rather than imposing uniform requirements on all AI systems, the regulation calibrates obligations to the level of risk that a system poses to health, safety, fundamental rights, democracy, the rule of law, and the environment. This approach creates four risk categories with progressively more stringent requirements.

Unacceptable Risk — Prohibited Practices (Article 5)

At the top of the pyramid are AI practices that the EU considers fundamentally incompatible with European values and fundamental rights. These practices are prohibited outright, meaning they cannot be developed, placed on the market, or used within the EU under any circumstances (with very narrow law enforcement exceptions for real-time biometric identification).

The prohibited practices include:

  • Subliminal manipulation: AI systems that deploy subliminal techniques beyond a person’s consciousness to distort behaviour causing significant harm
  • Vulnerability exploitation: AI systems that exploit vulnerabilities due to age, disability, or social/economic situation
  • Social scoring: AI systems, whether operated by public or private actors, that evaluate or classify persons based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment
  • Predictive policing (individual): AI systems that assess individual criminal risk based solely on profiling
  • Untargeted facial scraping: AI systems that create facial recognition databases through untargeted scraping from the internet or CCTV
  • Workplace/education emotion recognition: AI systems that infer emotions in workplaces or education (except for medical/safety purposes)
  • Biometric categorisation of sensitive attributes: AI systems that categorise persons based on biometric data to infer race, political opinions, religion, sexual orientation, etc.
  • Real-time remote biometric identification in public spaces: For law enforcement, except under very narrow, judicially authorised exceptions

For foundations-level practitioners, the key takeaway is straightforward: if your organisation’s AI system falls into any of these categories, it must be discontinued immediately. The prohibited practices provisions took effect on 2 February 2025 — they are already in force.

High Risk (Article 6, Annex I, Annex III)

The high-risk category is the most operationally significant part of the regulation. High-risk AI systems are subject to a comprehensive set of requirements covering their entire lifecycle, from design and development through deployment, monitoring, and eventual decommissioning.

A system is classified as high-risk through one of two pathways:

  1. Product safety pathway (Article 6(1)): The AI system is a product, or a safety component of a product, covered by EU harmonisation legislation listed in Annex I (medical devices, machinery, toys, aviation, vehicles, etc.) AND the product requires third-party conformity assessment.

  2. Annex III pathway (Article 6(2)): The AI system falls into one of eight categories listed in Annex III:

    • Biometric identification and categorisation
    • Critical infrastructure management and operation
    • Education and vocational training (access, assessment, monitoring)
    • Employment and worker management (recruitment, HR decisions, monitoring)
    • Essential services (credit scoring, insurance, public benefits, emergency dispatch)
    • Law enforcement (risk assessment, evidence evaluation, profiling)
    • Migration, asylum, and border control
    • Administration of justice and democratic processes

Each high-risk category is examined in detail in EU AI Act Article 6 High-Risk Classification Deep Dive (Module 3.4, Article 14), which provides the classification decision tree and analysis of edge cases.
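
To make the two pathways concrete, the following is a minimal Python sketch of a preliminary high-risk screen. All names here (AISystem, is_high_risk, the area labels) are illustrative assumptions rather than terms from the regulation, and the check deliberately ignores the Article 6(3) derogations, which require case-by-case legal analysis.

```python
from dataclasses import dataclass
from typing import Optional

# Simplified labels for the eight Annex III areas; the legal text of
# Annex III controls the precise scope of each area.
ANNEX_III_AREAS = {
    "biometrics",
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border",
    "justice_democracy",
}

@dataclass
class AISystem:
    name: str
    # Article 6(1) pathway inputs
    annex_i_product_or_safety_component: bool
    third_party_conformity_required: bool
    # Article 6(2) pathway input: one of ANNEX_III_AREAS, or None
    annex_iii_area: Optional[str] = None

def is_high_risk(system: AISystem) -> bool:
    """Preliminary screen mirroring the two Article 6 pathways.

    Triage only: Article 6(3) derogations and edge cases still
    require legal review (see the Module 3.4 deep dive).
    """
    # Pathway 1 (Article 6(1)): Annex I product safety
    if (system.annex_i_product_or_safety_component
            and system.third_party_conformity_required):
        return True
    # Pathway 2 (Article 6(2)): listed Annex III area
    return system.annex_iii_area in ANNEX_III_AREAS

# A recruitment screening tool falls under the employment area:
print(is_high_risk(AISystem(
    name="CV screening assistant",
    annex_i_product_or_safety_component=False,
    third_party_conformity_required=False,
    annex_iii_area="employment",
)))  # True
```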

Limited Risk — Transparency Obligations (Article 50)

Limited-risk AI systems are subject only to specific transparency obligations. These obligations exist to ensure that persons are not deceived about their interaction with AI or about the nature of AI-generated content.

The transparency obligations apply to:

  • Chatbots and virtual assistants: Must disclose that the user is interacting with an AI system
  • Emotion recognition and biometric categorisation: Must inform persons who are subject to these systems
  • Deepfake generators: Must disclose that content has been artificially generated or manipulated
  • AI-generated content: Must be marked in a machine-readable format to enable detection

Minimal Risk

All AI systems that do not fall into the above categories are classified as minimal risk. These systems are not subject to mandatory obligations under the EU AI Act, although providers are encouraged to voluntarily adopt codes of conduct (Article 95) that apply some of the high-risk requirements, particularly around environmental sustainability, accessibility, and diversity.

Key Roles Under the EU AI Act

The regulation defines distinct roles with different obligations. Understanding which role your organisation occupies is essential for determining your compliance obligations.

Provider (Article 3(3))

A provider is any natural or legal person that develops an AI system or has it developed and places it on the market or puts it into service under its own name or trademark. Providers bear the primary compliance burden for high-risk systems, including risk management, technical documentation, conformity assessment, and registration.

Deployer (Article 3(4))

A deployer is any natural or legal person that uses an AI system under its authority, except where the system is used in a personal, non-professional activity. Deployers have their own set of obligations, including using the system in accordance with the provider’s instructions, implementing human oversight measures, and monitoring the system’s operation. If the deployer is a public body or institution, additional obligations apply, including fundamental rights impact assessments.

Importer and Distributor

Importers place AI systems from third-country providers on the EU market; distributors make systems available on that market. Both have obligations to verify that the provider has completed the necessary conformity procedures. These roles are particularly relevant for organisations that procure AI systems from non-EU providers.

Authorised Representative

Non-EU providers may appoint an authorised representative established in the EU to act on their behalf for regulatory purposes.

Key Dates and Deadlines

The EU AI Act entered into force on 1 August 2024 and applies in stages:

  • 2 February 2025: Prohibited AI practices (Article 5) and AI literacy obligations (Article 4)
  • 2 August 2025: GPAI model obligations (Articles 53-56), governance and penalties provisions
  • 2 August 2026: Transparency obligations (Article 50) and high-risk AI system obligations for Annex III systems (Articles 6(2), 8-15, 16-17, 26-27)
  • 2 August 2027: High-risk AI system obligations for Annex I product safety systems (Article 6(1))

The phased timeline is deliberately designed to give organisations time to prepare, with the most fundamentally objectionable practices addressed first and the operationally complex high-risk requirements given the longest preparation period.
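
As a worked illustration of the phased timeline, the sketch below encodes the application dates from Article 113 and reports which obligation groups already apply on a given date. The group labels and the function name are illustrative assumptions, not terminology from the regulation.

```python
from datetime import date

# Application dates under Article 113, keyed by obligation group.
# The labels are shorthand; the regulation defines the exact scope.
APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),   # Article 5
    "ai_literacy": date(2025, 2, 2),            # Article 4
    "gpai_models": date(2025, 8, 2),            # Articles 53-56
    "transparency": date(2026, 8, 2),           # Article 50
    "high_risk_annex_iii": date(2026, 8, 2),    # Article 6(2)
    "high_risk_annex_i": date(2027, 8, 2),      # Article 6(1)
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the obligation groups already applicable on a given date."""
    return [group for group, start in APPLICATION_DATES.items()
            if start <= as_of]

print(obligations_in_force(date(2026, 1, 1)))
# ['prohibited_practices', 'ai_literacy', 'gpai_models']
```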

How the EU AI Act Relates to the COMPEL Framework

The COMPEL framework was designed for disciplined AI transformation — and regulatory compliance is one of the most powerful catalysts for that discipline. Each COMPEL stage maps naturally to EU AI Act compliance activities:

  • Calibrate: AI system inventory, risk classification, gap analysis against regulatory requirements
  • Organize: Governance committee establishment, role assignment, training programme development
  • Model: Conformity assessment pathway design, documentation templates, quality management system design
  • Produce: Technical documentation creation, risk management implementation, conformity assessment execution
  • Evaluate: Mock inspections, validation of compliance measures, monitoring system verification
  • Learn: Lessons learned, post-market monitoring, continuous compliance improvement

This alignment is not coincidental. The COMPEL framework embodies the same principles that underpin the EU AI Act: risk-based governance, structured lifecycle management, evidence-based decision-making, and continuous improvement. Organisations that have already adopted COMPEL will find that their existing governance structures provide a substantial foundation for EU AI Act compliance.

How the EU AI Act Relates to Other Frameworks

The EU AI Act does not exist in isolation. It interacts with and complements several other regulatory and standards frameworks:

GDPR (Regulation (EU) 2016/679)

The EU AI Act explicitly recognises the continued application of the GDPR. Where AI systems process personal data, both regulations apply simultaneously. The data governance requirements of Article 10 are designed to complement GDPR principles, and the fundamental rights impact assessment of Article 27 overlaps with GDPR’s Data Protection Impact Assessment (DPIA).

ISO/IEC 42001

The ISO standard for AI management systems provides a voluntary, certifiable framework that aligns closely with the EU AI Act’s quality management system requirements (Article 17). Organisations with ISO 42001 certification will find that much of the groundwork for compliance is already in place.

NIST AI Risk Management Framework

The US NIST AI RMF shares the risk-based philosophy of the EU AI Act but takes a voluntary, guidance-based approach rather than a mandatory regulatory one. The two frameworks are complementary: NIST AI RMF provides detailed implementation guidance that can support EU AI Act compliance activities.

Sector-Specific Regulation

The EU AI Act’s product safety pathway (Article 6(1)) explicitly integrates with existing sector-specific regulation through Annex I. AI systems in medical devices, aviation, automotive, and other regulated sectors will face requirements from both the EU AI Act and the applicable sectoral legislation. The conformity assessment procedures are designed to align with existing sectoral procedures to minimise duplication.

Getting Started: First Steps for Organisations

For organisations beginning their EU AI Act journey, the following sequence of actions provides a structured starting point:

  1. Scope Assessment: Determine whether your organisation falls within the EU AI Act’s territorial and material scope. Does your organisation develop, deploy, import, or distribute AI systems? Do any of those systems affect persons within the EU?

  2. AI System Inventory: Catalogue all AI systems in use across the organisation, including procured SaaS products with AI capabilities, internally developed models, and any general-purpose AI models used as components.

  3. Preliminary Risk Classification: For each inventoried system, conduct a preliminary assessment against the prohibited practices (Article 5) and high-risk categories (Article 6, Annex III). This does not need to be definitive at this stage — it is about identifying systems that require deeper analysis.

  4. Role Determination: For each AI system, determine your organisation’s role (provider, deployer, importer, distributor) as this determines which obligations apply.

  5. Timeline Alignment: Map your AI systems against the compliance deadlines to understand which obligations are already in force and which are approaching.

These first steps align directly with the Calibrate stage of the COMPEL framework and are explored in much greater operational detail in EU AI Act Compliance for Practitioners (Module 2.6, Article 11).
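
The five steps lend themselves to a simple inventory record. The sketch below shows one possible shape for such a record, using illustrative field names and example values rather than anything mandated by the regulation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Role(Enum):
    """Article 3 roles relevant to step 4 (role determination)."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

class RiskCategory(Enum):
    """Preliminary classification for step 3."""
    PROHIBITED = "prohibited"      # Article 5
    HIGH = "high"                  # Article 6 / Annex III
    LIMITED = "limited"            # Article 50 transparency
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"  # flagged for deeper analysis

@dataclass
class InventoryEntry:
    system_name: str
    affects_eu_persons: bool            # step 1: scope assessment
    vendor: Optional[str] = None        # None for internally built systems
    roles: set[Role] = field(default_factory=set)
    risk: RiskCategory = RiskCategory.UNCLASSIFIED

# Example record for a procured recruitment tool (illustrative values):
entry = InventoryEntry(
    system_name="CV screening assistant",
    affects_eu_persons=True,
    vendor="ExampleVendor Ltd",
    roles={Role.DEPLOYER},
    risk=RiskCategory.HIGH,  # employment is an Annex III area
)
print(entry.risk.value)  # "high"
```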

Common Misconceptions

Several misconceptions about the EU AI Act circulate in organisational discussions. Addressing them early prevents costly misunderstandings.

“We are not in the EU, so it does not apply to us.” The extraterritorial scope means that any organisation whose AI system outputs are used within the EU must comply, regardless of where the organisation is based.

“Our AI systems are just analytics — they are not covered.” The definition of an AI system is broad. If the system infers outputs from inputs and operates with some degree of autonomy, it is likely covered. Simple rule-based automation and traditional statistical analysis may fall outside the definition, but any system using machine learning almost certainly falls within it.

“Open-source AI is exempt.” Open-source AI systems enjoy a limited exemption, but high-risk open-source systems and open-source GPAI models with systemic risk remain fully covered.

“We just use AI, we do not develop it.” Deployers have their own set of obligations under Article 26, including using systems in accordance with instructions, implementing human oversight, and monitoring operations. Procuring an AI system does not eliminate regulatory responsibility.

“We have until 2027 to worry about this.” The prohibited practices provisions took effect on 2 February 2025, and GPAI model obligations on 2 August 2025. Transparency and Annex III high-risk obligations apply from 2 August 2026; only Annex I product safety systems have until 2027. Most organisations face deadlines that are already in force or imminent.

Moving Forward

This article has provided a foundations-level orientation to the EU AI Act. The regulation is complex, but its underlying logic is straightforward: higher-risk AI systems face more stringent requirements, and organisations must understand their systems, classify their risks, and implement proportionate governance measures.

The subsequent articles in this module and at higher certification levels provide progressively more detailed and operational guidance:

  • EU AI Act Risk Categories and Your Organization (Article 14, this module) guides self-assessment of organisational exposure
  • EU AI Act Compliance for Practitioners (Module 2.6, Article 11) provides hands-on implementation guidance
  • EU AI Act Article 6 High-Risk Classification Deep Dive (Module 3.4, Article 14) provides the detailed classification decision tree
  • 100-Day EU AI Act Readiness Using COMPEL (Module 3.4, Article 17) provides the structured implementation plan

The EU AI Act is not a threat to be feared — it is a framework to be leveraged. Organisations that approach compliance as a governance improvement opportunity rather than a bureaucratic burden will find that the regulation accelerates exactly the kind of structured, risk-aware, evidence-based AI governance that creates sustainable competitive advantage.