COMPEL Glossary / eu-ai-act
EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for regulating artificial intelligence, adopted by the European Parliament in March 2024 and entering into force on 1 August 2024.
What this means in practice
It establishes a risk-based classification system for AI systems — unacceptable risk (banned), high-risk (regulated), limited risk (transparency obligations), and minimal risk (no specific obligations). High-risk AI systems — including those used in employment decisions, credit scoring, critical infrastructure, and biometric identification — face mandatory conformity assessments, technical documentation requirements, human oversight obligations, and post-market monitoring.
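The four-tier structure can be sketched as a simple lookup. The tier names come from the Act; the use-case-to-tier mapping below is illustrative only (real classification depends on Annex III and legal analysis, not a dictionary):

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping only -- actual classification requires assessment
# against Annex III and the prohibited-practices list, not a lookup table.
USE_CASE_TIERS = {
    "social_scoring_by_public_authorities": RiskTier.UNACCEPTABLE,
    "employment_decisions": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative tier; unknown systems need a real risk assessment."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"Unknown use case {use_case!r}: requires legal risk assessment")
    return tier
```

Note the default behavior: an unmapped system raises rather than silently landing in a low tier, mirroring the point below about not assuming minimal risk without assessment.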
Why it matters
Any organization operating in the EU, selling AI systems into the EU market, or deploying EU-regulated AI use cases faces legal compliance obligations under the EU AI Act, with phased enforcement starting in 2025. Non-compliance carries fines of up to €35 million or 7% of global annual turnover, whichever is higher. Early preparation is significantly less costly than reactive compliance. The Act's extraterritorial scope means that non-EU organizations whose AI systems affect EU residents are also subject to its requirements.
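The maximum penalty for the most serious infringements is the greater of the two figures, which a one-line helper makes concrete:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on EU AI Act fines for the most serious infringements:
    EUR 35 million or 7% of global annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)
```

For a company with €1 billion in turnover, the 7% figure (€70 million) governs; below €500 million in turnover, the €35 million floor applies.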
How COMPEL uses it
The COMPEL Calibrate stage includes EU AI Act exposure mapping — identifying which AI systems in the organization's inventory fall under high-risk categories. The Model stage designs the technical documentation and human oversight mechanisms required for high-risk systems. COMPEL's Evaluate stage generates the conformity assessment evidence required for CE marking under the Act. The COMPEL standards mapping tool provides article-level traceability between COMPEL governance domains and EU AI Act obligations.
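Article-level traceability of the kind the mapping tool provides can be pictured as a domain-to-obligation index. The structure below is a hypothetical sketch, not COMPEL's actual data model; the article references (technical documentation under Article 11, human oversight under Article 14, post-market monitoring under Article 72) follow the final text of the regulation:

```python
# Hypothetical traceability index: governance domain -> EU AI Act obligations.
# Domain names are invented for this sketch; article numbers follow
# Regulation (EU) 2024/1689.
TRACEABILITY = {
    "technical_documentation": ["Article 11", "Annex IV"],
    "human_oversight": ["Article 14"],
    "post_market_monitoring": ["Article 72"],
}

def obligations_for(domain: str) -> list[str]:
    """Return the Act provisions mapped to a governance domain, if any."""
    return TRACEABILITY.get(domain, [])
```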
Common mistakes
- Assuming the EU AI Act applies only to EU-based organizations — it has extraterritorial scope.
- Classifying all AI systems as minimal risk without conducting proper risk assessment.
- Treating compliance as a one-time documentation exercise rather than ongoing post-market monitoring.
- Failing to designate roles for AI system provider, deployer, and distributor obligations.
- Waiting for enforcement deadlines rather than beginning compliance preparation during the transition period.
See also
- ISO 42001 — ISO/IEC 42001:2023 is the first international management system standard for artificial intelligence, published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
- NIST AI RMF — The NIST AI Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology in January 2023, is a voluntary framework for managing risks associated with the design, development, deployment, and evaluation of AI products and services.
- AI Governance — AI governance is the system of policies, roles, processes, oversight bodies, and controls that an organization uses to manage AI systems responsibly across their full lifecycle.
- Responsible AI — Responsible AI is the practice of designing, developing, and deploying AI systems in ways that are ethical, transparent, fair, accountable, and safe — and that actively avoid creating harm to individuals, groups, or society.
References
- EU Regulation 2024/1689 — Regulation laying down harmonised rules on artificial intelligence (Regulation)
- European Commission — EU AI Act Implementation Guidelines (Guidance)
- ISO/IEC 42001:2023 — Harmonized standard pathway for EU AI Act (Standard)
Frequently asked questions
When does the EU AI Act take effect?
The EU AI Act entered into force in August 2024 with phased enforcement: prohibitions on unacceptable-risk AI practices apply from February 2025, general-purpose AI model obligations from August 2025, most high-risk system obligations from August 2026, and obligations for high-risk AI embedded in regulated products by August 2027. Organizations should be preparing compliance programs now.
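The phasing can be checked with a small date lookup. This covers only the two milestones from the answer above whose application dates are commonly cited (2 February 2025 and 2 August 2026); it is an illustration, not legal advice:

```python
from datetime import date

# Illustrative subset of EU AI Act enforcement milestones.
MILESTONES = {
    date(2025, 2, 2): "prohibited AI practices banned",
    date(2026, 8, 2): "high-risk AI system obligations apply",
}

def obligations_in_force(on: date) -> list[str]:
    """Return the milestone obligations already applicable on a given date."""
    return [label for start, label in sorted(MILESTONES.items()) if on >= start]
```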
What AI systems are classified as high-risk under the EU AI Act?
High-risk classifications include AI systems used for biometric identification, critical infrastructure management, educational scoring, employment decisions, credit assessment, law enforcement, migration management, and the administration of justice. The Act also covers safety components of regulated products.
Does the EU AI Act affect organizations outside the EU?
Yes. The Act has extraterritorial scope. Any organization that places AI systems on the EU market, deploys AI systems whose outputs are used in the EU, or otherwise affects EU residents is subject to the regulation, regardless of where the organization is headquartered.