AITF · M1.5-Art14 · v1.0 · Reviewed 2026-04-06 · Open Access
M1.5 Governance, Risk, and Compliance for AI
AITF · Foundations

EU AI Act Risk Categories and Your Organization


14 min read · Article 14 of 17

This article equips foundations-level practitioners with the ability to conduct an initial organisational exposure assessment, understand each risk category through real-world examples, recognise common classification pitfalls, and identify the first practical steps toward compliance.

The Four Risk Categories in Practice

Unacceptable Risk: What Gets Banned

The prohibited practices in Article 5 represent the EU’s bright lines — AI applications that are considered so fundamentally incompatible with European values that no level of governance or oversight can make them acceptable.

Recognising Prohibited Practices in Your Organisation

The most common way organisations encounter prohibited practices is not through intentional deployment of banned systems, but through well-intentioned applications that inadvertently cross a line. Consider these scenarios:

  • A retail company deploys an AI system that analyses customer facial expressions during in-store interactions to adjust sales approaches. If the system infers emotions, it may fall under the prohibition on emotion recognition in the workplace (which permits only medical and safety exceptions), depending on whether the in-store environment constitutes a “workplace” for the employees exposed to it.

  • A financial services firm develops an AI model that uses social media behaviour patterns as input features for creditworthiness assessment. If the model effectively evaluates persons based on social behaviour and produces detrimental treatment, it risks triggering the social scoring prohibition.

  • A human resources technology company builds an AI tool that analyses employee communications to predict attrition risk, using behavioural signals that may function as emotion inference in the workplace.

None of these organisations set out to build prohibited systems. But the prohibition is defined by the function and effect of the system, not by the intent of the developer. This is why systematic screening against Article 5 is the essential first step in any compliance programme.

Self-Assessment Questions for Prohibited Practices:

  1. Does any AI system in our portfolio analyse or infer the emotional state of employees, job applicants, or students?
  2. Do any AI systems use behavioural data to score, rank, or categorise individuals in ways that could affect their access to services?
  3. Do any AI systems collect or process biometric data (facial images, voice patterns, gait analysis) without specific, targeted justification?
  4. Are any AI systems designed to influence behaviour through techniques that operate below conscious awareness?
  5. Do any AI systems assess individual risk of criminal behaviour or recidivism based on personal characteristics rather than verifiable facts linked to criminal activity?

If the answer to any of these questions is “possibly” or “yes,” the system requires immediate detailed assessment against Article 5 by qualified legal counsel.

High Risk: The Operational Core of the Regulation

The high-risk category is where the EU AI Act has its most significant operational impact. High-risk classification triggers a comprehensive set of requirements that touch every aspect of the AI system lifecycle.

Understanding Annex III Through Organisational Functions

Rather than thinking about Annex III as an abstract list of categories, it is more practical to map it to the functions within a typical organisation:

| Organisational Function | Annex III Category | Example Systems |
| --- | --- | --- |
| Human Resources | Category 4: Employment | Resume screening, candidate ranking, performance evaluation, promotion recommendation, workforce planning, automated scheduling based on individual assessments |
| Customer Service | Category 5: Essential Services | Credit scoring, insurance risk assessment, eligibility determination for services |
| Facilities / Operations | Category 2: Critical Infrastructure | Building management AI controlling HVAC in critical facilities, predictive maintenance for critical systems |
| Security | Category 1: Biometrics | Facial recognition access control, biometric time and attendance |
| Legal / Compliance | Category 8: Justice | AI-assisted contract analysis tools used in dispute resolution |
| Learning & Development | Category 3: Education | AI-powered learning platforms that determine course assignments or evaluate learning outcomes (if used within accredited educational contexts) |
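
For organisations that keep a structured AI inventory, the same mapping can be held as data so that each system's owning function points to the Annex III category to screen first. A minimal Python sketch, assuming a plain dictionary keyed by business function; the structure and names are illustrative, not part of the regulation or of the COMPEL platform.

```python
# Illustrative lookup from organisational function to the Annex III category
# most often triggered by that function (per the table above). Hypothetical
# structure for inventory tooling; not a legal classification.
ANNEX_III_BY_FUNCTION: dict[str, tuple[str, list[str]]] = {
    "Human Resources": ("Category 4: Employment",
                        ["resume screening", "candidate ranking", "performance evaluation"]),
    "Customer Service": ("Category 5: Essential Services",
                         ["credit scoring", "insurance risk assessment"]),
    "Facilities / Operations": ("Category 2: Critical Infrastructure",
                                ["building management AI", "predictive maintenance"]),
    "Security": ("Category 1: Biometrics",
                 ["facial recognition access control", "biometric time and attendance"]),
    "Legal / Compliance": ("Category 8: Justice",
                           ["AI-assisted contract analysis in dispute resolution"]),
    "Learning & Development": ("Category 3: Education",
                               ["learning platforms that evaluate learning outcomes"]),
}

def screening_hint(owning_function: str) -> tuple[str, list[str]] | None:
    """Return the Annex III category to check first for a given business function."""
    return ANNEX_III_BY_FUNCTION.get(owning_function)
```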

The Article 6(3) Exception

An important nuance: Article 6(3) provides that an AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making. This exception applies when the AI system:

  • Performs a narrow procedural task
  • Improves the result of a previously completed human activity
  • Detects decision-making patterns without replacing or influencing human assessment
  • Performs a preparatory task for an assessment that is relevant for the purpose of the use cases listed in Annex III

However, the exception does not apply if the AI system performs profiling of natural persons. Providers who wish to rely on this exception must document their assessment and make it available to competent authorities.
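
The interaction between the four criteria and the profiling carve-out can be captured in a short screening helper. The following Python sketch assumes the assessment has already been recorded as booleans; the field names are illustrative, and a positive result is only a triage signal that must still be documented and reviewed by qualified counsel.

```python
from dataclasses import dataclass

@dataclass
class Article63Assessment:
    # Illustrative fields mirroring the Article 6(3) criteria listed above.
    narrow_procedural_task: bool
    improves_completed_human_activity: bool
    detects_patterns_without_influencing: bool
    preparatory_task_only: bool
    performs_profiling: bool  # profiling of natural persons removes the exception

def exception_may_apply(a: Article63Assessment) -> bool:
    """Triage signal only: a True result still requires a documented assessment."""
    if a.performs_profiling:
        return False
    return any([
        a.narrow_procedural_task,
        a.improves_completed_human_activity,
        a.detects_patterns_without_influencing,
        a.preparatory_task_only,
    ])
```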

Self-Assessment Questions for High-Risk Classification:

  1. Do any AI systems in our portfolio make or materially influence decisions about individuals’ access to education, employment, financial services, public benefits, or healthcare?
  2. Are any AI systems deployed as safety components in infrastructure that, if it fails, could endanger health or safety?
  3. Do any AI systems process biometric data for identification or categorisation purposes?
  4. Are any AI systems embedded in products covered by EU product safety legislation (medical devices, machinery, vehicles, etc.)?
  5. Do any AI systems influence law enforcement, migration, or judicial decisions?
  6. For systems that appear to be in Annex III categories, can we credibly demonstrate they meet the Article 6(3) exception criteria?

Limited Risk: The Transparency Imperative

Limited-risk classification applies to AI systems that interact with persons or generate content in ways that could be mistaken for human activity or real content. The obligations are narrower than for high-risk systems but are nonetheless legally binding.

Common Limited-Risk Systems in Organisations:

  • Customer-facing chatbots and virtual assistants: Any AI system that interacts with customers through conversation must disclose that the interaction involves AI. This applies to website chatbots, phone-based virtual agents, and messaging-based support bots.

  • AI-generated marketing content: If your marketing team uses AI to generate text, images, or video for campaigns, the generated content must be marked in a machine-readable format. This does not necessarily require visible labelling in all cases, but the content must carry machine-readable metadata indicating AI generation.

  • Synthetic media and deepfakes: AI systems that generate or manipulate images, audio, or video to resemble real persons or events must disclose the artificial nature of the content.

  • AI-powered email or communication tools: Systems that draft, suggest, or auto-complete communications may trigger transparency obligations if they could lead recipients to believe they are interacting with a human.

Self-Assessment Questions for Limited Risk:

  1. Do any of our AI systems interact directly with natural persons (customers, employees, partners) in a conversational or interactive manner?
  2. Do we use AI to generate or substantially modify text, images, audio, or video content?
  3. Could any persons reasonably mistake AI-generated outputs for human-created content or human interaction?
  4. Are emotion recognition or biometric categorisation systems used in contexts not classified as high-risk?

Minimal Risk: Voluntary but Not Irrelevant

AI systems classified as minimal risk are not subject to mandatory obligations, but this does not mean they should be ignored in your governance programme. There are several reasons to maintain governance over minimal-risk systems:

Reclassification Risk: A system classified as minimal risk today may be reclassified if its use case changes. An internal forecasting tool that is later used to make decisions about employee task allocation could shift into Category 4 of Annex III.

Voluntary Codes of Conduct: Article 95 encourages providers of minimal-risk systems to adopt voluntary codes of conduct that apply some high-risk requirements, particularly around environmental sustainability, diversity and inclusion, and accessibility.

Organisational Consistency: Maintaining baseline governance across all AI systems — including minimal-risk ones — ensures consistency and makes it easier to comply if reclassification occurs.

Reputational Risk: Even minimal-risk systems can create reputational harm if they produce biased, inaccurate, or otherwise problematic outputs. Governance addresses organisational risk beyond regulatory compliance.

Common Classification Mistakes

Mistake 1: Confusing the Provider’s Intended Purpose with Actual Use

The EU AI Act classifies systems based on their intended purpose (as defined by the provider) but also considers reasonably foreseeable misuse. A provider who markets an AI tool as “general workplace analytics” but whose tool is foreseeably used for individual employee performance scoring cannot avoid Category 4 classification by claiming the tool was not intended for that purpose.

Mistake 2: Assuming Procurement Eliminates Compliance Obligations

Organisations that procure rather than develop AI systems are deployers under the regulation. Deployers have their own set of obligations (Article 26), and procuring a certified high-risk system does not eliminate the deployer’s responsibility to use it in accordance with the provider’s instructions, implement human oversight, and monitor operations.

Mistake 3: Treating Classification as a One-Time Exercise

Risk classification must be reassessed when the AI system’s purpose, scope, affected population, or operating context changes. An AI system that was minimal risk in a pilot environment may become high-risk when deployed at scale or applied to a different use case.

Mistake 4: Over-Relying on the Article 6(3) Exception

The narrow procedural task exception is not a blanket escape from high-risk classification. The burden of proof is on the provider to demonstrate that the exception applies, and the exception explicitly does not cover systems that perform profiling. Organisations should not plan their compliance strategy around an untested exception.

Mistake 5: Ignoring AI Systems Embedded in Third-Party Software

Many organisations use AI systems without realising it — embedded in CRM platforms, productivity suites, HR tools, and business intelligence software. These systems are within the scope of the EU AI Act, and the organisation using them is a deployer with corresponding obligations.

Organisational Exposure Assessment

To assess your organisation’s overall EU AI Act exposure, work through the following structured assessment. This is not a substitute for detailed legal analysis but provides a directional view of compliance scope and priority.

Step 1: Inventory Completeness Check

Before you can classify, you must inventory. Common blind spots include:

  • AI features embedded in enterprise SaaS platforms (CRM, ERP, HCM, ITSM)
  • AI-powered analytics and business intelligence tools
  • Chatbots and virtual assistants across customer, HR, and IT service channels
  • AI-based cybersecurity tools (threat detection, anomaly detection)
  • AI-driven marketing tools (personalisation, content generation, programmatic advertising)
  • Robotic Process Automation (RPA) with machine learning components
  • AI systems used by vendors or contractors on the organisation’s behalf

Step 2: Classification Triage

For each inventoried system, apply the classification in this order:

  1. Screen against Article 5 prohibited practices — any match requires immediate legal assessment
  2. Check against Annex I product safety legislation — any match triggers Article 6(1)
  3. Check against Annex III categories — any match triggers Article 6(2) (subject to Article 6(3) exception assessment)
  4. Screen for transparency triggers under Article 50 — any match triggers limited-risk obligations
  5. Remaining systems are minimal risk
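
The same ordering can be expressed as a short triage function. This Python sketch assumes the screening answers have already been captured as boolean flags from the inventory questionnaire; the flag names and labels are illustrative, and the output is a starting point for detailed assessment, not a final classification.

```python
from enum import Enum

class RiskClass(Enum):
    PROHIBITED = "prohibited (Article 5)"
    HIGH_RISK_ANNEX_I = "high risk (Article 6(1), Annex I)"
    HIGH_RISK_ANNEX_III = "high risk (Article 6(2), Annex III)"
    LIMITED = "limited risk (Article 50 transparency)"
    MINIMAL = "minimal risk"

def triage(system: dict) -> RiskClass:
    """Apply the screening steps in the order listed above (illustrative flags)."""
    if system.get("matches_article_5"):
        return RiskClass.PROHIBITED            # step 1: immediate legal assessment
    if system.get("annex_i_safety_component_with_third_party_assessment"):
        return RiskClass.HIGH_RISK_ANNEX_I     # step 2
    if system.get("annex_iii_category") and not system.get("article_6_3_exception_documented"):
        return RiskClass.HIGH_RISK_ANNEX_III   # step 3
    if system.get("article_50_transparency_trigger"):
        return RiskClass.LIMITED               # step 4
    return RiskClass.MINIMAL                   # step 5
```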

Step 3: Deadline Mapping

Map each classified system to the applicable compliance deadline:

  • Prohibited: Already in force (since 2 February 2025)
  • GPAI obligations: 2 August 2025
  • Transparency obligations (Article 50): 2 August 2026
  • Annex III high-risk: 2 August 2026
  • Annex I high-risk: 2 August 2027
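
Where the inventory is tooled, these dates can sit alongside each classified system so that remaining lead time stays visible. A minimal sketch, using illustrative classification labels; the dates are those listed above.

```python
from datetime import date

# Illustrative classification labels mapped to the deadlines listed above.
COMPLIANCE_DEADLINES = {
    "prohibited": date(2025, 2, 2),          # already in force
    "gpai": date(2025, 8, 2),
    "transparency": date(2026, 8, 2),
    "annex_iii_high_risk": date(2026, 8, 2),
    "annex_i_high_risk": date(2027, 8, 2),
}

def days_remaining(classification: str, today: date | None = None) -> int:
    """Days until the applicable deadline; negative values mean it has passed."""
    today = today or date.today()
    return (COMPLIANCE_DEADLINES[classification] - today).days
```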

Step 4: Gap Prioritisation

For each system with an approaching deadline, assess the current state of compliance:

  • What documentation exists? How does it compare to Annex IV requirements?
  • Is there a risk management system in place? Does it meet Article 9?
  • Are human oversight mechanisms implemented and operational?
  • Are logging capabilities sufficient to meet Article 12?
  • Has the deployer received adequate instructions for use?

The gap between current state and required state, multiplied by the urgency of the deadline, determines compliance priority.
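
One way to operationalise "gap multiplied by urgency" is a simple numeric score. The weighting below is illustrative rather than a COMPEL formula; it only makes the prioritisation logic explicit.

```python
from datetime import date

def compliance_priority(gap_score: float, deadline: date, today: date | None = None) -> float:
    """Illustrative priority score: gap (0 = fully compliant, 1 = nothing in place)
    weighted by deadline urgency. Larger values mean higher remediation priority."""
    today = today or date.today()
    days_left = max((deadline - today).days, 1)   # past-due deadlines get maximum urgency
    urgency = 365.0 / days_left                   # roughly 1.0 a year out, grows as the date nears
    return gap_score * urgency
```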

The Classification Decision Tree

For practitioners ready to apply the classification systematically, the decision tree follows this logic:

START

  ├─ Is the system a GPAI model? → GPAI branch
  │   ├─ Training compute > 10^25 FLOPs or Commission designation? → GPAI Systemic Risk
  │   └─ Below threshold? → GPAI Standard

  └─ Is it an AI system for a specific purpose? → AI System branch

      ├─ Does it fall under Article 5 prohibited practices? → PROHIBITED

      ├─ Is it a product/safety component under Annex I
      │   requiring third-party conformity assessment? → HIGH RISK

      ├─ Does it fall into an Annex III category? → Likely HIGH RISK
      │   └─ Does Article 6(3) exception apply? → If yes, NOT high risk

      ├─ Does it trigger Article 50 transparency obligations? → LIMITED RISK

      └─ None of the above → MINIMAL RISK
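
The GPAI branch turns on the Article 51 presumption threshold of 10^25 training FLOPs (or a Commission designation). A minimal sketch of that single check, assuming a self-reported compute estimate; it is not a substitute for the formal assessment.

```python
GPAI_SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51 presumption threshold

def gpai_branch(training_flops: float, commission_designation: bool = False) -> str:
    """Follow the GPAI branch of the tree above, using illustrative labels."""
    if commission_designation or training_flops > GPAI_SYSTEMIC_RISK_FLOP_THRESHOLD:
        return "GPAI with systemic risk"
    return "GPAI (standard obligations)"
```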

The detailed interactive decision tree with specific questions for each node is available in EU AI Act Article 6 High-Risk Classification Deep Dive (Module 3.4, Article 14) and is implemented in the COMPEL platform’s EU AI Act Compliance Accelerator.

First Steps for Your Organisation

Immediate Actions (This Quarter)

  1. Appoint a coordinator: Designate an individual or small team responsible for coordinating the EU AI Act compliance assessment. This does not need to be a new hire — it can be an existing governance, compliance, or risk management professional.

  2. Conduct an AI system inventory: Even a preliminary inventory provides visibility. Start with known AI systems and expand through departmental surveys.

  3. Screen for prohibited practices: Apply the Article 5 screening to all known systems. This is the most time-critical assessment because the prohibition is already in force.

  4. Brief leadership: Ensure executive leadership and, where appropriate, the board understand the regulation’s scope, timeline, and potential financial exposure. The penalty structure (up to 7% of global turnover for prohibited practices) commands attention.

Near-Term Actions (Next Two Quarters)

  1. Complete classification: Apply the full classification framework to all inventoried systems, with particular attention to Annex III categories.

  2. Assess GPAI exposure: Determine whether you develop or deploy GPAI models, and assess obligations accordingly.

  3. Conduct gap analysis: For high-risk systems, assess the gap between current governance practices and EU AI Act requirements.

  4. Engage legal counsel: For complex classification questions, edge cases, and systems that span multiple categories, engage legal counsel with specific EU AI Act expertise.

Medium-Term Actions (Within 12 Months)

  1. Begin remediation: Address identified gaps in documentation, risk management, human oversight, and other high-risk system requirements.

  2. Establish governance structures: Create or extend existing governance forums to address EU AI Act compliance on an ongoing basis.

  3. Plan for conformity assessment: For high-risk systems approaching the August 2026 deadline, begin conformity assessment preparation, including notified body engagement where required.

Connecting to Your COMPEL Journey

The EU AI Act risk classification exercise is fundamentally a Calibrate-stage activity in the COMPEL framework. It establishes the baseline: what AI systems exist, what risks they carry, and what governance measures are required. This baseline then flows into the Organize stage (establishing governance structures), the Model stage (designing compliance processes), the Produce stage (implementing documentation and controls), the Evaluate stage (validating compliance), and the Learn stage (sustaining and improving governance over time).

Practitioners who have completed the COMPEL Foundations certification already possess the conceptual tools needed to engage with EU AI Act compliance. The regulation does not require a fundamentally different approach to governance — it provides a specific, legally binding instantiation of the governance principles that COMPEL teaches.

The more detailed, operational articles at the Practitioner and Governance Professional levels — particularly Building EU AI Act Evidence Portfolios (Module 2.6, Article 12), Conformity Assessment Pathways (Module 3.4, Article 15), and the 100-Day EU AI Act Readiness Using COMPEL (Module 3.4, Article 17) — provide the implementation guidance needed to translate classification results into concrete compliance action.

Risk classification is where compliance begins. Everything that follows depends on getting it right.