This article provides a governance-professional-level analysis of the Article 6 classification framework. It examines each Annex III category in depth, analyses edge cases and classification challenges, unpacks the Article 6(3) exception, and presents case studies that illustrate how classification decisions play out in practice.
The Two Pathways to High-Risk Classification
Article 6 establishes two independent pathways through which an AI system becomes classified as high-risk.
Pathway 1: Product Safety (Article 6(1))
An AI system is high-risk if two conditions are simultaneously met:
- The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, that is covered by Union harmonisation legislation listed in Annex I
- The product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex I
Annex I legislation includes:
| Legislation | Relevant Products |
|---|---|
| Machinery Regulation (EU) 2023/1230 | Industrial robots, autonomous machinery, safety systems |
| Medical Devices Regulation (EU) 2017/745 | AI-assisted diagnostic tools, AI-driven medical imaging, clinical decision support |
| In Vitro Diagnostic Medical Devices Regulation (EU) 2017/746 | AI-based lab diagnostics, pathology AI |
| Radio Equipment Directive 2014/53/EU | Connected devices with AI capabilities |
| Toy Safety Directive 2009/48/EC | AI-powered interactive toys |
| Civil Aviation Regulation (EU) 2018/1139 | Autonomous flight systems, air traffic management AI |
| Motor Vehicle Type-Approval (EU) 2019/2144 | Advanced driver assistance, autonomous driving systems |
| Marine Equipment Directive 2014/90/EU | Autonomous vessel navigation systems |
Classification challenge: “Safety component” interpretation
The term “safety component” is critical. An AI system integrated into a medical device is clearly a safety component. But what about an AI system that performs scheduling optimisation for a medical device manufacturer? The scheduling system is not a safety component of the medical device — it is a business tool used by the manufacturer. The classification depends on whether the AI system’s function directly affects the safety of the product.
Governance professional guidance: When assessing Pathway 1, focus on functional relationship to safety. Ask: “If this AI system malfunctions, could it directly cause the product to operate unsafely?” If yes, it is likely a safety component. If the AI system’s malfunction would only affect business efficiency, not product safety, Pathway 1 does not apply.
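As a purely illustrative sketch (not the Regulation's wording and not legal advice), the two cumulative Article 6(1) conditions and the functional safety-component test can be captured in a short screening function. The field names such as `covered_by_annex_i` and `malfunction_affects_product_safety` are hypothetical questionnaire labels, not terms from the Act.

```python
from dataclasses import dataclass


@dataclass
class Pathway1Screening:
    """Hypothetical screening answers for the Article 6(1) assessment."""
    covered_by_annex_i: bool                          # the product (or the AI system as a product) falls under Annex I legislation
    ai_is_the_product: bool                           # the AI system is itself the regulated product
    malfunction_affects_product_safety: bool          # functional proxy for "safety component of the product"
    requires_third_party_conformity_assessment: bool  # under the relevant Annex I legislation


def pathway_1_high_risk(s: Pathway1Screening) -> bool:
    """Both Article 6(1) conditions must be met simultaneously."""
    if not s.covered_by_annex_i:
        return False
    # Condition 1: the AI system is the product, or a safety component of it.
    condition_1 = s.ai_is_the_product or s.malfunction_affects_product_safety
    # Condition 2: the product must undergo third-party conformity assessment.
    condition_2 = s.requires_third_party_conformity_assessment
    return condition_1 and condition_2


# Example: predictive maintenance integrated into a ventilator's operation.
print(pathway_1_high_risk(Pathway1Screening(True, False, True, True)))  # True
```

A negative screening result here is not the end of the analysis: the system may still be high-risk through the Annex III pathway described next.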
Pathway 2: Annex III Standalone High-Risk (Article 6(2))
AI systems referred to in Annex III are considered high-risk. This pathway is independent of product safety — it captures AI systems that pose risks to fundamental rights, health, and safety through their use in sensitive domains, regardless of whether they are embedded in regulated products.
Annex III Deep Dive: Category-by-Category Analysis
Category 1: Biometrics
Scope: AI systems intended for:
- (a) Remote biometric identification, excluding biometric verification whose sole purpose is to confirm that a specific person is who they claim to be (real-time remote biometric identification in publicly accessible spaces by law enforcement is separately prohibited, with narrow exceptions, under Article 5)
- (b) Biometric categorisation according to sensitive or protected attributes
- (c) Emotion recognition
Edge cases and classification challenges:
Edge case 1: Employee badge access using facial recognition. A company deploys facial recognition for building access control. If the system identifies people by matching faces against a database of enrolled employees (one-to-many identification), it is a remote biometric identification system under Category 1(a) and is high-risk. If it merely verifies that the person presenting a badge is the badge holder (one-to-one verification as the sole purpose), it falls within the verification exclusion in Category 1(a) and is not high-risk under this category.
Edge case 2: Age estimation for content filtering. A social media platform uses AI to estimate whether a user is a minor based on facial analysis. Age estimation that does not identify specific individuals may be categorised as biometric categorisation under Category 1(b). The classification depends on whether the system processes biometric data and whether it categorises persons according to a sensitive or protected attribute. Age can be considered a protected attribute.
Edge case 3: Voice biometrics for customer authentication. A bank uses voiceprint analysis to authenticate customers on telephone calls. This is biometric verification: the system confirms a claimed identity against a stored voiceprint rather than identifying an unknown person. Annex III expressly excludes from Category 1(a) systems whose sole purpose is biometric verification, so the system is not high-risk under this category provided verification genuinely is its sole purpose.
Edge case 4: Sentiment analysis on text. An AI system analyses customer emails to detect satisfaction levels. This processes text, not biometric data, so it does not fall within Category 1. However, if the system analyses vocal tone, facial expressions, or other biometric signals to infer emotion, it would constitute emotion recognition under Category 1(c).
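The distinctions drawn in these edge cases (identification versus verification, categorisation, emotion recognition) lend themselves to a screening questionnaire. The sketch below is an illustration only; the enum values and return strings are hypothetical prompts for further analysis, not classifications.

```python
from enum import Enum
from typing import Optional


class BiometricPurpose(Enum):
    IDENTIFICATION = "identify a person against a reference database (one-to-many)"
    VERIFICATION = "confirm a claimed identity (one-to-one)"
    CATEGORISATION = "group persons by sensitive or protected attributes"
    EMOTION_RECOGNITION = "infer emotions from biometric signals"


def category_1_screen(processes_biometric_data: bool,
                      purpose: Optional[BiometricPurpose]) -> str:
    """Rough triage of the edge cases above; outputs are prompts for analysis, not conclusions."""
    if not processes_biometric_data or purpose is None:
        return "Outside Category 1 (no biometric data), e.g. text-only sentiment analysis"
    if purpose is BiometricPurpose.VERIFICATION:
        return "Likely outside Category 1(a): verification is excluded where it is the sole purpose"
    if purpose is BiometricPurpose.IDENTIFICATION:
        return "Candidate for Category 1(a): remote biometric identification (high-risk); check Article 5 prohibitions"
    if purpose is BiometricPurpose.CATEGORISATION:
        return "Candidate for Category 1(b): biometric categorisation (high-risk)"
    return "Candidate for Category 1(c): emotion recognition (high-risk)"
```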
Category 2: Critical Infrastructure
Scope: AI systems intended to be used as safety components in the management and operation of:
- (a) Critical digital infrastructure
- (b) Road traffic
- (c) Supply of water, gas, heating, and electricity
Edge cases and classification challenges:
Edge case 1: Data centre cooling optimisation. An AI system optimises cooling in a data centre that hosts critical financial infrastructure. If the data centre forms part of critical digital infrastructure (the AI Act takes its definition of critical infrastructure from the Critical Entities Resilience Directive (EU) 2022/2557), and the AI system is a safety component (failure could cause equipment damage or outage), it falls within Category 2(a).
Edge case 2: Fleet routing for utility vehicles. A utility company uses AI to optimise routing for maintenance crews. This is a logistics tool, not a safety component in the management of water, gas, heating, or electricity supply. It does not fall within Category 2(c). However, if the same AI system prioritises which equipment failures to address first and a deprioritised failure could endanger safety, the classification requires closer examination.
Edge case 3: Predictive maintenance for a power grid. An AI system predicts equipment failure in an electricity transmission network. If the system is a safety component — its predictions inform decisions about whether equipment is safe to continue operating — it falls within Category 2(c).
Category 3: Education and Vocational Training
Scope: AI systems intended to be used for:
- (a) Determining access or admission to educational and vocational training institutions
- (b) Evaluating learning outcomes, including steering the learning process
- (c) Determining the appropriate level of education for a person
- (d) Monitoring and detecting prohibited behaviour of students during tests
Edge cases and classification challenges:
Edge case 1: Corporate learning platforms. A corporate e-learning platform uses AI to recommend courses to employees. If the recommendations do not determine access to education, evaluate outcomes, or determine education levels — if they are purely suggestive — they likely do not fall within Category 3. But if the system determines which employees qualify for certification or advancement based on AI-assessed learning outcomes, it may fall within Category 3(b).
Edge case 2: AI tutoring systems. An AI tutor that adjusts content difficulty based on student performance is steering the learning process under Category 3(b). It is high-risk.
Edge case 3: Plagiarism detection. An AI system that detects plagiarism in student submissions is monitoring for prohibited behaviour under Category 3(d). It is high-risk.
Category 4: Employment, Workers Management, and Access to Self-Employment
Scope: AI systems intended to be used for:
- (a) Recruitment and selection (targeted advertisements, filtering, evaluating candidates)
- (b) Decisions on terms of work relationships, promotion, or termination, or task allocation based on individual behaviour or personal characteristics
- (c) Monitoring and evaluation of worker performance and behaviour
Edge cases and classification challenges:
Edge case 1: Automated scheduling based on skills. An AI system assigns shifts to workers based on their skills, availability, and performance metrics. Task allocation based on individual behaviour or personal characteristics falls within Category 4(b). This is high-risk.
Edge case 2: Meeting transcription and summarisation. An AI tool transcribes and summarises meetings attended by employees. If the tool simply produces transcripts, it is not monitoring or evaluating workers. But if the transcripts are analysed to assess employee participation, communication style, or performance indicators, the system shifts toward Category 4(c).
Edge case 3: AI-generated job descriptions. An AI system that generates job description text is not evaluating candidates or making employment decisions. It does not fall within Category 4.
Category 5: Access to Essential Services
Scope: AI systems intended to be used for:
- (a) Evaluating eligibility for public assistance benefits and services
- (b) Assessing creditworthiness of natural persons (with exception for fraud detection)
- (c) Risk assessment and pricing in life and health insurance
- (d) Evaluating and classifying emergency calls and dispatching/triaging
Edge cases and classification challenges:
Edge case 1: Fraud detection in insurance claims. The explicit fraud-detection carve-out sits within Annex III point 5(b): AI systems used for the purpose of detecting financial fraud are excluded from the creditworthiness category. The carve-out attaches to creditworthiness assessment rather than to Category 5 as a whole, so a claims fraud tool should be assessed on its own facts. And if a fraud detection system also influences the creditworthiness assessment of a natural person, the carve-out may not fully apply.
Edge case 2: Dynamic pricing for non-essential services. An AI system that personalises pricing for a streaming subscription is not covered — streaming is not an essential service and the system does not assess creditworthiness. But an AI system that personalises pricing for health insurance is squarely within Category 5(c).
Edge case 3: Chatbot triage in emergency services. An AI system that triages incoming emergency calls and determines priority or resource dispatch falls within Category 5(d). This is high-risk.
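For completeness, the creditworthiness point and its fraud-detection carve-out discussed in edge case 1 can be expressed as a minimal check. The function name and its parameters (`assesses_creditworthiness`, `used_only_to_detect_financial_fraud`) are hypothetical screening labels, and a positive result is a trigger for legal review rather than a classification.

```python
def category_5b_screen(assesses_creditworthiness: bool,
                       used_only_to_detect_financial_fraud: bool) -> str:
    """Minimal sketch of Annex III point 5(b) and its fraud-detection carve-out."""
    if not assesses_creditworthiness:
        return "Outside Category 5(b): no creditworthiness assessment of natural persons"
    if used_only_to_detect_financial_fraud:
        return "Carve-out: used for the purpose of detecting financial fraud, excluded from 5(b)"
    return "Candidate for Category 5(b): creditworthiness assessment (high-risk)"
```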
Categories 6-8: Law Enforcement, Migration, and Justice
These categories are primarily relevant to public authorities and their contractors. Governance professionals in private-sector organisations encounter them less frequently, but should be aware of them for two reasons:
- Supply chain exposure: If your organisation provides AI systems to law enforcement, migration authorities, or courts, your systems may be classified as high-risk through the customer’s use case, even if your organisation’s own use case would not trigger classification.
- Election influence: The final item in Annex III covers AI systems intended to influence the outcome of elections or referenda, or to influence voting behaviour. This is relevant to media companies, social platforms, and political communications firms.
The Article 6(3) Exception
Article 6(3) provides that an AI system listed in Annex III shall not be considered high-risk where it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons, including by not materially influencing the outcome of decision-making.
The exception applies when the AI system meets at least one of the following conditions (a minimal screening sketch follows these lists):
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns or deviations from prior patterns without replacing or influencing human assessment
- Performs a preparatory task to an assessment relevant for the Annex III use cases
The exception does NOT apply when:
- The AI system performs profiling of natural persons (as defined in GDPR Article 4(4))
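A minimal screening sketch of this logic, assuming simple yes/no answers to the conditions above, might look as follows. A `True` result indicates only that the exception is worth assessing and documenting, not that it is established.

```python
def article_6_3_exception_available(performs_profiling: bool,
                                    narrow_procedural_task: bool,
                                    improves_completed_human_activity: bool,
                                    detects_patterns_without_influencing_assessment: bool,
                                    preparatory_task_only: bool) -> bool:
    """Screening sketch of the Article 6(3) derogation.

    Profiling of natural persons always disqualifies the exception. Otherwise
    at least one of the four conditions must hold, and the provider must still
    document why the system does not pose a significant risk of harm,
    including by not materially influencing decision outcomes.
    """
    if performs_profiling:
        return False
    return any([
        narrow_procedural_task,
        improves_completed_human_activity,
        detects_patterns_without_influencing_assessment,
        preparatory_task_only,
    ])
```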
Governance professional guidance on the Article 6(3) exception:
This exception is valuable but must be used with caution and thorough documentation. Key principles:
- The burden of proof is on the provider. You must affirmatively demonstrate that the exception applies. If you cannot, the system defaults to high-risk.
- Document your reasoning. The exception assessment must be documented and maintained as part of your compliance records, and competent authorities can request it at any time. A minimal record structure is sketched after this list.
- “Materially influencing” is the key phrase. If the AI system’s output is routinely followed by human decision-makers without independent analysis, the system is materially influencing outcomes regardless of whether a human technically makes the final decision. Rubber-stamping is not human oversight.
- Profiling disqualifies. If the system profiles individuals, that is, it performs automated processing of personal data to evaluate personal aspects such as work performance, economic situation, health, personal preferences, interests, reliability, behaviour, location, or movements, the exception cannot apply.
- Review regularly. An exception assessment is valid only for the system’s current use. If the system’s role in the decision-making process changes, the exception assessment must be repeated.
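By way of illustration, an exception assessment record could be structured as a simple data class. The field names below are assumptions about what a competent authority might reasonably expect to see, not a prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ExceptionAssessmentRecord:
    """Illustrative Article 6(3) assessment record; not a prescribed template."""
    system_name: str
    annex_iii_category: str          # e.g. "4(c) - monitoring and evaluation of workers"
    condition_relied_on: str         # which of the four Article 6(3) conditions applies
    reasoning: str                   # why the output does not materially influence decisions
    performs_profiling: bool         # must be False for the exception to stand
    assessed_by: str
    assessment_date: date
    next_review_due: date            # re-assess when purpose, scope, or context changes
    supporting_evidence: list[str] = field(default_factory=list)
```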
Case Studies in Classification
Case Study 1: HR Analytics Platform
System description: A cloud-based HR analytics platform that aggregates employee data (performance reviews, project outcomes, attendance records, communication patterns) and produces dashboards showing team performance, attrition risk scores, and succession planning recommendations.
Classification analysis:
The system aggregates employee data and produces performance-related analytics. This involves monitoring and evaluation of worker performance (Category 4(c)). The attrition risk scores are assessments of individual workers based on their behaviour and personal characteristics.
The platform provider might argue that the dashboards are merely informational and that all decisions are made by human managers. However, if managers routinely rely on the system’s attrition risk scores to make retention decisions (bonuses, promotions, task assignments), the system is materially influencing employment decisions under Category 4(b).
Classification: High-risk under Annex III Category 4(b) and 4(c). The Article 6(3) exception is unlikely to apply because the system profiles individuals.
Case Study 2: AI-Powered Customer Service Chatbot
System description: A large language model-powered chatbot deployed on a retail company’s website that answers customer queries about products, processes returns, and escalates complex issues to human agents.
Classification analysis:
The chatbot does not fall into any Annex III category. It does not assess creditworthiness, determine access to essential services, perform biometric identification, or make employment decisions. Retail customer service is not a regulated domain under Annex III.
However, the chatbot does interact directly with natural persons and could be mistaken for a human agent. It triggers the Article 50(1) transparency obligation — the company must ensure that users know they are interacting with an AI system.
Classification: Limited risk — Article 50 transparency obligations apply. The chatbot must disclose its AI nature to users.
Case Study 3: Predictive Maintenance for Hospital Equipment
System description: An AI system monitors hospital medical equipment (MRI machines, ventilators, infusion pumps) and predicts when maintenance is needed to prevent equipment failure.
Classification analysis:
Medical devices are covered by the Medical Devices Regulation (EU) 2017/745, which is listed in Annex I, and devices of this kind (MRI machines, ventilators, infusion pumps) must undergo third-party conformity assessment under that Regulation, so the second Article 6(1) condition is met. The remaining question is whether the AI system is a safety component of the medical device, i.e. whether the predictive maintenance directly affects whether the device is safe to operate. If the AI system’s malfunction could lead to the device operating unsafely (e.g., failing to predict an imminent ventilator failure), it is likely a safety component.
However, if the AI system is a separate facility management tool, not integrated into the medical device itself, it may not be a safety component of the device. Annex III Category 2 would only become relevant if the system acted as a safety component in the management of critical digital infrastructure or one of the listed utilities, which a hospital facility-management tool ordinarily does not.
Classification: Likely high-risk under Article 6(1) if integrated into medical device operations. Requires detailed analysis of the functional relationship between the AI system and the medical device’s safety.
Case Study 4: AI-Generated Marketing Content
System description: A marketing team uses a generative AI tool to create product descriptions, social media posts, and advertising copy.
Classification analysis:
The system does not fall into any Annex III category. It does not make decisions about individuals’ access to services, employment, education, or other regulated domains. The content is reviewed by human marketers before publication.
The system does generate synthetic content (text). Under Article 50(2), the provider of the generative system must ensure outputs are marked in a machine-readable format and detectable as artificially generated; the marketing team, as a deployer, should confirm that capability with its tool provider. If the tool is used to produce images, audio, or video amounting to deep fakes, or text published to inform the public on matters of public interest, the deployer disclosure duties in Article 50(4) may also apply, subject to the exception for content that has undergone human review and editorial responsibility.
Classification: Limited risk. Article 50 transparency obligations are the main consideration; ordinary marketing copy that is reviewed and edited by humans before publication attracts few, if any, additional deployer obligations.
Case Study 5: Insurance Claim Processing
System description: An insurance company deploys an AI system that evaluates property damage claims by analysing photographs of damage, cross-referencing policy terms, and recommending settlement amounts to human adjusters.
Classification analysis:
If the system influences settlement decisions for life or health insurance, it falls within Category 5(c). For property insurance, the analysis is less clear — Category 5 specifically references “life and health insurance” for risk assessment and pricing.
However, if the system’s recommendations are routinely followed by adjusters (materially influencing outcomes), and if the insurance is an essential service (which property insurance may not be in all contexts), the classification requires careful analysis.
If the system is used only for detecting fraud, note that the explicit fraud-detection carve-out sits within Annex III point 5(b), which concerns creditworthiness assessment; a claims fraud tool that neither assesses creditworthiness nor performs life and health insurance risk assessment or pricing is unlikely to fall within Category 5 at all.
Classification: Depends on specifics. Life and health insurance claims processing is likely high-risk. Property insurance claims processing requires analysis of whether it falls within the scope of Category 5 at all and, if so, whether the Article 6(3) exception applies.
Strategic Implications of Classification
For governance professionals, classification decisions have strategic implications beyond compliance:
Resource allocation: Each high-risk system requires significant compliance investment (technical documentation, risk management, conformity assessment, ongoing monitoring). Accurate classification prevents over-investment in minimal-risk systems and under-investment in high-risk ones.
Product strategy: Classification may influence product design decisions. A feature that pushes a product from minimal risk to high risk adds compliance cost that must be factored into the business case.
Vendor management: If your organisation deploys third-party AI systems, classification determines the deployer obligations you must fulfil and the provider documentation you must obtain.
Board reporting: The number and nature of high-risk systems in the portfolio determines the organisation’s regulatory risk exposure. Accurate classification feeds directly into board-level risk reporting.
The classification decision tree is implemented in the COMPEL platform’s EU AI Act Compliance Accelerator, providing an interactive, guided classification experience with documented rationale output. Governance professionals should use this tool to conduct and document classifications, and should review all classifications at least annually or when system purpose, scope, or context changes.
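Purely as an illustration of the flow described in this article, and not a reproduction of the COMPEL tool's logic, the overall decision tree can be condensed into a few lines. The inputs (`pathway_1`, `annex_iii_category`, `exception_applies`, `triggers_article_50`) correspond to the screening questions discussed above and are assumed to have been answered through documented assessments.

```python
from typing import Optional


def classify(pathway_1: bool,
             annex_iii_category: Optional[str],
             exception_applies: bool,
             triggers_article_50: bool) -> str:
    """Simplified end-to-end flow. Article 5 prohibited-practice screening is
    assumed to have happened upstream; Article 50 obligations can also apply
    in parallel with a high-risk classification."""
    if pathway_1:
        return "High-risk (Article 6(1): Annex I product-safety pathway)"
    if annex_iii_category and not exception_applies:
        return f"High-risk (Article 6(2): Annex III point {annex_iii_category})"
    if triggers_article_50:
        return "Not high-risk, but Article 50 transparency obligations apply"
    return "Minimal risk under this sketch: document the assessment and monitor for changes"


# Example: the HR analytics platform from Case Study 1.
print(classify(False, "4(b)/4(c)", False, False))  # High-risk (Article 6(2): ...)
```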