AITP M2.6-Art11 · v1.0 · Reviewed 2026-04-06 · Open Access
M2.6 Industry Applications and Case Study Analysis

EU AI Act Compliance for Practitioners


14 min read · Article 11 of 20

This is a practitioner-level article. It assumes familiarity with the EU AI Act risk categories (covered in Module 1.5, Articles 13-14) and focuses on how to implement compliance rather than what the regulation requires. The emphasis throughout is on practical techniques, common implementation patterns, and evidence-based compliance.

COMPEL-to-EU AI Act Requirements Mapping

The most powerful insight for practitioners is that the COMPEL framework already provides the governance architecture that the EU AI Act requires. The following mapping shows how each COMPEL stage naturally produces the evidence and controls that the regulation demands.

Calibrate Stage → Classification and Baseline

The Calibrate stage in COMPEL is about establishing the current state. For EU AI Act compliance, this translates directly into:

AI System Inventory (Supporting Article 49 Registration)

Every compliance journey begins with knowing what you have. The Calibrate stage’s baseline assessment naturally produces the AI system inventory required for EU database registration. Key practitioner activities:

  • Conduct department-by-department AI system discovery using structured questionnaires
  • Catalogue each system’s purpose, affected population, deployment geography, and data flows
  • Identify the organisation’s role (provider, deployer, importer) for each system
  • Document third-party AI components, including GPAI models used as building blocks
  • Flag shadow AI — systems deployed without formal governance approval

The inventory should be structured to capture the metadata required for EU database registration under Article 49: system name, provider details, intended purpose, risk classification, conformity status, and deployment Member States.
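One way to make this concrete is a minimal inventory record. This is a sketch only: the field names below are illustrative, chosen to mirror the Article 49 registration metadata listed above, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory.

    Field names are illustrative, chosen to mirror the Article 49
    registration metadata; they are not a statutory schema.
    """
    name: str
    provider: str
    intended_purpose: str
    role: str                          # "provider", "deployer", or "importer"
    risk_classification: str           # e.g. "high-risk", "minimal-risk"
    conformity_status: str             # e.g. "pending", "assessed"
    member_states: list = field(default_factory=list)
    third_party_components: list = field(default_factory=list)  # incl. GPAI models
    shadow_ai: bool = False            # deployed without governance approval?
```

A record like this can be populated directly from the departmental questionnaire responses, so the discovery exercise and the registration metadata stay in sync.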

Risk Classification (Articles 5-6)

The Calibrate stage’s risk assessment maps directly to the EU AI Act’s classification framework. Practitioners should apply the classification decision tree systematically:

  1. Screen all systems against Article 5 prohibited practices
  2. Apply Article 6(1) — is the system a product or safety component under Annex I legislation?
  3. Apply Article 6(2) — does the system fall into any Annex III category?
  4. Assess Article 6(3) — for Annex III systems, does the narrow exception apply?
  5. Screen remaining systems for Article 50 transparency obligations
  6. Document classification rationale with specific Article and Annex references
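As a sketch, the six steps above can be expressed as a single function. The dictionary keys are hypothetical screening flags produced by the earlier analysis, not statutory terms, and every returned result still needs the documented rationale described in step 6.

```python
def classify(system: dict) -> str:
    """Illustrative classification decision tree; keys are hypothetical
    screening flags set during the earlier per-system analysis."""
    if system.get("prohibited_practice"):        # 1. Article 5 screen
        return "prohibited"
    if system.get("annex_i_safety_component"):   # 2. Article 6(1)
        return "high-risk"
    if system.get("annex_iii_category"):         # 3. Article 6(2)
        if system.get("art_6_3_exception"):      # 4. narrow exception
            return "not high-risk (Art. 6(3) exception)"
        return "high-risk"
    if system.get("transparency_obligation"):    # 5. Article 50 screen
        return "limited-risk (transparency)"
    return "minimal-risk"
```

For example, `classify({"annex_iii_category": True})` yields `"high-risk"`, while an empty screening result falls through to `"minimal-risk"`.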

Each classification decision must be documented with sufficient detail to withstand regulatory scrutiny. The practitioner’s role is not merely to classify, but to produce a classification rationale document that explains why each system received its classification, what evidence was considered, and what assumptions were made.

Gap Analysis (Against Articles 8-15)

For each high-risk system, the Calibrate stage must produce a gap analysis comparing current governance practices against each applicable requirement:

Requirement | Article | Gap Assessment Questions
Risk management | Art. 9 | Is there a documented, continuous, iterative risk management system? Does it cover intended use and reasonably foreseeable misuse?
Data governance | Art. 10 | Are training, validation, and testing data sets subject to documented governance? Is bias assessed?
Technical documentation | Art. 11, Annex IV | Does documentation exist covering all Annex IV elements? Is it up to date?
Record-keeping | Art. 12 | Does the system automatically log events during operation? Are logs retained appropriately?
Transparency | Art. 13 | Are instructions for use available to deployers? Do they cover all required information elements?
Human oversight | Art. 14 | Are human oversight mechanisms implemented? Can overseers interpret outputs, override decisions, and interrupt operation?
Accuracy and robustness | Art. 15 | Are accuracy metrics declared? Is robustness tested? Are cybersecurity measures in place?

The gap analysis output is a prioritised remediation plan that feeds into the Organize and Model stages.
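A sketch of that hand-off: hypothetical gap records prioritised by deadline and severity. The severity scale and the example gaps are assumptions for illustration only.

```python
# Hypothetical gap records from the Articles 8-15 assessment; severity
# uses an illustrative 1 (minor) to 3 (critical) scale.
gaps = [
    {"article": "Art. 13", "gap": "instructions for use incomplete", "severity": 1, "due": "2026-11-01"},
    {"article": "Art. 9",  "gap": "no iterative risk process",       "severity": 3, "due": "2026-08-01"},
    {"article": "Art. 12", "gap": "operational logs not retained",   "severity": 2, "due": "2026-08-01"},
]

# Earliest deadline first; within a deadline, highest severity first.
remediation_plan = sorted(gaps, key=lambda g: (g["due"], -g["severity"]))
```

Sorting by ISO-format date strings works because they compare lexicographically in chronological order; the resulting order is the work queue handed to the Organize and Model stages.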

Organize Stage → Governance Structures and Roles

The Organize stage builds the human and organisational infrastructure for compliance:

Governance Committee Establishment

The EU AI Act does not prescribe a specific governance structure, but the obligations it imposes require cross-functional coordination that a governance committee naturally provides. The committee should include:

  • Executive sponsor with decision-making authority
  • Legal counsel with EU AI Act expertise
  • Data protection officer (where the organisation has one)
  • AI/ML engineering representation
  • Business unit representatives for high-risk system areas
  • Risk management and compliance representation

Role Assignment per Article 16 and 26

For each AI system, clearly assign who holds provider obligations (Article 16) and who holds deployer obligations (Article 26). In many organisations, the same entity is both provider and deployer for internally developed systems. The practitioner must ensure that each obligation has a named individual responsible for its fulfilment.

Training Programme Design

Article 4 requires that providers and deployers ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. The training programme should be tiered:

  • All staff interacting with AI: Basic AI literacy covering what AI is, how it works, its limitations, and the organisation’s AI policies
  • AI system owners and operators: Role-specific training on their compliance obligations
  • Human overseers: Detailed training on the specific system they oversee, including how to interpret outputs, when to override, and when to escalate
  • Senior leadership: Strategic awareness of regulatory exposure, fiduciary implications, and governance responsibilities

Model Stage → Compliance Architecture Design

The Model stage designs the target compliance state:

Documentation Architecture

Design a documentation framework that produces all required documentation as a natural output of the development process rather than as a separate compliance exercise:

  • Technical documentation template aligned with Annex IV elements
  • Instructions for use template aligned with Article 13(3) information requirements
  • Risk management plan template aligned with Article 9
  • Data governance documentation template aligned with Article 10
  • Human oversight specification template aligned with Article 14

The goal is to integrate documentation production into the development lifecycle so that compliance documentation is always current and always reflects the actual state of the system.

Conformity Assessment Pathway Design

For each high-risk system, determine whether internal conformity assessment (Annex VI) or notified body assessment (Annex VII) applies. For internal assessment:

  • Ensure the organisation has personnel with appropriate competence and independence from the development team
  • Design the internal assessment protocol covering all applicable requirements
  • Define the assessment evidence requirements and acceptance criteria

For notified body assessment:

  • Research and pre-engage notified bodies early — capacity is limited
  • Prepare the QMS documentation package for external audit
  • Budget for notified body fees and timeline

Quality Management System Design

Article 17 requires a quality management system covering:

  • Strategy for regulatory compliance
  • Design, development, and testing techniques and procedures
  • Data management and data governance practices
  • Risk management processes
  • Post-market monitoring
  • Incident reporting and communication
  • Record-keeping and accountability

Where the organisation already has QMS certification (ISO 9001, ISO 13485, etc.), the practitioner should design an extension to cover AI-specific requirements rather than building a parallel system.

Produce Stage → Implementation

The Produce stage is where compliance moves from design to reality:

Technical Documentation Production

For each high-risk system, produce the Annex IV technical documentation package. The documentation must cover:

  1. A general description of the AI system (purpose, developer, version, how it interacts with hardware or software that is not part of the system itself)
  2. A detailed description of the elements of the AI system and of the process for its development (design specifications, system architecture, algorithms, data requirements, training methodology, evaluation techniques)
  3. Detailed information about the monitoring, functioning, and control of the system
  4. A description of the appropriateness of the performance metrics
  5. A detailed description of the risk management system
  6. A description of relevant changes made to the system through its lifecycle

Risk Management Implementation

Implement the continuous, iterative risk management system required by Article 9:

  • Identify and analyse known and reasonably foreseeable risks
  • Estimate and evaluate risks during intended use and foreseeable misuse
  • Evaluate risks arising from post-market monitoring data
  • Adopt appropriate risk management measures and test them for effectiveness
  • Give due consideration to the specific risks to children, persons with disabilities, and other vulnerable groups
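A minimal risk-register entry for this process might look as follows. The field names and the 1-5 scoring scale are assumptions for illustration, not an Article 9 requirement.

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """Illustrative Article 9 risk-register entry; scales are assumptions."""
    description: str
    source: str              # "intended use", "foreseeable misuse", "post-market"
    likelihood: int          # 1 (rare) .. 5 (frequent)
    severity: int            # 1 (negligible) .. 5 (critical)
    measures: list = field(default_factory=list)
    residual_acceptable: bool = False   # set only after measures are tested

    def score(self) -> int:
        # Simple likelihood x severity product for prioritisation
        return self.likelihood * self.severity
```

Keeping `residual_acceptable` false until measures are tested mirrors the requirement to verify risk management measures for effectiveness rather than merely adopt them.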

Human Oversight Implementation

Implement the human oversight measures designed in the Model stage:

  • Build output interpretation aids that enable overseers to understand the system’s recommendations
  • Implement override and intervention mechanisms
  • Build stop/interrupt functionality accessible to authorised overseers
  • Implement monitoring alerts for anomalies, bias drift, and performance degradation
  • Document overseer qualifications, training requirements, and operational procedures
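The override and stop mechanisms in that list can be sketched as a thin wrapper around the model. The class and method names are hypothetical, not a prescribed design.

```python
class OversightGate:
    """Illustrative Article 14-style wrapper: exposes the raw score for
    interpretation, supports override, and can be interrupted."""

    def __init__(self, model):
        self.model = model       # any callable returning a score in [0, 1]
        self.stopped = False

    def recommend(self, case):
        if self.stopped:
            raise RuntimeError("system interrupted by an authorised overseer")
        score = self.model(case)
        # Surface the raw score so the overseer can interpret the output
        return {"recommendation": "approve" if score > 0.5 else "refer",
                "score": score}

    def override(self, decision, overseer_id, reason):
        # Record who overrode the system and why (oversight evidence trail)
        return {"decision": decision, "overridden_by": overseer_id,
                "reason": reason}

    def stop(self):
        # Stop/interrupt functionality accessible to authorised overseers
        self.stopped = True
```

The override record doubles as evidence for the compliance evidence chain discussed later in this article.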

Evaluate Stage → Validation and Monitoring

The Evaluate stage validates that compliance measures are effective:

Mock Regulatory Inspection

Conduct a simulated regulatory inspection before the compliance deadline. This exercise should:

  • Test whether all required documentation can be located and presented within a reasonable timeframe
  • Verify that documentation is accurate, complete, and consistent
  • Test incident reporting procedures end-to-end
  • Verify that human oversight mechanisms function as documented
  • Assess staff awareness and competence through interviews

Continuous Monitoring

Implement post-market monitoring (Article 72) as an extension of the Evaluate stage:

  • Monitor system performance against declared accuracy metrics
  • Track and analyse user feedback and incident reports
  • Monitor for data drift, concept drift, and model degradation
  • Conduct periodic risk reassessment incorporating operational data
  • Report monitoring outcomes to the governance committee
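The first monitoring bullet can be sketched as a simple comparison against the declared metric. The 2% tolerance is an illustrative threshold, not a regulatory value; real thresholds should come from the system's declared accuracy documentation.

```python
def check_accuracy_drift(declared: float, observed: float, tolerance: float = 0.02) -> dict:
    """Alert when observed accuracy falls below the declared accuracy
    by more than an illustrative tolerance."""
    drift = declared - observed
    return {"drift": round(drift, 4), "alert": drift > tolerance}
```

For example, with a declared accuracy of 0.92 and an observed window accuracy of 0.88, the drift exceeds the tolerance and the check raises an alert for the governance committee.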

Learn Stage → Continuous Improvement

The Learn stage captures knowledge and drives improvement:

Post-Market Monitoring Integration

The post-market monitoring plan (Article 72) is the regulatory analogue of the Learn stage’s continuous improvement process. Data collected through monitoring feeds back into risk management, documentation updates, and system improvements.

Regulatory Horizon Scanning

The EU AI Act will evolve through delegated acts, implementing acts, codes of practice, and harmonised standards. The Learn stage should include systematic monitoring of:

  • Delegated acts updating Annex III high-risk categories
  • Harmonised standards and common specifications adopted by the Commission
  • AI Office guidance and codes of practice for GPAI models
  • National competent authority enforcement decisions and guidance

Common Implementation Patterns

Pattern 1: The Compliance Sprint

For organisations with an approaching deadline and limited existing governance, the compliance sprint compresses the COMPEL cycle into an intensive programme:

  • Weeks 1-2: Calibrate (inventory, classify, gap analysis)
  • Weeks 3-4: Organize and Model (governance structures, documentation architecture)
  • Weeks 5-8: Produce (documentation, risk management, human oversight)
  • Weeks 9-10: Evaluate (mock inspection, remediation)
  • Ongoing: Learn (post-market monitoring, continuous improvement)

This pattern is detailed in M3.4, 100-Day EU AI Act Readiness Using COMPEL (Module 3.4, Article 17).

Pattern 2: The Integration Approach

For organisations with an existing governance framework (ISO 42001, NIST AI RMF, internal AI governance policies), the integration approach maps existing controls to EU AI Act requirements and fills gaps:

  • Conduct a mapping exercise: which existing controls satisfy which EU AI Act requirements?
  • Identify gaps where EU AI Act requirements exceed existing controls
  • Prioritise gap remediation by deadline and risk
  • Extend existing documentation to cover EU AI Act specific elements (Annex IV, Article 13)
  • Leverage existing audit and monitoring infrastructure for post-market monitoring

Pattern 3: The Build-In Approach

For organisations developing new AI systems, the build-in approach integrates EU AI Act compliance into the development lifecycle from the outset:

  • Include classification assessment in the project initiation phase
  • Produce technical documentation as a living artefact updated throughout development
  • Implement risk management as a parallel workstream from day one
  • Design human oversight mechanisms as system requirements, not afterthoughts
  • Build logging and record-keeping capabilities into the system architecture

This is the most cost-effective approach and is the recommended pattern for all new AI system development.
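The logging bullet above can be sketched as a structured event log emitted from inside the system. The event schema here is an assumption, not a statutory Article 12 format; what matters is that each event is timestamped and tied to a system version.

```python
import datetime
import json
import logging

logger = logging.getLogger("ai_system.events")

def log_event(system_id: str, version: str, event_type: str, detail: dict) -> dict:
    """Emit one structured operational event (illustrative schema)."""
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system": system_id,
        "version": version,     # ties the log line to the system version
        "event": event_type,
        "detail": detail,
    }
    logger.info(json.dumps(record))
    return record
```

Because the record is structured JSON, the same events can later feed post-market monitoring and the evidence chain without a separate export step.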

Building the Compliance Evidence Chain

The EU AI Act is evidence-based regulation. Every compliance claim must be supported by documentary evidence that can be produced on request. The practitioner’s role is to ensure that the organisation’s governance activities produce a complete evidence chain:

Classification Decision
  └─ Supported by: Classification rationale document, Annex III analysis
    └─ Supported by: AI system inventory, purpose documentation
      └─ Supported by: Departmental survey responses, system specifications
Conformity Declaration
  └─ Supported by: Conformity assessment report
    └─ Supported by: Technical documentation (Annex IV)
      └─ Supported by: Risk management evidence, data governance evidence,
         accuracy testing evidence, human oversight evidence
        └─ Supported by: Test reports, audit logs, training records,
           monitoring data, incident records

Every piece of evidence must be:

  • Dated: When was it created or last updated?
  • Attributed: Who created it, reviewed it, and approved it?
  • Versioned: What version of the system does it apply to?
  • Accessible: Can it be located and presented within a reasonable timeframe?
  • Complete: Does it cover all the required elements without gaps?
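Those five attributes translate directly into a record structure. A minimal sketch, with illustrative field names:

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class EvidenceItem:
    """Illustrative evidence-chain entry carrying the five attributes above."""
    title: str
    created: datetime.date       # dated
    author: str                  # attributed: creator
    approver: str                # attributed: approver
    system_version: str          # versioned
    location: str                # accessible: where it can be retrieved
    covers: tuple                # complete: required elements it addresses

def is_complete(item: EvidenceItem, required: list) -> bool:
    """Check that an item covers every required element (subset test)."""
    return set(required) <= set(item.covers)
```

Making the record frozen (immutable) reflects the audit expectation that evidence is superseded by new versions rather than edited in place.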

The detailed methodology for building and maintaining evidence portfolios is covered in M2.6, Building EU AI Act Evidence Portfolios (Module 2.6, Article 12).

Practitioner Checklist

Use this checklist to track your organisation’s compliance progress:

Calibrate

  • AI system inventory completed (including shadow AI and third-party systems)
  • All systems screened against Article 5 prohibited practices
  • Risk classification applied to all inventoried systems with documented rationale
  • Gap analysis completed for all high-risk systems against Articles 8-15
  • GPAI model usage identified and obligations assessed

Organize

  • AI governance committee established or extended
  • Compliance programme sponsor appointed at executive level
  • Provider/deployer role assignment completed for all systems
  • AI literacy training programme designed and initiated
  • Budget and resource allocation approved

Model

  • Documentation architecture designed and templates created
  • Conformity assessment pathway determined for each high-risk system
  • QMS enhancement plan approved
  • Human oversight mechanism specifications completed

Produce

  • Technical documentation completed for all high-risk systems
  • Risk management system implemented and documented
  • Human oversight mechanisms operational
  • Instructions for use produced for all high-risk systems
  • GPAI obligations fulfilled (documentation published)

Evaluate

  • Internal or notified body conformity assessment completed
  • Mock regulatory inspection conducted
  • Post-market monitoring system operational
  • All critical findings addressed and verified

Learn

  • Post-market monitoring data being collected and analysed
  • Regulatory horizon scanning process established
  • Lessons learned documented and integrated
  • Compliance programme transitioned to standing operations

This checklist represents the minimum compliance activities. The complexity and depth of each activity will vary based on the number and risk level of AI systems in the portfolio, the organisation’s existing governance maturity, and the specific requirements applicable to each system.

Moving to Advanced Implementation

Practitioners who have completed this article have the operational knowledge to drive EU AI Act compliance within their organisations using the COMPEL framework. The more advanced topics — detailed classification analysis, conformity assessment procedures, GPAI systemic risk management, penalty exposure analysis, and board-level governance — are covered in the Governance Professional (Level 3) and Leader (Level 4) articles in this series.

The EU AI Act is the most significant regulatory development in AI governance to date. Practitioners who can bridge the gap between regulatory requirements and operational implementation will be essential to their organisations’ compliance success. The COMPEL framework provides the structure; this article provides the map. The work of compliance is the work of governance — disciplined, evidence-based, and continuous.