AITF M1.27-Art01 v1.0 Reviewed 2026-04-06 Open Access
AITF · Foundations

AI Conformity Assessment under the EU AI Act

AI Conformity Assessment under the EU AI Act — AI Use Case Management — Foundation depth — COMPEL Body of Knowledge.

7 min read Article 1 of 4

This article describes the procedural pathway, the evidence package required, the role of harmonised standards and notified bodies, and the practical implications for organisations preparing for first conformity submission.

The Scope of Conformity Assessment

Article 43 of the EU AI Act, at https://artificialintelligenceact.eu/article/43/, defines the conformity assessment procedures applicable to high-risk AI systems. Two procedures dominate.

The first is the internal control procedure under Annex VI, in which the provider self-assesses against the Act’s requirements without mandatory external review. This procedure applies to most high-risk AI systems listed in Annex III (employment, education, law enforcement, and so on); for the biometric systems in Annex III point 1, it is available only where the provider applies harmonised standards or common specifications in full.

The second is the notified body assessment procedure under Annex VII, which involves external review by an EU-designated notified body. This procedure is required where the provider of an Annex III point 1 (biometric) system does not apply harmonised standards, or applies them only in part. High-risk AI systems that are safety components of products regulated under other Union harmonisation legislation (machinery, medical devices, toys, and so on) instead follow the conformity assessment procedures of that sectoral legislation, with the Act’s requirements assessed as part of them.

The choice of procedure has substantial cost and timing implications. Notified body assessment can take six to twelve months and incur six-figure fees. Internal control is faster and cheaper but demands thorough evidence preparation.

Articles 9 through 15 of the Act, available collectively at https://artificialintelligenceact.eu/, set out the substantive requirements that conformity assessment must address. Each article deserves dedicated preparation.

The Evidence Package

A defensible conformity assessment relies on a structured evidence package that maps Act requirements to internal artefacts. The package typically includes:

Risk Management System (Article 9)

Evidence of an established, documented, maintained risk management process across the AI system lifecycle. The process must identify and analyse foreseeable risks, estimate and evaluate risks emerging in intended use and reasonably foreseeable misuse, and adopt risk management measures.

Practically, this includes the risk register, the heat map (Module 1.21), the risk treatment plan, the residual risk analysis, and the post-market monitoring plan. The Carnegie Mellon Software Engineering Institute Risk Management Process at https://insights.sei.cmu.edu/library/risk-management-process/ provides reusable structure.
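The Act does not prescribe a register format, so the following is a minimal sketch under assumptions of our own: the field names, the 1-to-5 scoring scale, and the example entry are illustrative, not Act terminology. The point it demonstrates is that each risk carries both an inherent score and a residual score after treatment, which is the structure a residual risk analysis needs.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a hypothetical Article 9 risk register (illustrative fields)."""
    risk_id: str
    description: str          # foreseeable risk in intended use or foreseeable misuse
    likelihood: int           # 1 (rare) .. 5 (almost certain) -- assumed scale
    severity: int             # 1 (negligible) .. 5 (critical) -- assumed scale
    treatment: str            # adopted risk management measure
    residual_likelihood: int  # re-scored after treatment
    residual_severity: int

    @property
    def inherent_score(self) -> int:
        return self.likelihood * self.severity

    @property
    def residual_score(self) -> int:
        return self.residual_likelihood * self.residual_severity

entry = RiskEntry(
    risk_id="R-012",
    description="Screening model under-ranks candidates from an under-represented group",
    likelihood=4, severity=5,
    treatment="Training-data reweighting plus mandatory human review of rejections",
    residual_likelihood=2, residual_severity=4,
)
assert entry.residual_score < entry.inherent_score  # treatment must reduce the risk
```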

Data Governance (Article 10)

Evidence that training, validation, and testing datasets meet quality criteria appropriate to their use. This includes data preparation processes, data examination for biases that may affect health, safety, or fundamental rights, and the identification of any data gaps or shortcomings.

The datasheets-for-datasets practice (Module 1.23) is the natural home for this evidence.
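As a minimal stub, under assumed field names (neither the Act nor the datasheet practice prescribes a schema), the Article 10 themes above might be captured as follows; the dataset name, counts, and findings are invented for illustration.

```python
# Hypothetical datasheet stub aligned loosely with Article 10 themes.
# Keys and values are illustrative assumptions, not a prescribed schema.
datasheet = {
    "dataset": "recruitment-screening-train-v3",  # hypothetical name
    "splits": {"train": 480_000, "validation": 60_000, "test": 60_000},
    "preparation": ["deduplication", "PII removal", "label adjudication"],
    "bias_examination": {
        "attributes_checked": ["age", "gender", "nationality"],
        "method": "per-subgroup selection-rate and error-rate comparison",
        "findings": "8% selection-rate gap for one subgroup; mitigated by reweighting",
    },
    "known_gaps": ["sparse coverage of applicants over 60"],
}
```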

Technical Documentation (Article 11)

A comprehensive technical file aligned with Annex IV. The file must enable national competent authorities and notified bodies to assess compliance. Required sections include system description, design specifications, system architecture, training data composition, validation and testing procedures, performance metrics, human oversight measures, and post-market monitoring.
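One practical way to keep the technical file reviewable is a machine-readable manifest tying each required section to the internal artefact that satisfies it. The sketch below is an assumption of ours, not an Annex IV schema: the section names paraphrase the list above, and the file paths are hypothetical.

```python
from pathlib import Path

# Hypothetical manifest: Annex IV themes (as summarised above) -> internal artefacts.
technical_file = {
    "system_description":        "docs/annex_iv/system_description.md",
    "design_specifications":     "docs/annex_iv/design_specs.md",
    "system_architecture":       "docs/annex_iv/architecture.md",
    "training_data_composition": "docs/annex_iv/training_data.md",
    "validation_and_testing":    "reports/acceptance_tests.pdf",
    "performance_metrics":       "reports/metrics_by_subgroup.csv",
    "human_oversight_measures":  "docs/annex_iv/oversight.md",
    "post_market_monitoring":    "docs/annex_iv/pmm_plan.md",
}

# Completeness check a reviewer can run before any submission.
missing = [name for name, path in technical_file.items() if not Path(path).exists()]
if missing:
    print("Technical file incomplete:", ", ".join(missing))
```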

Record-Keeping (Article 12)

Evidence that the system automatically records events relevant to identification of risks and operational monitoring. This is the audit trail discussed in Module 1.21.
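A minimal sketch of such automatic recording, assuming a JSON-lines event log and invented event fields (the Act requires logging capability but does not prescribe this format):

```python
import json
import logging
import time
import uuid

# Append-only structured event log; one JSON object per line.
log = logging.getLogger("ai_audit")
handler = logging.FileHandler("audit_trail.jsonl")
handler.setFormatter(logging.Formatter("%(message)s"))
log.addHandler(handler)
log.setLevel(logging.INFO)

def record_event(event_type: str, **details) -> None:
    """Append one audit event; field names are illustrative assumptions."""
    log.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "event_type": event_type,  # e.g. "inference", "override", "alert"
        **details,
    }))

record_event("inference", model_version="v3.1", input_ref="req-8841", score=0.72)
record_event("override", actor="reviewer-17", reason="score near decision threshold")
```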

Transparency to Deployers (Article 13)

Evidence that the system is accompanied by instructions for use that enable deployers to interpret the output and use it appropriately. The instructions must include the system’s intended purpose, accuracy and robustness levels, foreseeable risks, performance characteristics across relevant subgroups, computational resources required, and human oversight measures.

The model card (Module 1.23) is the foundational artefact, expanded with deployer-specific instructions.

Human Oversight (Article 14)

Evidence of designed-in human oversight measures. This goes beyond the assertion that humans can review outputs; it requires designed mechanisms that enable the persons assigned to oversight to understand outputs, decide whether to use them, override them, and stop or reverse the system.
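To make "designed-in" concrete, here is a minimal sketch of an oversight gate, under assumptions of our own (the function, record structure, and routing are illustrative, not an Act-mandated mechanism): use, override, and stop are explicit, recorded options rather than informal possibilities.

```python
from enum import Enum

class Decision(Enum):
    USE = "use"            # act on the system's output
    OVERRIDE = "override"  # substitute a human judgement
    STOP = "stop"          # halt the system pending investigation

def oversight_gate(output: dict, decision: Decision, rationale: str) -> dict:
    """Require an explicit, recorded human decision before an output takes effect.

    Illustrative sketch: real systems would route the record to the audit trail
    (Article 12) and wire STOP to an actual kill switch.
    """
    record = {"output": output, "decision": decision.value, "rationale": rationale}
    if decision is Decision.STOP:
        record["action"] = "system halted; no further outputs processed"
    elif decision is Decision.OVERRIDE:
        record["action"] = "human decision substituted for system output"
    else:
        record["action"] = "output released for use"
    return record

record = oversight_gate({"score": 0.71}, Decision.OVERRIDE, "score close to threshold")
```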

Accuracy, Robustness, Cybersecurity (Article 15)

Evidence of accuracy levels, robustness against errors and inconsistencies, and resilience against attempts to alter behaviour through malicious input. This connects to the acceptance testing of Module 1.25, with specific cybersecurity considerations including adversarial robustness.
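A sketch of how such evidence might be generated in an acceptance-test suite, under assumptions: `model`, `X_test`, `y_test`, and the thresholds are placeholders set per system and use case, and real Article 15 evidence would cover adversarial inputs, not just random noise.

```python
import numpy as np

def accuracy(model, X, y) -> float:
    """Fraction of correct predictions on a labelled test set."""
    return float(np.mean(model.predict(X) == y))

def robustness_check(model, X, y, noise_scale: float = 0.01, trials: int = 5) -> float:
    """Worst-case accuracy under small random input perturbations (illustrative)."""
    rng = np.random.default_rng(0)
    scores = []
    for _ in range(trials):
        X_perturbed = X + rng.normal(0.0, noise_scale, size=X.shape)
        scores.append(accuracy(model, X_perturbed, y))
    return min(scores)

# Declared thresholds (assumed values for the sketch):
# assert accuracy(model, X_test, y_test) >= 0.90
# assert robustness_check(model, X_test, y_test) >= 0.85
```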

The Role of Harmonised Standards

The Act envisions harmonised European standards that, when followed, create a presumption of conformity. The European Committee for Standardization (CEN) and the European Committee for Electrotechnical Standardization (CENELEC) joint technical committee CEN-CLC/JTC 21 has been developing these standards under European Commission mandate. Drafts and adopted standards are tracked at https://standards.cencenelec.eu/.

Several standards relevant to AI conformity have been adopted or are at an advanced stage of development.

Adopting these standards systematically reduces the conformity assessment burden: the standards provide structure, and the conformity argument can reference compliance with them rather than arguing each requirement from first principles.

Notified Bodies

Notified bodies are conformity assessment bodies designated by EU Member States and notified to the European Commission. The list is published at https://ec.europa.eu/growth/tools-databases/nando/. As of the early operational phase of the AI Act, the population of notified bodies designated specifically for AI conformity is small, creating capacity constraints that organisations should anticipate in their planning.

When notified body involvement is required, the engagement typically follows a pattern: scope agreement, document submission, on-site assessment, response to findings, decision, and ongoing surveillance. The full cycle can take six to twelve months, and budgeting accordingly is essential.

Post-Conformity Obligations

Conformity assessment is not a one-time event. Several obligations continue throughout the system’s life.

Substantial modification triggers re-assessment. Material changes to system function or risk profile require renewed conformity assessment. The Act provides limited guidance on what constitutes substantial modification; conservative interpretation is recommended.

Post-market monitoring. Article 72 requires providers to monitor system performance after market placement and to take corrective action when issues emerge.
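A minimal sketch of one post-market monitoring building block, under assumptions of our own (the window size, threshold, and class design are illustrative; a real plan covers far more than rolling accuracy):

```python
from collections import deque

class PerformanceMonitor:
    """Rolling accuracy monitor; window and alert threshold are assumed values."""

    def __init__(self, window: int = 500, min_accuracy: float = 0.88):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.min_accuracy = min_accuracy

    def observe(self, correct: bool) -> bool:
        """Record one labelled production outcome; True means corrective action is due."""
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough data yet
        return sum(self.outcomes) / len(self.outcomes) < self.min_accuracy

monitor = PerformanceMonitor()
# alert = monitor.observe(correct=True)  # called once per labelled outcome
```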

Serious incident reporting. Article 73 requires providers to report serious incidents to relevant national authorities within defined windows.

Database registration. Article 49 requires providers to register high-risk AI systems in the public EU database established under Article 71.

The European Data Protection Supervisor at https://www.edps.europa.eu/ has issued opinions interpreting how the AI Act interfaces with the General Data Protection Regulation; conformity preparation must address both regimes coherently.

Operational Preparation

Organisations preparing for conformity assessment for the first time should plan for an 18-to-24-month preparation cycle for the first system. Subsequent systems can be much faster as the organisational machinery matures.

The major preparation workstreams are:

  • Inventory and classification: identifying which systems fall under the Act’s scope and into which risk tier.
  • Gap analysis: comparing current evidence against the Act’s requirements and harmonised standards (a minimal sketch follows this list).
  • Remediation: building the missing evidence (often the largest workstream, particularly around testing and documentation).
  • Internal review: a dry-run conformity review by an independent internal team.
  • External review (where applicable): engagement of the notified body and response to findings.
  • Submission and surveillance setup: filing, registration, and post-market monitoring activation.
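At its core, the gap-analysis workstream is a set difference between required and existing evidence. The artefact names below are illustrative assumptions, simplified from the evidence package described earlier; a real inventory would be far longer and cross-mapped to standards clauses.

```python
# Minimal gap analysis: required evidence minus evidence already on hand.
REQUIRED = {
    "risk_register", "residual_risk_analysis", "data_datasheets",
    "technical_file", "audit_trail_design", "instructions_for_use",
    "oversight_design", "robustness_test_report", "pmm_plan",
}
existing = {"risk_register", "technical_file", "robustness_test_report"}

gaps = sorted(REQUIRED - existing)
print(f"{len(gaps)} artefacts to build:", ", ".join(gaps))
```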

Common Failure Modes

Four failure modes recur in first-time conformity programmes. The first is waiting for clarity. Some Act provisions remain ambiguous and full guidance is still being developed. Organisations that wait for full clarity will miss the timelines. Counter by preparing under reasonable interpretations and planning to refine.

The second is under-resourcing the documentation workstream. The evidence package is large and requires precise drafting. Counter with a dedicated documentation lead and templates.

The third is isolation from existing compliance. AI conformity should leverage existing GDPR, sector-specific, and quality management investments rather than running parallel. Counter with cross-mapped evidence inventories.

The fourth is static conformity. Preparation focuses on getting the first declaration but not on the continuing obligations. Counter by building post-market monitoring infrastructure before, not after, the declaration.

Looking Forward

The next article in Module 1.27 turns to the broader pattern of regulatory submission preparation across multiple regimes. Conformity under the EU AI Act is one major instance; the same disciplines apply to FDA submissions for medical AI, banking regulator filings for credit AI, and the emerging mosaic of national AI laws.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.