This article walks through Articles 9–15 in order and, for each, names the architecture controls that satisfy it. The architect who has read this article can sit in a conformity-assessment review and answer the notified body’s questions artifact by artifact. The architect who has not will face open-ended questions that an earlier, more defensible architecture decision would have closed.
Scope and preconditions
Articles 9–15 apply to high-risk AI systems as classified under Article 6 and Annex III (the eight high-risk use-case families: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, administration of justice).1 The architect’s first question on any new EU-market deployment is: is this system high-risk? Article 6’s classification deep-dive is covered in the Core Stream article EATE-Level-3/M3.4-Art14 and in the AITB-RCM credential; this article assumes the high-risk answer is yes.
The deadlines that matter to the architect: the Act’s transparency and GPAI obligations applied in 2025; most high-risk obligations apply from 2 August 2026 (some Annex III additions by 2027). Architects working now on new deployments should assume high-risk obligations are live by launch.2
Article 9 — Risk management system
What Article 9 requires
Article 9 requires a risk management system that is a continuous iterative process run throughout the entire lifecycle of a high-risk AI system. It must identify and analyse known and foreseeable risks, estimate and evaluate risks that may emerge when used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse, evaluate risks that emerge from the analysis of data gathered from the post-market monitoring system, and adopt appropriate risk management measures.
Architecture controls
- A documented risk register (the AITE-SAT artefact: the risk section of the ADR in Article 23 plus a system-level risk ledger).
- A threat model that enumerates known and foreseeable risks per OWASP LLM Top 10 and MITRE ATLAS (Article 14).
- A misuse catalogue that names reasonably foreseeable misuse patterns for the specific use case.
- A post-market monitoring design (SLIs, eval cadence, incident response per Article 20) whose outputs feed back into the register.
- Risk-treatment decisions recorded as ADRs with explicit acceptance, mitigation, or avoidance choice.
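The register and its treatment decisions can be modelled as a small data structure. A minimal sketch, using illustrative names (`RiskEntry`, `Treatment`, the field set) rather than any AITE-SAT artefact schema:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    AVOID = "avoid"

@dataclass
class RiskEntry:
    risk_id: str
    description: str      # known or foreseeable risk, or misuse pattern
    source: str           # "threat-model", "misuse-catalogue", "post-market"
    severity: int         # 1 (low) .. 5 (critical)
    likelihood: int       # 1 .. 5
    treatment: Treatment
    adr_ref: str          # ADR recording the treatment decision
    last_reviewed: date

@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def open_risks(self) -> list[RiskEntry]:
        # Risks still carried (accepted or under mitigation),
        # ordered by exposure = severity x likelihood.
        return sorted(
            (e for e in self.entries if e.treatment is not Treatment.AVOID),
            key=lambda e: e.severity * e.likelihood,
            reverse=True,
        )
```

The point of the structure is the `source` field: post-market monitoring outputs land in the same register as design-time threats, which is exactly the feedback loop Article 9 requires.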
Article 10 — Data and data governance
What Article 10 requires
Article 10 covers training, validation, and testing datasets: relevance, representativeness, absence of errors, statistical properties, documentation of data collection processes, examination for possible biases, measures to detect and correct biases, identification of possible data gaps or shortcomings, and data governance practices concerning the origin of data. For systems using personal data, it also requires compatibility with the GDPR.
Architecture controls
- A data pipeline architecture (Article 15) that documents every stage from ingestion to index, with per-stage owners.
- Dataset cards (or equivalent) for each training, validation, and test set: origin, licence, collection process, representativeness analysis, bias-assessment results, known gaps.
- Retrieval-index registry entries (Article 21) that carry the corpus manifest and embedding provenance for RAG systems.
- PII classification and redaction controls in the ingest pipeline.
- A bias testing harness (see EATF-Level-1/M1.5-Art06-AI-Ethics-Operationalized) whose outputs attach to each dataset version.
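A dataset card can be as simple as a serialisable record per dataset version, attached alongside the data in the registry. A sketch with illustrative field names, not a mandated schema:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class DatasetCard:
    """Article 10 evidence attached to one dataset version."""
    name: str
    version: str
    split: str                 # "training" | "validation" | "test"
    origin: str                # where the data came from
    licence: str
    collection_process: str
    representativeness: str    # summary of the representativeness analysis
    bias_findings: list[str]   # outputs of the bias testing harness
    known_gaps: list[str]      # identified data gaps or shortcomings

    def to_json(self) -> str:
        # Serialised form stored next to the dataset version in the registry.
        return json.dumps(asdict(self), indent=2)
```

Making the card a versioned artefact (rather than a wiki page) is what lets the Article 11 documentation pack pull it automatically.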
Article 11 — Technical documentation
What Article 11 requires
Technical documentation must be drawn up before the high-risk AI system is placed on the market or put into service, kept up to date, and demonstrate compliance. Annex IV specifies the minimum content: system description, intended purpose, version, developer details, human oversight measures, data used, validation and testing procedures, cybersecurity measures, and the remaining items Annex IV enumerates.
Architecture controls
- The system card is the primary architectural deliverable. It extends Google’s Model Cards for Model Reporting pattern to the full system.3
- Reference architecture diagram (Article 1, 35) with labelled data flows.
- ADR corpus (Article 23) covering every material architectural decision.
- Data flow diagrams covering training, validation, testing, and inference-time retrieval flows.
- Cybersecurity evidence: threat model, penetration-test results, red-team results (Article 14).
- The release manifest (Article 21) that pins which versions of everything were in production at the time of the documentation snapshot.
The technical documentation is auditable. The architect’s job is to make it producible automatically from the platform’s artefacts, not to assemble it manually under deadline pressure.
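Automatic assembly can start as a hard check that every Annex IV item resolves to a live artefact in the platform. A sketch under an assumed repository layout; the item names and paths are illustrative:

```python
from pathlib import Path

# Annex IV-oriented checklist: each documentation item maps to the
# platform artefact expected to produce it (illustrative layout).
REQUIRED_ARTEFACTS = {
    "system_card": "cards/system-card.md",
    "architecture_diagram": "diagrams/reference-architecture.svg",
    "adr_corpus": "adrs/",
    "data_flow_diagrams": "diagrams/data-flows.md",
    "cybersecurity_evidence": "security/threat-model.md",
    "release_manifest": "releases/current.json",
}

def assemble_tech_doc(repo_root: Path) -> dict[str, Path]:
    """Resolve every required artefact; fail loudly if any is missing,
    so a documentation snapshot can never silently ship incomplete."""
    missing, resolved = [], {}
    for item, rel in REQUIRED_ARTEFACTS.items():
        path = repo_root / rel
        if path.exists():
            resolved[item] = path
        else:
            missing.append(item)
    if missing:
        raise FileNotFoundError(f"technical documentation incomplete: {missing}")
    return resolved
```

Run in CI on every promotion, this turns “kept up to date” from a policy statement into a build gate.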
Article 12 — Record-keeping
What Article 12 requires
High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the system, to ensure a level of traceability appropriate to the intended purpose. Logs must enable monitoring of operation, identification of situations that may result in risks, and support post-market monitoring.
Architecture controls
- Prompt/response capture with configurable retention (Article 13 observability).
- Trace capture across the full request path (client, orchestration, retrieval, model, tool).
- Registry event log (every promotion, every rollback, every registry mutation — Article 21).
- Incident log from the SLO/SLI and incident response system (Article 20).
- Cost and latency telemetry (Article 17) with anomaly detection.
- Residency-preserving log storage (Article 18).
Log retention periods are set by the deployer and are typically at least 6 months; certain sector rules (EU banking, pharmaceuticals) extend retention to years. The architect sizes observability storage for the retention budget and residency requirement.
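Sizing that storage is simple arithmetic over event rate, event size, retention window, and replication. A sketch with illustrative figures, not benchmarks:

```python
def log_storage_bytes(requests_per_day: int,
                      events_per_request: int,
                      bytes_per_event: int,
                      retention_days: int,
                      replication_factor: int = 3) -> int:
    """Raw storage needed to hold the Article 12 event log for the
    retention window (before compression)."""
    daily = requests_per_day * events_per_request * bytes_per_event
    return daily * retention_days * replication_factor

# e.g. 100k requests/day, 5 events/request (client, orchestration,
# retrieval, model, tool), 4 KiB per event, 6-month retention:
# ~1.1 TB raw before compression.
estimate = log_storage_bytes(100_000, 5, 4096, 180)
```

Prompt/response capture usually dominates `bytes_per_event`, so the retention policy for full payloads versus trace metadata is the first lever when the budget is tight.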
Article 13 — Transparency and provision of information to deployers
What Article 13 requires
High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately. Transparency includes the provision of instructions for use and the specification of the characteristics, capabilities, and limitations of performance.
Architecture controls
- Model card and system card (Articles 11, 21) made available to deployers.
- Confidence disclosure at the UX layer where the system output is used for decision support.
- Explanation surfaces for high-impact outputs: citations for RAG, contributing-feature explanations where feasible, refusal-reason messages.
- Deployer-facing documentation: integration spec, SLA, incident-contact path, change notification procedure.
- Version-disclosure to deployers when a material change is promoted (model upgrade, prompt change, corpus replacement).
Article 13 is often where user-experience design meets architecture — the architect specifies the information the UX must expose; the UX designer figures out how.
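The version-disclosure control above can be a small structured payload keyed to the release manifest. A sketch with an assumed schema, not a standardised notification format:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ChangeNotice:
    """Notice sent to deployers when a material change is promoted."""
    system_id: str
    release_manifest_ref: str  # pins the promoted versions (registry entry)
    change_kind: str           # "model-upgrade" | "prompt-change" | "corpus-replacement"
    summary: str               # plain-language description of the change
    effective_at: str          # ISO 8601 timestamp of the promotion
    updated_docs: list[str]    # model/system card versions now in force

def notify_deployers(notice: ChangeNotice) -> str:
    # Serialised payload handed to the deployer-facing notification channel.
    return json.dumps(asdict(notice))
```

Tying the notice to `release_manifest_ref` means a deployer can always answer “which version produced this output?”, which is the question Article 13 transparency exists to close.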
Article 14 — Human oversight
What Article 14 requires
High-risk AI systems shall be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the AI system is in use. Measures include enabling the persons to fully understand the capabilities and limitations, to remain aware of the possible tendency of automation bias, to correctly interpret the system’s output, to decide not to use the output or override it, to intervene, and to interrupt the system via a “stop” button.
Architecture controls
- Human-in-the-loop escalation paths for identified case classes (Article 31 responsible-AI patterns).
- A kill-switch (Article 20) that is reachable and periodically tested.
- UX affordances to override, dispute, or escalate outputs.
- Automation-bias mitigations: confidence display, friction on high-impact decisions, mandatory human review for certain categories.
- Training and procedures for the humans who oversee the system (articulated by the deployer; the architect specifies the interface the human uses).
- Operational dashboards for oversight staff.
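The kill-switch reduces to a process-wide gate checked on every request and flippable out-of-band by oversight staff. A minimal single-process sketch; a real deployment would distribute the flag (for example via a configuration service) and log every flip:

```python
import threading

class KillSwitch:
    """Article 14 'stop' control: blocks all inference once tripped."""
    def __init__(self) -> None:
        self._stopped = threading.Event()
        self._reason = ""

    def stop(self, reason: str) -> None:
        # Called by oversight staff (or the periodic drill).
        self._reason = reason
        self._stopped.set()

    def guard(self) -> None:
        # Called at the top of every inference request.
        if self._stopped.is_set():
            raise RuntimeError(f"system halted by oversight: {self._reason}")
```

The “periodically tested” requirement matters as much as the switch itself: an untested stop path is indistinguishable from no stop path.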
Article 15 — Accuracy, robustness, and cybersecurity
What Article 15 requires
High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle. Levels of accuracy and relevant accuracy metrics shall be declared in the accompanying instructions of use.
Architecture controls
- Declared accuracy targets and the eval harness (Article 11) that measures them.
- Robustness to adversarial inputs: OWASP LLM Top 10 defences, MITRE ATLAS threats (Article 14).
- Robustness to distribution drift: drift detection and re-eval cadence (Article 11, 20).
- Cybersecurity controls: input sanitisation, PII redaction, output filtering, supply-chain-security for model weights and third-party services.
- Recovery and fallback: safe degradation, kill-switch, rollback via promotion pipeline (Article 19).
- Reproducibility: given a release manifest, the system’s behaviour can be recreated.
The conformity-assessment evidence pack
The high-risk AI system undergoes a conformity assessment before market placement. Article 43 sets the pathways — either internal control (Annex VI) or notified body assessment (Annex VII) depending on the Annex III category. The evidence pack assembled for the assessment draws from AITE-SAT artefacts:
- Article 9 evidence: risk register, threat model, post-market monitoring plan.
- Article 10 evidence: dataset cards, data pipeline diagram, bias-testing reports.
- Article 11 evidence: system card, ADR corpus, architecture diagram, release manifest archive.
- Article 12 evidence: log architecture spec, retention policy, sample log export.
- Article 13 evidence: deployer-facing documentation, UX specification.
- Article 14 evidence: oversight interface spec, kill-switch spec, override log.
- Article 15 evidence: eval harness spec, accuracy declaration, robustness test results, cybersecurity test results.
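The article-to-artefact mapping above can be encoded so gaps in the pack are machine-detectable before the assessment, not during it. A sketch with illustrative artefact names:

```python
# Mapping from AI Act article to the artefacts that evidence it,
# mirroring the list above (names are illustrative).
EVIDENCE_MAP = {
    "Art9":  ["risk-register", "threat-model", "post-market-plan"],
    "Art10": ["dataset-cards", "pipeline-diagram", "bias-reports"],
    "Art11": ["system-card", "adr-corpus", "arch-diagram", "manifest-archive"],
    "Art12": ["log-spec", "retention-policy", "log-export-sample"],
    "Art13": ["deployer-docs", "ux-spec"],
    "Art14": ["oversight-spec", "kill-switch-spec", "override-log"],
    "Art15": ["eval-spec", "accuracy-declaration", "robustness-results", "security-results"],
}

def missing_evidence(available: set[str]) -> dict[str, list[str]]:
    """Return, per article, the artefacts not yet collected for the pack."""
    gaps = {}
    for article, artefacts in EVIDENCE_MAP.items():
        absent = [a for a in artefacts if a not in available]
        if absent:
            gaps[article] = absent
    return gaps
```

Run against the artefact registry, an empty result is the precondition for opening the conformity assessment.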
The capstone in Article 35 composes a complete evidence pack for a worked example (EU-high-risk HR screening assistant) and demonstrates the full tracing end to end.
Enforcement — what we know and don’t
As of early 2026 the EU AI Act is in force but most high-risk obligations apply from 2 August 2026. The first enforcement cases are still being built. The Italian Garante’s ChatGPT decision (provisional measures in 2023, €15M fine in December 2024 under GDPR) and the early AI Office guidance on Article 6 and GPAI obligations (2024–2025) are the most useful precedents.4 Architects should assume that the EU Commission AI Office and national market-surveillance authorities will look for documentation deficiencies first — Articles 11 and 12 evidence gaps are the easiest to find and cite.
The ICO AI Auditing Framework (UK), the Bank of England Prudential Regulation Authority SS1/23 model-risk supervisory statement, and the Monetary Authority of Singapore’s FEAT principles provide parallel enforcement patterns that a global architect watches.5
Anti-patterns
- Treating Articles 9–15 as a checklist at launch. The obligations are lifecycle-long; a checklist satisfied at launch will not survive the first material change. The architect builds automation that keeps the evidence current.
- Assuming GDPR compliance equals AI Act compliance. The two overlap but are not congruent. Article 10’s bias and representativeness expectations are not GDPR concepts; Article 14’s human oversight is broader than GDPR Article 22.
- Deferring to “legal.” Legal confirms interpretations and signs off on filings; they do not build the architecture that produces the evidence. That is the architect’s job.
- Reliance on vendor responsibility documents. A hyperscaler’s or model provider’s responsibility matrix is informative but not determinative. The deployer — the architect’s employer — is the Article 25 operator of the system and carries the obligations.
Summary
Articles 9–15 of the EU AI Act describe an architecture, not a compliance programme. The architect reads them as a specification and produces the controls, documentation, and evidence they require. The AITE-SAT curriculum gives the architect the artefacts — reference architecture, ADRs, eval harness, registries, SLOs, observability, incident response — that together answer every question a notified body will ask.
Key terms
- Conformity assessment (EU AI Act)
- High-risk AI system
- Technical documentation
- Post-market monitoring
- Human oversight
Learning outcomes
After this article the learner can: explain EU AI Act Articles 9–15 at architect depth; classify ten architectural controls against the seven articles; evaluate a deployment’s evidence pack against the articles; design a conformity-assessment evidence pack for an Annex III use case.
Footnotes
1. Regulation (EU) 2024/1689 (AI Act), Article 6 and Annex III.
2. Regulation (EU) 2024/1689 (AI Act), Article 113 (application timeline).
3. Mitchell et al., “Model Cards for Model Reporting,” FAT* 2019.
4. Italian Garante ChatGPT decisions; European Commission AI Office draft guidance (2024–2025).
5. ICO AI Auditing Framework (UK); Bank of England PRA SS1/23; MAS FEAT principles.