COMPEL Glossary / human-oversight-art-14

Human oversight (Art. 14)

Under Regulation (EU) 2024/1689, the provider-designed measures that allow natural persons to understand the capacities and limitations of a high-risk AI system, monitor its operation, and intervene in or interrupt that operation.

What this means in practice

Article 14 human oversight is distinct from the general human-in-the-loop concept because Article 14 specifies provider obligations rather than deployer patterns.

Synonyms

EU AI Act human oversight, Article 14 human oversight

See also

  • High-risk AI system — Under Regulation (EU) 2024/1689, an AI system falling under Article 6(1) because it is a safety component of, or is itself, a product covered by Annex I Union harmonization legislation, or under Article 6(2) because its use case falls within Annex III — unless exempted by the Article 6(3) derogation.
  • Provider — Under Regulation (EU) 2024/1689, a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model — or has it developed — and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
  • Deployer — Under Regulation (EU) 2024/1689, a natural or legal person, public authority, agency, or other body using an AI system under its authority — except where the AI system is used in the course of a personal non-professional activity.
  • Risk management system (Art. 9) — Under Regulation (EU) 2024/1689, a continuous, iterative process running across the entire lifecycle of a high-risk AI system that identifies foreseeable risks, estimates and evaluates risks, adopts risk-management measures, and documents residual risks.