The COMPEL Glossary Graph visualizes relationships between framework terminology, showing how concepts interconnect across domains, stages, and pillars. Term nodes cluster by pillar affiliation, while cross-references reveal semantic dependencies — for example, how risk appetite connects to control effectiveness, model governance, and assurance requirements. This network representation helps practitioners navigate the framework vocabulary and see that COMPEL terminology forms a coherent conceptual system rather than a collection of isolated definitions.
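The structure described above — pillar-tagged term nodes linked by cross-reference edges — can be sketched as a plain adjacency list. This is a minimal illustration, not the framework's actual data model; the term names and pillar labels below are taken from the example in the text or invented for demonstration.

```python
from collections import defaultdict

# Each glossary term carries a pillar tag; pillar names here are illustrative.
terms = {
    "risk appetite": "Governance",
    "control effectiveness": "Controls",
    "model governance": "Governance",
    "assurance requirements": "Assurance",
}

# "See also" cross-references become edges between term nodes.
edges = [
    ("risk appetite", "control effectiveness"),
    ("risk appetite", "model governance"),
    ("risk appetite", "assurance requirements"),
]

# Undirected adjacency list, so cross-references can be walked in either direction.
adjacency = defaultdict(set)
for a, b in edges:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Group terms by pillar, mirroring how nodes cluster in the visualization.
clusters = defaultdict(set)
for term, pillar in terms.items():
    clusters[pillar].add(term)
```

From here, a renderer would lay out each `clusters` group as one visual cluster and draw `adjacency` entries as links between them.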
GPAI (general-purpose AI model)
Under Regulation (EU) 2024/1689, an AI model — including where trained with a large amount of data using self-supervision at scale — that displays significant generality and can competently perform a wide range of distinct tasks, regardless of how it is placed on the market.
Synonyms
general-purpose AI model, GPAI model, foundation model (EU AI Act)
See also
- GPAI model with systemic risk — Under Regulation (EU) 2024/1689, a general-purpose AI model classified as having systemic risk because it meets or exceeds high-impact capability thresholds — the initial benchmark being cumulative training compute greater than 10^25 FLOPs — or because it is designated as such by the Commission.
- Provider — Under Regulation (EU) 2024/1689, a natural or legal person, public authority, agency, or other body that develops an AI system or a general-purpose AI model — or has it developed — and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.
- High-risk AI system — Under Regulation (EU) 2024/1689, an AI system falling under Article 6(1) because it is a safety component of, or is itself, a product covered by Annex I Union harmonisation legislation, or under Article 6(2) because its use case falls within Annex III — unless exempted by the Article 6(3) derogation.
- Transparency duty (Art. 50) — Under Regulation (EU) 2024/1689, specific transparency obligations: providers of AI systems interacting with natural persons must disclose that fact; providers of emotion-recognition or biometric-categorisation systems must inform affected persons; deepfake or AI-generated content intended to inform the public must be marked as synthetic.
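The systemic-risk entry above cites a concrete numeric benchmark: cumulative training compute above 10^25 FLOPs. A rough back-of-the-envelope check can be sketched as follows. The `6 × parameters × tokens` FLOP estimate is a widely used heuristic for dense transformer training, not part of the Regulation, and the model sizes below are hypothetical.

```python
# Initial systemic-risk benchmark from Regulation (EU) 2024/1689.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative training compute: ~6 FLOPs per parameter per token.

    This is a common community heuristic for dense transformers, not a
    method prescribed by the Regulation.
    """
    return 6.0 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimate crosses the 10^25 FLOP benchmark."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 15T tokens
# lands at 6.3e24 FLOPs, just under the benchmark.
flops_70b = estimated_training_flops(70e9, 15e12)
```

Note that crossing the compute benchmark creates a presumption, and the Commission can also designate a model as systemic-risk on other grounds, so this arithmetic is only one input to the classification.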