For governance professionals, GPAI compliance presents unique challenges: the obligations apply to the model itself (not to a specific deployment), the systemic risk threshold introduces a novel risk category, and the energy consumption reporting requirement brings sustainability into the regulatory framework. This article provides the detailed analysis needed to design and implement a GPAI compliance programme.
Understanding the GPAI Regulatory Architecture
What Is a GPAI Model?
Article 3(63) defines a GPAI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”
The key characteristics are:
- Significant generality: The model is not designed for a single task
- Competent performance across diverse tasks: The model can perform well in multiple domains
- Integrability: The model can be incorporated into downstream systems
This definition captures large language models (GPT, Claude, Gemini, Llama, Mistral), large vision models, multimodal models, and potentially large code generation models. It may also capture smaller models that nonetheless demonstrate significant generality.
Provider vs. Deployer in the GPAI Context
The GPAI regulatory framework creates a layered obligation structure:
GPAI Model Provider (the entity that develops and provides the model):
- Bears primary obligations under Articles 53-55
- Must provide documentation and information to downstream providers
- Must monitor for systemic risks (if applicable)
- Must cooperate with the AI Office
Downstream AI System Provider (the entity that integrates the GPAI model into a specific AI system):
- Bears obligations for the AI system under the standard risk classification framework
- Relies on the GPAI model provider’s documentation for their own compliance
- Must monitor the model’s behaviour within their specific deployment context
Deployer (the entity that uses the AI system):
- Bears deployer obligations under Article 26
- Has no direct obligations in respect of the GPAI model itself; deployer obligations attach to the AI system being used
This layered structure means that compliance flows through the supply chain. If a GPAI model provider fails to provide adequate documentation, downstream providers face compliance challenges for their own AI systems.
Standard GPAI Obligations (Article 53)
All GPAI model providers — regardless of systemic risk classification — must comply with the following obligations. These took effect on 2 August 2025.
Technical Documentation (Article 53(1)(a))
Providers must draw up and keep up to date technical documentation in accordance with Annex XI. The documentation must be provided to the AI Office and national competent authorities upon request.
Annex XI specifies the following minimum content:
- Information on the model (name, version, date, description of architecture)
- Description of the model’s intended tasks and type of systems it can be integrated into
- The acceptable use policy applicable to the model
- The date of release and methods of distribution
- Architecture and relevant computational aspects including model size, context window, and parameter count
- Information on the data used for training, including a description of the main data collection and curation methodologies
- Details on hardware and software used for training, including precision and computational resources
- Information on known or estimated energy consumption of the model
- Known limitations of the model
Governance professional implementation guidance:
The technical documentation requirement is substantial but can be partially automated. Model training pipelines should be instrumented to capture metadata (hyperparameters, training duration, hardware utilisation, energy consumption) automatically. Model cards (as proposed by Mitchell et al., 2019) provide a useful format, but must be extended to cover all Annex XI elements.
Maintain documentation as a living artefact. The requirement to “keep up to date” means that significant model updates (fine-tuning, additional training, architecture changes) must be reflected in updated documentation.
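As a concrete illustration, the automated metadata capture and living-documentation practice described above can be sketched as a small, version-aware model record. The structure and field names are illustrative, approximating Annex XI content rather than reproducing any official template.

```python
from dataclasses import dataclass, field, replace
from typing import List, Optional

# Illustrative sketch of an automatically captured model record; the
# field names approximate Annex XI content but are not an official template.
@dataclass(frozen=True)
class ModelRecord:
    name: str
    version: str
    release_date: str
    architecture: str
    parameter_count: int
    context_window: int
    training_data_summary: str
    energy_kwh: Optional[float] = None
    known_limitations: List[str] = field(default_factory=list)

    def updated(self, new_version: str, limitation_note: str) -> "ModelRecord":
        """Produce a new documentation revision for a significant model
        update, keeping the record a living artefact rather than mutating
        the published one."""
        return replace(
            self,
            version=new_version,
            known_limitations=self.known_limitations + [limitation_note],
        )
```

Making the record immutable and issuing a fresh revision per update gives an audit trail of what was documented at each model version.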
Downstream Provider Information (Article 53(1)(b))
Providers must provide information and documentation to downstream AI system providers to enable them to understand the model’s capabilities and limitations and comply with their own obligations.
This information must include:
- The model’s intended and foreseeable uses
- Known capabilities and limitations
- Integration guidance and best practices
- Performance characteristics per use case or domain
- Known risks and recommended mitigations
- Guidance on responsible fine-tuning and adaptation
Governance professional implementation guidance:
This obligation creates a contractual and documentation interface between the GPAI model provider and downstream providers. Governance professionals should ensure that:
- Downstream provider documentation packages are standardised and version-controlled
- Documentation is updated when the model is updated
- A process exists for downstream providers to request additional information
- Contractual arrangements with downstream providers include documentation obligations
Copyright Compliance Policy (Article 53(1)(c))
Providers must put in place a policy to comply with Union copyright law, specifically the Digital Single Market Directive (EU) 2019/790.
Key elements of the copyright policy:
- A process to identify and respect rightsholders’ reservations of rights (opt-outs from text and data mining under Article 4(3) of the DSM Directive)
- Technology implementation to detect and process opt-out signals (robots.txt, machine-readable declarations)
- A publicly available contact point for rightsholders
- A complaint handling process for rightsholders
- An audit trail of opt-out processing
This obligation is technically challenging because opt-out declarations may not have been captured at the time training data was collected. Governance professionals should work with legal counsel to develop a pragmatic policy that demonstrates good faith compliance, documents the technology used for opt-out detection, and establishes remediation procedures.
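One building block of such a policy, robots.txt-based opt-out detection, can be sketched with the Python standard library. The crawler name `ExampleTrainingBot` is hypothetical, and treating a robots.txt disallow for a training crawler as an Article 4(3) rights reservation is a policy assumption to be agreed with legal counsel, not settled law.

```python
from urllib.robotparser import RobotFileParser

def is_opted_out(robots_txt: str, url: str,
                 agent: str = "ExampleTrainingBot") -> bool:
    """Treat a robots.txt disallow for the training crawler as a
    text-and-data-mining opt-out signal (a policy assumption)."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    # can_fetch returns False when the path is disallowed for this agent
    return not parser.can_fetch(agent, url)
```

In practice each decision would also be written to the audit trail mentioned above, alongside checks for machine-readable declarations beyond robots.txt.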
Training Data Summary (Article 53(1)(d))
Providers must publish a sufficiently detailed summary of training data according to a template provided by the AI Office.
Key considerations:
- The summary must be “sufficiently comprehensive to provide a meaningful understanding” — vague generalities are insufficient
- It must not reveal trade secrets or specific proprietary data
- The AI Office template provides the minimum structure
- The summary must be publicly available (not just available to authorities)
This requirement creates a tension between transparency and competitive sensitivity. The governance professional’s role is to ensure that the published summary is genuinely informative while protecting legitimate trade secrets. The standard for “sufficiently detailed” will likely be clarified through AI Office guidance and, eventually, enforcement practice.
Energy Consumption Reporting (Annex XI)
As part of the Annex XI technical documentation, providers must report the known or estimated energy consumption of the model. This is a pioneering regulatory requirement that brings AI sustainability into the compliance framework.
Reporting should cover:
- Total energy consumed during training (kWh)
- Computational resources used (GPU type, quantity, training duration)
- Energy source mix and data centre location (where known)
- Estimated carbon emissions (using appropriate methodology)
- Per-inference energy consumption (where measurable or estimable)
- Methodology used for measurement or estimation
Governance professional implementation guidance:
Energy consumption reporting requires instrumentation that many organisations do not currently have. Key implementation steps:
- Instrument training infrastructure to capture energy consumption data at the job level
- Work with cloud providers to obtain energy consumption data for cloud-based training
- Adopt a recognised methodology for carbon emissions calculation (GHG Protocol, PUE-based estimation)
- Establish baseline measurements for comparison across model versions
- Integrate energy reporting into the model documentation lifecycle
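The PUE-based estimation step can be sketched as follows. The default PUE (1.2) and grid carbon intensity (0.3 kg CO2/kWh) are illustrative placeholders that should be replaced with measured or provider-supplied figures under a recognised methodology.

```python
# PUE-based sketch of training energy and emissions estimation; the
# default PUE and grid intensity values are illustrative assumptions.
def training_footprint(gpu_count, avg_gpu_power_w, hours,
                       pue=1.2, grid_kgco2_per_kwh=0.3):
    """Estimate facility-level energy (kWh) and emissions (kg CO2)
    for one training run from average GPU power draw."""
    it_energy_kwh = gpu_count * avg_gpu_power_w * hours / 1000.0
    facility_kwh = it_energy_kwh * pue            # data-centre overhead
    co2_kg = facility_kwh * grid_kgco2_per_kwh
    return {"energy_kwh": round(facility_kwh, 1), "co2_kg": round(co2_kg, 1)}
```

Capturing these figures per training job, as recommended above, allows them to roll up into the model-level documentation automatically.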
The energy consumption requirement also aligns with the EU Corporate Sustainability Reporting Directive (CSRD), creating synergies for organisations that are already subject to environmental reporting obligations.
Systemic Risk Classification (Article 51)
The FLOP Threshold
A GPAI model is presumed to have systemic risk when the cumulative amount of computation used for its training, measured in floating point operations (FLOPs), is greater than 10^25.
Contextualising the threshold:
As of the regulation’s adoption, models exceeding 10^25 FLOPs include the largest frontier models from leading AI laboratories. The threshold is intentionally set to capture only the most capable models — those whose broad capabilities and widespread deployment create risks at a systemic (EU-wide) level.
The threshold will be updated by the Commission through delegated acts to reflect technological developments. As training efficiency improves and larger models become more common, the threshold may be adjusted.
Measuring FLOPs:
Computing cumulative training FLOPs requires:
- Recording the total number of floating point operations performed during all training runs (including failed and exploratory runs)
- Including compute for pre-training, fine-tuning, and reinforcement learning from human feedback (RLHF) phases
- Using a consistent measurement methodology documented in the technical documentation
Commission Designation
The Commission may also designate a GPAI model as having systemic risk on the basis of the criteria in Annex XIII (Article 51(1)(b)), which go beyond training compute:
- Number of parameters
- Quality and size of the training dataset
- Number of registered business users and end users
- Input and output modalities (multimodality increases risk)
- Benchmarks and state-of-the-art capabilities
- Cross-border reach and potential impact
This discretionary designation power means that models below the FLOP threshold can still be classified as systemic risk if their capabilities and reach warrant it.
Enhanced Obligations for Systemic Risk Models (Article 55)
In addition to all standard GPAI obligations, providers of models with systemic risk must comply with enhanced requirements:
Model Evaluation (Article 55(1)(a))
Providers must perform model evaluation in accordance with standardised protocols and tools, including adversarial testing. The evaluation must assess:
- Capabilities: What can the model do across diverse domains?
- Harmful content generation: Can the model generate content that facilitates harm?
- CBRN risks: Can the model provide meaningful assistance in creating chemical, biological, radiological, or nuclear threats?
- Cyber offence capabilities: Can the model assist in cyberattack development?
- Adversarial robustness: How does the model respond to adversarial inputs?
- Manipulation potential: Can the model be used to manipulate or deceive at scale?
Implementation guidance:
Evaluation should follow standardised protocols. The AI Office is developing codes of practice that will provide specific guidance. In the interim, governance professionals should:
- Adopt evaluation frameworks such as the UK AI Safety Institute’s evaluation protocols, NIST’s evaluation methodologies, or the MLCommons AI Safety benchmark
- Engage external red-teaming organisations for independent adversarial assessment
- Document evaluation methodology, results, and remediation actions
- Repeat evaluations when the model is updated or when new risks are identified
Systemic Risk Assessment and Mitigation (Article 55(1)(b))
Providers must assess and mitigate possible systemic risks at Union level. Systemic risks may arise from:
- The model’s capabilities (what it can do)
- The model’s reach (how many systems and users it affects)
- The model’s use patterns (how it is actually deployed)
Implementation guidance:
Systemic risk assessment should consider:
- What would happen if this model were simultaneously misused across its entire deployment footprint?
- What systemic effects could arise from widespread reliance on this model’s outputs?
- What would happen if this model produced systematically biased or incorrect outputs at scale?
- What risks arise from the model’s interaction with other AI systems in multi-model architectures?
Cybersecurity Protection (Article 55(1)(d))
Providers must ensure adequate cybersecurity for both the model and its physical infrastructure. This covers:
- Model weight protection (access controls, encryption at rest and in transit)
- Training data security
- Defence against adversarial attacks (prompt injection, data poisoning, model extraction, model inversion)
- Infrastructure security (penetration testing, access management, monitoring)
- Supply chain security (dependencies, cloud provider security)
Incident Reporting (Article 55(1)(c))
Providers must track, document, and report serious incidents to the AI Office and national authorities. A serious incident is one that indicates risk to health, safety, fundamental rights, the environment, or democratic processes at Union level.
Implementation guidance:
Establish a monitoring and incident response system that:
- Continuously monitors model behaviour for anomalies
- Classifies incidents by severity against defined criteria
- Automatically alerts the response team for incidents above the reporting threshold
- Produces incident reports in the format required by the AI Office
- Maintains a complete incident log with investigation records
- Conducts root cause analysis and implements corrective actions
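The severity classification step above can be sketched minimally as follows. The two-step rule (affected area plus Union-level effect) and the area names are illustrative policy choices keyed to the serious-incident categories described earlier, not official criteria.

```python
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    SERIOUS = 3  # reportable to the AI Office and national authorities

# Areas mirroring the serious-incident categories above; illustrative only.
REPORTABLE_AREAS = {"health", "safety", "fundamental_rights",
                    "environment", "democratic_processes"}

def classify_incident(affected_areas, union_level):
    """Classify an incident from the areas it affects and whether its
    effects reach Union level (illustrative two-step rule)."""
    affected = REPORTABLE_AREAS & set(affected_areas)
    if affected and union_level:
        return Severity.SERIOUS
    if affected:
        return Severity.MEDIUM
    return Severity.LOW

def must_report(severity):
    """Only SERIOUS incidents cross the external reporting threshold."""
    return severity is Severity.SERIOUS
```

Defining the rule in code makes the reporting threshold auditable and lets the monitoring pipeline trigger alerts consistently.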
Downstream Deployer Obligations
Organisations that deploy GPAI-based AI systems face specific considerations:
Due diligence on the GPAI model provider:
- Has the provider published the required training data summary?
- Has the provider provided adequate downstream documentation?
- Is the provider complying with energy consumption reporting?
- For systemic risk models: is the provider conducting evaluations and adversarial testing?
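The due diligence questions above can be operationalised as a simple checklist function. The provider record keys and check names here are hypothetical paraphrases of those questions, not an official list.

```python
# Illustrative due-diligence check of a GPAI model provider; the record
# keys and check names are hypothetical, paraphrasing the questions above.
def provider_due_diligence(provider, systemic_risk):
    """Return the list of failed checks (empty means all checks passed)."""
    checks = {
        "training data summary published":
            provider.get("training_data_summary_published", False),
        "downstream documentation provided":
            provider.get("downstream_docs_provided", False),
        "energy consumption reported":
            provider.get("energy_reported", False),
    }
    if systemic_risk:
        checks["evaluations and adversarial testing conducted"] = \
            provider.get("evaluations_conducted", False)
    return [name for name, ok in checks.items() if not ok]
```

Returning the failed checks rather than a pass/fail flag gives the deployer a concrete remediation list to raise with the provider.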
Integration risk management:
- The deployer must assess risks introduced by integrating the GPAI model into their specific system
- The GPAI model provider’s risk assessment covers model-level risks; the deployer must assess deployment-specific risks
- Fine-tuning or adaptation of the model may introduce new risks not covered by the provider’s assessment
Monitoring obligations:
- Monitor the model’s behaviour within the specific deployment context
- Report anomalies or incidents to the GPAI model provider
- Maintain logs and monitoring data
COMPEL Stage Alignment
GPAI compliance maps across the full COMPEL cycle:
| COMPEL Stage | GPAI Compliance Activities |
|---|---|
| Calibrate | Identify GPAI models in use, assess systemic risk threshold, establish compliance baseline |
| Organize | Designate GPAI compliance lead, establish AI Office liaison, design copyright policy governance |
| Model | Design evaluation protocols, adversarial testing programme, energy measurement methodology |
| Produce | Execute evaluations, produce documentation, publish training data summary and energy data |
| Evaluate | Review evaluation results, validate cybersecurity measures, assess risk mitigation effectiveness |
| Learn | Integrate evaluation findings, update models and documentation, improve energy efficiency |
The GPAI compliance programme should be embedded within the broader COMPEL governance cycle, not operated as a separate workstream. The governance committee should review GPAI compliance alongside high-risk system compliance, ensuring consistent governance across the organisation’s entire AI portfolio.
Looking Forward
The GPAI regulatory landscape will continue to evolve. The AI Office is developing codes of practice for GPAI model providers, and harmonised standards specifically for GPAI are in development. Governance professionals should:
- Monitor AI Office publications and guidance
- Participate in industry consultations on codes of practice
- Track the FLOP threshold for potential updates
- Assess new models against the systemic risk criteria before deployment
- Build adaptability into the compliance programme to accommodate regulatory evolution
The GPAI obligations represent the EU AI Act’s most forward-looking provisions — they address the AI models that are currently transforming every industry and every use case. Governance professionals who master these obligations position their organisations not just for compliance, but for responsible and sustainable use of the most powerful AI technologies available.