Governance Control Catalog
54 controls across 8 categories with stage alignment, evidence requirements, and remediation guidance.
AI Strategy Alignment Review
Verify that all AI initiatives align with the enterprise AI strategy document and business objectives. Assess strategic fit using defined prioritization criteria.
Implementation Guidance
Establish a quarterly strategy alignment review board. Each AI initiative must reference at least one strategic objective from the AI charter. Use the AI Use Case Prioritization Matrix to score alignment.
Evidence Types
Strategy alignment matrix, Board approval minutes, Risk appetite statement
Audit Frequency
quarterly
Remediation
Pause misaligned initiatives. Conduct realignment workshop within 15 business days. Require executive sponsor re-approval before resuming.
Executive AI Charter Maintenance
Ensure the Executive AI Charter is current, board-approved, and communicated across the organization. The charter defines AI vision, boundaries, and investment principles.
Implementation Guidance
Review the charter annually or upon significant strategic shifts. Obtain board-level sign-off. Distribute to all department heads and make available on the governance intranet.
Evidence Types
Sponsorship charter, Succession plan, Decision authority matrix
Audit Frequency
annual
Remediation
If charter review is overdue by more than 30 days, escalate to board level. Appoint interim governance authority within 5 business days.
AI Investment Portfolio Governance
Oversee the AI investment portfolio to ensure balanced risk-return profiles, prevent over-concentration in single use cases, and align spending with strategic priorities.
Implementation Guidance
Maintain a live AI investment dashboard. Conduct monthly portfolio reviews comparing planned vs. actual spend. Flag initiatives exceeding budget thresholds by more than 15%.
Evidence Types
Value thesis register, ROI tracking reports, Investment committee minutes
Audit Frequency
quarterly
Remediation
Use cases with negative ROI after 90 days require executive review. Document continue/pivot/retire decision with rationale.
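The 15% budget-variance flag in the guidance above can be sketched as a simple check over the portfolio dashboard data; the tuple layout and initiative names below are illustrative, not part of the catalog:

```python
def flag_overruns(initiatives, threshold=0.15):
    """Return names of initiatives whose actual spend exceeds plan by more
    than `threshold` (default 15%, per the portfolio guidance)."""
    flagged = []
    for name, planned, actual in initiatives:
        # Guard against zero/negative plans before computing the ratio.
        if planned > 0 and (actual - planned) / planned > threshold:
            flagged.append(name)
    return flagged
```

A monthly review job could run this over the live dashboard export and route the flagged list to the investment committee.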
Stakeholder Engagement Governance
Ensure systematic engagement of all stakeholder groups affected by AI initiatives, including internal teams, customers, regulators, and communities.
Implementation Guidance
Map stakeholders using the COMPEL stakeholder mapping tool at initiative start. Schedule engagement activities per stakeholder group. Document engagement outcomes and concerns raised.
Evidence Types
Stakeholder map, Engagement plan, Feedback log
Audit Frequency
quarterly
Remediation
If stakeholder engagement score drops below 50%, escalate to CoE Lead. Conduct emergency stakeholder outreach within 5 business days.
AI Ambition and Vision Communication
Validate that the organizational AI ambition statement is regularly communicated and understood across all business units, measured through awareness surveys.
Implementation Guidance
Distribute the AI Ambition Statement during onboarding and quarterly town halls. Measure understanding through pulse surveys targeting at least 70% awareness.
Evidence Types
AI ambition statement, Survey results, Distribution records
Audit Frequency
annual
Remediation
Awareness below 50% triggers mandatory re-communication campaign. CoE Lead must present updated comms plan within 10 business days.
Board-Level AI Reporting
Provide regular, structured AI transformation reporting to the board with risk, value, and compliance summaries using standardized reporting templates.
Implementation Guidance
Deliver quarterly board report covering: AI portfolio status, risk posture, compliance state, value realization, and incident summary. Use standardized template.
Evidence Types
Board report, Risk summary dashboard, Compliance posture report
Audit Frequency
quarterly
Remediation
If board report is delayed beyond 10 business days, CoE Lead must provide interim status briefing to executive sponsor.
AI System Registration and Inventory
Maintain a comprehensive and current inventory of all AI systems, including shadow AI, third-party services, and embedded AI components within vendor products.
Implementation Guidance
Require registration of all AI systems before deployment. Conduct quarterly shadow AI discovery sweeps. Track system status, risk tier, owner, and review dates in the central registry.
Evidence Types
System registry export, Shadow AI scan report, Registration log
Audit Frequency
quarterly
Remediation
Unregistered systems in production require immediate registration or decommission within 10 business days.
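As an illustration of the registration-before-deployment rule, here is a minimal registry sketch; the `AISystemRecord` fields and the use of `PermissionError` are assumptions for the example, not prescribed by the control:

```python
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    system_id: str
    owner: str
    risk_tier: str
    status: str = "registered"


class AIRegistry:
    """Central registry; deployment authorization is refused for
    systems that have never been registered."""

    def __init__(self):
        self._records = {}

    def register(self, record):
        self._records[record.system_id] = record

    def authorize_deployment(self, system_id):
        rec = self._records.get(system_id)
        if rec is None:
            raise PermissionError(f"{system_id} is not registered")
        return rec
```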
Stage Gate Enforcement
Enforce stage gate reviews with mandatory checkpoints, evidence submission, and documented decisions before stage transitions.
Implementation Guidance
No stage transition without completed gate review. All mandatory checkpoints must pass. Document all decisions with rationale and dissenting opinions.
Evidence Types
Gate review record, Evidence submission log, Gate decision document
Audit Frequency
continuous
Remediation
Failed gate review requires remediation plan within 10 business days. Second consecutive failure triggers executive sponsor escalation.
Incident Response Protocol
Establish and test AI-specific incident response procedures including detection, containment, investigation, remediation, and post-incident review.
Implementation Guidance
Define AI incident severity levels (P1-P4). Maintain a 24/7 escalation matrix. Conduct tabletop exercises quarterly. Post-incident reviews must produce at least one control improvement.
Evidence Types
Incident log, Severity classification record, Post-incident review report
Audit Frequency
quarterly
Remediation
Unresolved P1 incidents after 24 hours escalate to executive sponsor. Recurring incidents of same type require systemic remediation plan.
Change Management for AI Systems
Control changes to AI systems through a formal change advisory process that evaluates impact on performance, fairness, safety, and downstream systems.
Implementation Guidance
Changes to production models of high-risk systems require Change Advisory Board review; low-risk changes follow an expedited review path. Document rollback procedures for every change.
Evidence Types
Change request log, Impact assessment, Approval record, Rollback documentation
Audit Frequency
continuous
Remediation
Unauthorized changes require immediate incident report. Conduct root cause analysis and strengthen access controls within 5 business days.
Training and Competency Assurance
Ensure all personnel involved in AI governance and operations meet role-specific competency requirements through structured training programs.
Implementation Guidance
Define competency requirements per role. Track training completion with minimum 90% target for impacted users. Conduct annual competency reassessment.
Evidence Types
Training completion records, Competency assessment results, Certification records
Audit Frequency
quarterly
Remediation
Personnel below competency threshold must complete remedial training within 30 days. Restrict system access for critical roles until competency confirmed.
Evidence Repository Management
Maintain a centralized, tamper-evident evidence repository with retention policies and chain-of-custody controls for all governance artifacts.
Implementation Guidance
Store all governance evidence in a centralized repository with version control, access logging, and a minimum 7-year retention period. Implement an immutable audit trail.
Evidence Types
Repository access log, Retention policy, Chain-of-custody records
Audit Frequency
quarterly
Remediation
Evidence integrity failures require immediate investigation. Restore from backup and re-verify chain of custody within 48 hours.
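One way to make a repository tamper-evident, as this control requires, is a hash chain in which each record commits to its predecessor, so editing any past entry invalidates every later hash. A minimal sketch, with an illustrative record layout:

```python
import hashlib
import json


class EvidenceChain:
    """Append-only evidence log where each record hashes its predecessor."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []

    def append(self, artifact):
        prev = self.records[-1]["hash"] if self.records else self.GENESIS
        payload = json.dumps({"artifact": artifact, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.records.append({"artifact": artifact, "prev": prev, "hash": digest})

    def verify(self):
        """Recompute the whole chain; any edited record breaks verification."""
        prev = self.GENESIS
        for rec in self.records:
            payload = json.dumps({"artifact": rec["artifact"], "prev": prev},
                                 sort_keys=True)
            if rec["prev"] != prev or \
               hashlib.sha256(payload.encode()).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True
```

A real deployment would anchor the head hash in write-once storage; the sketch only shows the chaining idea.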
Regulatory Mapping and Tracking
Map all AI systems to applicable regulations (EU AI Act, GDPR, sector-specific rules) and track compliance status with automated alerting for regulatory changes.
Implementation Guidance
Maintain a regulatory register mapping each AI system to applicable laws. Subscribe to regulatory change feeds. Conduct impact assessments within 30 days of new regulation publication.
Evidence Types
Regulatory map, Gap analysis report, Remediation tracker
Audit Frequency
quarterly
Remediation
Critical compliance gaps require remediation plan within 10 business days. Report material gaps to legal counsel and executive sponsor.
EU AI Act Risk Classification
Classify all AI systems per EU AI Act risk tiers (unacceptable, high, limited, minimal) with documented rationale and conformity assessment documentation.
Implementation Guidance
Classify every registered AI system per Annex III criteria. High-risk systems require conformity assessment. Document classification rationale with legal review.
Evidence Types
Classification register, Risk tier justification documents, Conformity assessment records
Audit Frequency
quarterly
Remediation
Unclassified systems may not proceed past Model stage. Reclassification triggers require reassessment within 15 business days.
Privacy Impact Assessment
Conduct Data Protection Impact Assessments (DPIAs) for AI systems processing personal data in alignment with GDPR and applicable privacy regulations.
Implementation Guidance
Trigger a DPIA for any AI system processing personal data at scale. Include data minimization analysis, purpose limitation review, and data subject rights impact. Obtain DPO sign-off.
Evidence Types
DPIA report, DPO sign-off, Privacy risk register
Audit Frequency
annual
Remediation
Systems with unacceptable privacy risk may not deploy. Conduct privacy remediation and re-assess within 20 business days.
Audit Trail Completeness
Ensure all AI governance decisions, model changes, access events, and control activities generate immutable audit trail entries meeting regulatory retention requirements.
Implementation Guidance
Implement structured logging for all governance events. Use append-only storage for audit records. Validate completeness monthly through automated gap analysis. Retain records per regulatory requirements (minimum 5 years).
Evidence Types
Audit log completeness report, Retention compliance certificate, Trail integrity verification
Audit Frequency
quarterly
Remediation
Audit trail gaps require immediate investigation. Reconstruct missing entries from backup sources within 48 hours.
Sector-Specific Compliance Validation
Validate compliance with sector-specific AI regulations and guidelines (financial services, healthcare, critical infrastructure) beyond horizontal AI legislation.
Implementation Guidance
Identify applicable sector-specific requirements during system registration. Engage sector compliance specialists for high-risk deployments. Document sector-specific testing and validation procedures.
Evidence Types
Sector compliance report, Specialist assessor sign-off, Sector-specific test results
Audit Frequency
annual
Remediation
Sector compliance failures require engagement of specialist counsel within 10 business days. Suspend affected system operations until remediated.
Cross-Border Data Transfer Governance
Govern cross-border AI data transfers in compliance with applicable data sovereignty and transfer regulations including SCCs and adequacy decisions.
Implementation Guidance
Identify all cross-border data flows in AI pipelines. Assess transfer legality per jurisdiction. Implement appropriate safeguards (SCCs, BCRs, adequacy decisions).
Evidence Types
Transfer impact assessment, Adequacy decision records, Standard contractual clauses
Audit Frequency
annual
Remediation
Illegal data transfers must cease immediately. Implement alternative transfer mechanism within 30 business days or relocate processing.
Model Performance Monitoring
Continuously monitor AI model performance against defined KPIs including accuracy, latency, throughput, and drift detection with automated alerting on threshold breaches.
Implementation Guidance
Deploy monitoring dashboards for each production model. Set alerting thresholds at 2 standard deviations from baseline. Implement data drift detection using statistical tests (PSI, KL divergence).
Evidence Types
Performance dashboard, Drift detection report, Alert trigger log
Audit Frequency
continuous
Remediation
Model drift exceeding thresholds triggers retraining or retirement review. Performance degradation must be investigated within 5 business days.
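Drift detection via PSI, mentioned in the guidance above, can be sketched as follows. The equal-width binning scheme is an assumption, and the conventional 0.1/0.25 interpretation bands noted in the docstring are rules of thumb, not thresholds mandated by this control:

```python
import math


def psi(baseline, current, bins=10):
    """Population Stability Index between two numeric samples.

    PSI < 0.1 is commonly read as stable, 0.1-0.25 as moderate drift,
    and > 0.25 as significant drift warranting review.
    """
    lo, hi = min(baseline), max(baseline)
    # Bin edges fixed from the baseline range so both samples share bins.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index via edges
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((cf - bf) * math.log(cf / bf) for bf, cf in zip(b, c))
```

In a monitoring job, `baseline` would be the training-time feature distribution and `current` a recent production window, with an alert raised when the returned value crosses the chosen threshold.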
AI System Security Hardening
Apply security controls specific to AI systems including model access controls, adversarial attack protection, prompt injection prevention, and inference endpoint security.
Implementation Guidance
Conduct AI-specific penetration testing including adversarial input testing. Implement rate limiting on inference endpoints. Apply input sanitization for all user-facing models.
Evidence Types
Security assessment report, Penetration test results, Access control matrix
Audit Frequency
quarterly
Remediation
Critical vulnerabilities require patching within 48 hours. High vulnerabilities within 10 business days. Conduct post-remediation verification.
Data Pipeline Integrity
Ensure data pipelines feeding AI models maintain data quality, lineage tracking, and integrity verification from source through feature engineering to model input.
Implementation Guidance
Implement data quality gates at pipeline stages. Track full data lineage from source to model. Run automated data validation checks on every pipeline execution.
Evidence Types
Data quality report, Lineage graph, Pipeline validation results
Audit Frequency
continuous
Remediation
Data quality below threshold suspends model training. Remediate data issues and re-validate within 10 business days.
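A data quality gate of the kind described above might check field completeness against a threshold; the 0.98 default and the row/field shapes are illustrative assumptions:

```python
def quality_gate(rows, required_fields, min_completeness=0.98):
    """Pass/fail gate: share of rows with all required fields populated.

    Returns (passed, completeness); per the remediation guidance, a
    failing gate should suspend model training.
    """
    if not rows:
        return False, 0.0
    complete = sum(
        all(row.get(f) not in (None, "") for f in required_fields)
        for row in rows
    )
    completeness = complete / len(rows)
    return completeness >= min_completeness, completeness
```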
Infrastructure Resilience and Recovery
Ensure AI infrastructure has appropriate redundancy, failover mechanisms, and disaster recovery procedures to meet availability SLAs.
Implementation Guidance
Define availability SLAs per AI system criticality tier. Implement active-passive or active-active redundancy for critical systems. Test failover procedures quarterly.
Evidence Types
DR plan, Backup verification records, Failover test results
Audit Frequency
quarterly
Remediation
DR test failures require remediation within 15 business days. RTO/RPO breaches during incidents trigger infrastructure review.
Model Explainability Validation
Implement and validate explainability mechanisms appropriate to the AI system risk tier, from feature importance for low-risk to full decision traceability for high-risk systems.
Implementation Guidance
Define explainability requirements per risk tier during Model stage. Implement SHAP/LIME for tabular models, attention visualization for transformers. Validate explanations with domain experts.
Evidence Types
Explainability specification, User-facing explanation samples, Audience testing results
Audit Frequency
quarterly
Remediation
Systems failing explainability requirements for high-risk decisions require enhanced explanation mechanisms within 20 business days.
Bias Testing and Fairness Assessment
Conduct systematic bias testing across protected characteristics before and after deployment, using statistical fairness metrics appropriate to the use case context.
Implementation Guidance
Select fairness metrics aligned with the use case (demographic parity, equalized odds, etc.). Test across all protected characteristics. Document acceptable thresholds and remediation actions when breached.
Evidence Types
Bias test results, Fairness metrics report, Mitigation action log
Audit Frequency
quarterly
Remediation
Bias exceeding acceptable thresholds blocks deployment. Implement mitigation within 15 business days and re-test. Escalate persistent bias to Ethics Committee.
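As a sketch of one fairness metric named above, demographic parity can be measured as the largest gap in favourable-outcome rates across groups. The 0.1 review threshold mentioned in the docstring is illustrative only, and a real assessment would combine several metrics as the guidance describes:

```python
def demographic_parity_difference(outcomes):
    """Max gap in positive-outcome rate across groups.

    `outcomes` maps group label -> list of binary predictions (1 = favourable).
    A commonly cited (illustrative) review threshold is a gap above 0.1.
    """
    rates = {g: sum(p) / len(p) for g, p in outcomes.items()}
    return max(rates.values()) - min(rates.values())
```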
Human Oversight Design
Ensure appropriate human oversight mechanisms are designed into AI systems proportional to their risk level, from monitoring for low-risk to human-in-the-loop for high-risk decisions.
Implementation Guidance
Define the human oversight model during system design. Test override mechanisms. Ensure human reviewers receive training on the AI system they oversee. Monitor override rates.
Evidence Types
HITL configuration, Override log, Decision audit trail
Audit Frequency
quarterly
Remediation
HITL bypass on high-risk systems requires immediate incident report. Disable autonomous decision-making until HITL is restored.
Transparency and Disclosure
Ensure affected individuals are informed when interacting with or subject to AI systems, with clear disclosures about AI involvement, data usage, and decision factors.
Implementation Guidance
Implement user-facing AI disclosure notices. Provide explanations for automated decisions upon request. Maintain public AI transparency reports for high-impact systems.
Evidence Types
Disclosure notice, Transparency report, User awareness survey results
Audit Frequency
quarterly
Remediation
Missing transparency disclosures on regulated systems require immediate publication. Update transparency reports within 10 business days.
Red Teaming and Adversarial Testing
Conduct structured red teaming exercises to identify potential misuse, unintended consequences, and failure modes of AI systems before and during deployment.
Implementation Guidance
Assemble diverse red teams including non-technical stakeholders. Define testing scenarios covering adversarial use, edge cases, and failure modes. Document findings and remediation status.
Evidence Types
Red team test plan, Finding report, Remediation log
Audit Frequency
quarterly
Remediation
Critical red team findings block deployment. Remediate and re-test within 15 business days.
Impact Assessment for Affected Populations
Assess AI system impact on affected populations including vulnerable groups, with documented stakeholder consultation and mitigation measures.
Implementation Guidance
Conduct impact assessment for all AI systems affecting individuals. Identify vulnerable populations. Consult affected stakeholders. Document findings and mitigation measures.
Evidence Types
Impact assessment report, Stakeholder consultation records, Vulnerability analysis
Audit Frequency
annual
Remediation
Unacceptable impacts on vulnerable populations block deployment. Redesign system with stakeholder input within 30 business days.
Training Data Governance
Govern the sourcing, curation, labeling, and quality assurance of training data, ensuring datasets are representative, legally obtained, and documented.
Implementation Guidance
Create data cards for each training dataset documenting provenance, composition, known biases, and licensing. Implement data quality checks covering completeness, accuracy, and representativeness.
Evidence Types
Data card, Quality metric dashboard, Legal clearance record
Audit Frequency
quarterly
Remediation
Data quality below threshold suspends model training. Remediate data issues and re-validate within 10 business days.
Data Access Control and Segregation
Implement role-based access controls for AI training and inference data, with segregation between environments and least-privilege access principles.
Implementation Guidance
Implement RBAC for all data stores used by AI systems. Enforce environment segregation (dev/staging/prod). Log all data access events. Conduct quarterly access reviews.
Evidence Types
Access control matrix, RBAC configuration, Access review report
Audit Frequency
quarterly
Remediation
Unauthorized data access requires immediate access revocation and incident report. Conduct access review within 5 business days.
Data Retention and Disposal
Enforce data retention policies for AI training data, inference logs, and model artifacts, with secure disposal procedures when retention periods expire.
Implementation Guidance
Define retention periods per data type aligned with regulatory requirements. Implement automated expiry and disposal workflows. Generate disposal certificates.
Evidence Types
Retention schedule, Disposal certificate, Destruction log
Audit Frequency
annual
Remediation
Data retained beyond policy limits must be reviewed and disposed of within 30 business days. Document exceptions with legal approval.
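Automated expiry of the kind the guidance describes can be sketched as a due-for-disposal query; the tuple layout and retention figures are illustrative:

```python
from datetime import date, timedelta


def expired_artifacts(artifacts, today):
    """Artifacts past their retention period and due for secure disposal.

    `artifacts` is a list of (artifact_id, created, retention_days) tuples,
    with retention_days set per data type from the retention schedule.
    """
    return [
        artifact_id
        for artifact_id, created, retention_days in artifacts
        if created + timedelta(days=retention_days) < today
    ]
```

A disposal workflow would feed this list to the secure-deletion step and emit a disposal certificate per artifact.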
Data Lineage and Provenance Tracking
Track the complete lineage of data used in AI systems from original source through all transformations, enabling root-cause analysis and compliance verification.
Implementation Guidance
Implement automated lineage tracking in data pipelines. Capture transformation metadata at each processing step. Enable lineage queries from any model prediction back to source data.
Evidence Types
Lineage diagram, Transformation log, Provenance certificate
Audit Frequency
quarterly
Remediation
Undocumented data sources must be traced and documented within 15 business days or excluded from AI pipelines.
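Lineage capture with backwards tracing, as this control requires, might look like the following sketch; the step and reference names are hypothetical:

```python
class LineageTracker:
    """Records each transformation step so any output can be traced to source."""

    def __init__(self):
        self.steps = []  # ordered records of {step, in, out}

    def record(self, step_name, input_ref, output_ref):
        self.steps.append({"step": step_name, "in": input_ref, "out": output_ref})

    def trace(self, output_ref):
        """Walk backwards from an output reference to its original source.

        Returns (source_ref, ordered list of steps from source to output).
        """
        path = []
        ref = output_ref
        while True:
            step = next((s for s in self.steps if s["out"] == ref), None)
            if step is None:  # no producer recorded: this is the source
                return ref, list(reversed(path))
            path.append(step["step"])
            ref = step["in"]
```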
Consent and Rights Management
Manage data subject consent and rights (access, rectification, erasure, portability) for AI training and inference data in compliance with privacy regulations.
Implementation Guidance
Track consent per data subject and purpose. Process rights requests within regulatory timelines (30 days under the GDPR). Implement automated consent verification in AI pipelines.
Evidence Types
Consent records, Rights request log, Fulfillment records
Audit Frequency
continuous
Remediation
Consent violations require immediate data quarantine. Process outstanding rights requests within 5 business days. Report violations to DPO.
Third-Party AI Risk Assessment
Assess risks of third-party AI services and embedded AI components in vendor products, including supply chain risks, data handling practices, and vendor lock-in.
Implementation Guidance
Evaluate all third-party AI vendors using a standardized risk questionnaire. Assess data residency, model transparency, and contractual protections. Require SOC 2 or equivalent.
Evidence Types
Vendor risk assessment, Due diligence report, Contract review
Audit Frequency
annual
Remediation
Vendors failing risk assessment may not be onboarded. Existing vendors with deteriorating risk posture require remediation plan within 30 business days.
Vendor Contractual Safeguards
Ensure contracts with AI vendors include provisions for data ownership, model transparency, audit rights, liability allocation, and exit provisions.
Implementation Guidance
Include AI-specific clauses: no training on customer data without consent, model card provision, audit access rights, SLA for bias remediation, data portability on termination.
Evidence Types
Contract clauses, Legal review record, DPA
Audit Frequency
annual
Remediation
Vendors without valid contractual safeguards must renegotiate within 30 business days or face relationship termination review.
Vendor Performance and Risk Monitoring
Continuously monitor third-party AI vendor performance, availability, and risk posture, with escalation procedures for SLA breaches.
Implementation Guidance
Track vendor uptime, latency, and accuracy metrics. Subscribe to vendor security advisories. Conduct annual vendor risk reassessments.
Evidence Types
SLA tracking report, Performance dashboard, Breach notification log
Audit Frequency
quarterly
Remediation
Repeated SLA breaches trigger vendor review. Three consecutive monthly breaches require executive-level vendor management escalation.
Vendor Concentration Risk Management
Monitor and manage concentration risk from over-dependence on single AI vendors or model providers with contingency planning.
Implementation Guidance
Map vendor dependencies across AI portfolio. Identify single points of failure. Develop contingency plans for critical vendor disruption.
Evidence Types
Vendor dependency map, Concentration analysis, Contingency plan
Audit Frequency
quarterly
Remediation
Critical single-vendor dependencies require documented contingency plan within 20 business days. Consider multi-vendor strategy for high-risk capabilities.
Vendor Exit Planning
Maintain actionable exit plans for all critical AI vendors, ensuring data portability, service continuity, and knowledge transfer.
Implementation Guidance
Document exit procedures including data extraction, model replacement options, and timeline estimates. Test exit procedures annually for critical vendors.
Evidence Types
Exit plan, Data portability test results, Transition playbook
Audit Frequency
annual
Remediation
Vendors without an exit plan may not be classified as critical dependencies. Test and validate exit plans annually.
Agent Autonomy Tier Classification
Classify all agentic AI systems by autonomy level (advisory, semi-autonomous, autonomous) with tier-appropriate controls, monitoring, and human oversight.
Implementation Guidance
Assess each agent against autonomy tier criteria during registration. Require governance board approval for T3+ deployments. Re-assess when agent capabilities change.
Evidence Types
Autonomy classification register, Risk tier mapping, HITL requirement specification
Audit Frequency
quarterly
Remediation
Unclassified agents may not operate in production. Reclassification required within 5 business days of capability change.
Agent Tool and Data Access Boundaries
Define and enforce boundaries on what tools, APIs, data sources, and actions an agentic AI system is authorized to access, with boundary violation monitoring.
Implementation Guidance
Define explicit allow-lists of tools and data sources per agent. Implement runtime boundary enforcement. Log all tool invocations and data access. Alert on attempted access outside boundaries.
Evidence Types
Boundary specification, Access control configuration, Boundary violation log
Audit Frequency
continuous
Remediation
Boundary violations trigger immediate agent suspension. Conduct root cause analysis within 24 hours. Restore with tightened boundaries.
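Runtime boundary enforcement with an allow-list and invocation logging, as described above, can be sketched as a gateway that mediates every tool call; the class and tool names are illustrative:

```python
class BoundaryViolation(Exception):
    """Raised when an agent attempts a tool outside its allow-list."""


class ToolGateway:
    """Mediates every tool call an agent makes against an explicit allow-list."""

    def __init__(self, agent_id, allowed_tools, audit_log):
        self.agent_id = agent_id
        self.allowed_tools = set(allowed_tools)
        self.audit_log = audit_log  # append-only list of invocation records

    def invoke(self, tool_name, tool_fn, *args, **kwargs):
        permitted = tool_name in self.allowed_tools
        # Every attempt is logged, permitted or not, before any action runs.
        self.audit_log.append(
            {"agent": self.agent_id, "tool": tool_name, "permitted": permitted}
        )
        if not permitted:
            # Surfaced so the caller can suspend the agent per remediation.
            raise BoundaryViolation(f"{self.agent_id} attempted {tool_name}")
        return tool_fn(*args, **kwargs)
```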
Agent Kill Switch and Fallback
Implement and regularly test kill switch mechanisms for all agentic systems, ensuring immediate halt capability and reversion to manual or degraded processes.
Implementation Guidance
Deploy kill switches at infrastructure level (process termination) and application level (graceful shutdown). Test activation quarterly. Define fallback procedures. Measure halt-to-safe-state time.
Evidence Types
Kill switch test results, Fallback procedure documentation, Activation log
Audit Frequency
quarterly
Remediation
Agents without functional kill switch must be suspended from production immediately. Restore only after kill switch verification.
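An application-level kill switch of the graceful-shutdown kind described above can be sketched with a shared event that agents poll between actions. This is illustrative only; a production halt path also needs the infrastructure-level termination the guidance calls for:

```python
import threading
import time


class KillSwitch:
    """Application-level kill switch: agents poll it between actions."""

    def __init__(self):
        self._halt = threading.Event()
        self.activated_at = None  # for measuring halt-to-safe-state time

    def activate(self):
        self.activated_at = time.monotonic()
        self._halt.set()

    def halted(self):
        return self._halt.is_set()


def agent_loop(switch, tasks, results):
    for task in tasks:
        if switch.halted():  # checked before every action, never mid-action
            return           # hand off to the manual / degraded process here
        results.append(task())
```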
Agent Interaction Audit Trail
Monitor and log all agent interactions including inter-agent communications, tool usage, decision chains, and human handoff events with immutable retention.
Implementation Guidance
Implement comprehensive interaction logging covering requests, plans, tool calls, and responses. Store logs with immutable timestamps. Generate daily summaries for T3+ agents.
Evidence Types
Audit trail sample, Log completeness report, Trail integrity verification
Audit Frequency
continuous
Remediation
Gaps in audit trail require immediate investigation. Agents with compromised trails must be suspended until logging integrity is restored.
Agent Simulation and Testing
Conduct simulation testing for AI agents in sandboxed environments before production deployment and after capability changes covering normal, edge, and adversarial scenarios.
Implementation Guidance
Test all agents in sandboxed environments covering: normal operations, edge cases, adversarial scenarios, and boundary conditions. Achieve minimum 80% scenario coverage.
Evidence Types
Simulation test plan, Test results, Coverage report
Audit Frequency
quarterly
Remediation
Agents failing simulation testing may not deploy to production. Remediate and re-test within 10 business days.
Multi-Agent Coordination Governance
Govern multi-agent systems to prevent emergent behaviors, resource conflicts, and cascading failures through coordination protocols and interaction boundaries.
Implementation Guidance
Define interaction protocols for multi-agent systems. Implement deadlock detection and resolution. Set resource usage limits per agent. Conduct chaos engineering tests.
Evidence Types
Coordination protocol, Interaction log, Emergent behavior report
Audit Frequency
quarterly
Remediation
Unexpected emergent behaviors trigger multi-agent suspension. Conduct coordination review within 5 business days before re-enabling.
Regulatory Classification Gate
Mandatory classification of all AI systems against applicable regulatory frameworks (EU AI Act, NIST AI RMF, US state laws, sector-specific regulations) before production deployment. No AI system may enter production without a completed regulatory classification record.
Implementation Guidance
During the Model stage, map every AI system to its applicable regulatory landscape. Classify per EU AI Act risk tiers (Annex III criteria), identify applicable NIST AI RMF functions, and document US state-level requirements. The Produce stage gate must verify that classification is complete before deployment authorization. Legal counsel must sign off on high-risk classifications.
Evidence Types
Regulatory classification register, Risk tier justification per regulation, Legal review sign-off, Cross-framework mapping document
Audit Frequency
quarterly
Remediation
Systems without completed regulatory classification may not proceed past the Produce stage gate. Unclassified systems discovered in production require emergency classification within 10 business days and may be suspended pending completion.
Conformity Evidence Gate
Mandatory conformity evidence collection and verification during the Evaluate stage. Ensures all AI systems operating under regulatory obligations have complete, current, and verified conformity evidence packages aligned to each applicable framework.
Implementation Guidance
During Produce, initiate evidence collection for each applicable regulation. During Evaluate, verify evidence completeness against the regulatory classification register. Conduct conformity self-assessment using framework-specific checklists (EU AI Act Annex IV documentation, NIST AI RMF GOVERN/MAP/MEASURE/MANAGE profiles). Engage external assessors for high-risk systems where required by regulation.
Evidence Types
Conformity assessment report, Evidence completeness checklist, Regulatory documentation package, Third-party audit report
Audit Frequency
quarterly
Remediation
Incomplete conformity evidence packages require remediation plan within 15 business days. Systems with critical evidence gaps must implement compensating controls while remediation is in progress. Report material gaps to legal counsel and executive sponsor.
Regulatory Change Response
Triggered when the regulatory landscape changes: new regulations are enacted, existing regulations are amended, enforcement guidance is published, or court rulings affect AI governance obligations. Ensures the organization responds systematically rather than reactively.
Implementation Guidance
Subscribe to regulatory change feeds for all applicable jurisdictions. When a change is detected, conduct impact assessment within 30 days identifying affected AI systems, controls, and evidence packages. Update the regulatory classification register. Communicate changes to affected system owners and governance bodies. Track remediation to completion.
Evidence Types
Regulatory change notification log, Impact assessment report, Remediation plan, Updated classification records
Audit Frequency
continuous
Remediation
Regulatory changes with compliance deadlines require a project plan within 15 business days. Material changes affecting high-risk systems must be escalated to executive sponsor and legal counsel within 5 business days of detection.
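The response clock above (30 days for impact assessment, 15 business days for a project plan, 5 business days for escalation) can be sketched as a deadline calculator. The function names and the simple weekday arithmetic are assumptions; the guidance's "30 days" is read here as calendar days.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance by `days` weekdays (Mon-Fri); holidays are ignored in this sketch."""
    d = start
    while days > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:
            days -= 1
    return d

def response_deadlines(detected: date, material_high_risk: bool) -> dict:
    deadlines = {
        "impact_assessment": detected + timedelta(days=30),   # calendar days
        "project_plan": add_business_days(detected, 15),
    }
    if material_high_risk:
        deadlines["executive_escalation"] = add_business_days(detected, 5)
    return deadlines
```

A production implementation would also consult a jurisdiction-aware holiday calendar.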
Cross-Framework Alignment Verification
Ensures that AI systems subject to multiple regulatory frameworks (e.g., EU AI Act + GDPR + ISO 42001 + sector-specific rules) maintain consistent compliance posture across all applicable standards. Prevents framework-specific compliance silos that create gaps.
Implementation Guidance
Maintain a cross-framework mapping matrix showing how each control satisfies requirements across multiple standards. During Model, design controls that satisfy overlapping requirements from a single implementation. During Evaluate, verify alignment by auditing control effectiveness against each framework simultaneously. During Learn, update mappings based on regulatory changes and audit findings.
Evidence Types
Cross-framework mapping matrix, Alignment verification report, Gap analysis for overlapping requirements, Harmonized control register
Audit Frequency
quarterly
Remediation
Cross-framework alignment gaps require investigation within 15 business days. Control implementations that satisfy one framework but violate another must be redesigned with legal and compliance input within 30 business days.
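A cross-framework mapping matrix of the kind this control calls for can be represented as a control-to-requirements map, with gap analysis as a set difference. Control IDs, framework names, and requirement references below are illustrative placeholders.

```python
# Hypothetical mapping matrix: each control lists the framework requirements
# it satisfies, so one implementation can cover overlapping obligations.
CONTROL_MAP = {
    "CTL-ACCESS-01": {"EU_AI_Act": ["Art. 14"], "ISO_42001": ["A.6.2"]},
    "CTL-LOGGING-02": {"EU_AI_Act": ["Art. 12"]},
}

def framework_gaps(required: dict) -> dict:
    """Requirements in `required` ({framework: [ids]}) covered by no control."""
    covered = {}
    for mappings in CONTROL_MAP.values():
        for fw, reqs in mappings.items():
            covered.setdefault(fw, set()).update(reqs)
    gaps = {}
    for fw, ids in required.items():
        missing = set(ids) - covered.get(fw, set())
        if missing:
            gaps[fw] = sorted(missing)
    return gaps
```

Running this during Evaluate surfaces framework-specific silos before they become audit findings.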
Foundation Model Selection Criteria Verification
Verify that all foundation model selections have been evaluated against the 7-category weighted selection criteria (technical capability, transparency, safety, regulatory compliance, provider governance, licensing & IP, cost & sustainability) with documented scoring and justification.
Implementation Guidance
Before any foundation model is adopted, complete the Foundation Model Selection Criteria scorecard (ART-072). Require minimum scores per category based on the deployment risk tier. Document the rationale for model selection including alternatives considered and rejected.
Evidence Types
Selection scorecard, Evaluation report, Vendor assessment, Approval record
Audit Frequency
quarterly
Remediation
Foundation models adopted without completed selection criteria must undergo retroactive evaluation within 15 business days. Models failing minimum score thresholds must have a documented risk acceptance or be replaced within 30 business days.
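The weighted 7-category scorecard (ART-072) can be sketched as a weighted sum with per-category minimums. The weights, the 0-5 scoring scale, and the single tier-wide minimum used here are assumptions for illustration; the actual scorecard defines its own values.

```python
# Hypothetical category weights (must sum to 1.0); scores assumed on a 0-5 scale.
WEIGHTS = {
    "technical_capability": 0.20, "transparency": 0.15, "safety": 0.20,
    "regulatory_compliance": 0.15, "provider_governance": 0.10,
    "licensing_ip": 0.10, "cost_sustainability": 0.10,
}

def evaluate(scores: dict, category_minimum: float):
    """Weighted total plus any categories below the risk-tier minimum."""
    total = sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)
    failures = [c for c in WEIGHTS if scores[c] < category_minimum]
    return round(total, 2), failures
```

Any entry in `failures` would require the documented risk acceptance described in the remediation above.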
Provider Obligation Compliance Check
Verify that foundation model providers meet their obligations under EU AI Act Articles 53-55 for GPAI models, including technical documentation, training data summaries, copyright compliance policies, and downstream deployer information provision.
Implementation Guidance
During Model stage, assess each foundation model provider against EU AI Act GPAI obligations. Document which obligations are met, partially met, or unmet. For unmet obligations, assess the risk transfer to the deployer and document mitigation measures. Re-verify during Evaluate stage or when provider updates the model.
Evidence Types
Provider compliance checklist, GPAI documentation review, Provider attestation, Gap analysis
Audit Frequency
quarterly
Remediation
Providers failing critical obligations (technical documentation, training data summary) trigger an escalation to legal counsel within 5 business days. Deployers must document compensating controls or plan provider transition within 60 business days.
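The met/partially met/unmet assessment and its escalation trigger can be sketched as a small status check. The obligation names and the `assess` function are illustrative; the two critical obligations mirror those named in the remediation above.

```python
# Obligations whose failure triggers the 5-business-day legal escalation.
CRITICAL = {"technical_documentation", "training_data_summary"}

def assess(status: dict) -> dict:
    """`status` maps obligation -> 'met' | 'partial' | 'unmet'."""
    not_fully_met = {o for o, s in status.items() if s != "met"}
    return {
        "escalate_to_legal": bool(not_fully_met & CRITICAL),
        "gaps": sorted(not_fully_met),
    }
```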
Model Card Completeness Verification
Verify that every deployed foundation model has a complete model card meeting EU AI Act Article 53 transparency requirements, including model details, intended use, training data, evaluation, ethical considerations, technical specifications, regulatory compliance, and maintenance information.
Implementation Guidance
Use the Model Card Template (ART-073) to create or verify model cards for all foundation models. All 8 required sections must be completed. Model cards must be updated within 30 days of any model version change. Assign a model card owner responsible for currency.
Evidence Types
Completed model card, Completeness assessment, Reviewer sign-off, Version history
Audit Frequency
quarterly
Remediation
Models with incomplete model cards may not proceed to production deployment. Incomplete cards must be remediated within 10 business days. Cards not updated after model version changes must be flagged and updated within 15 business days.
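The 8-section completeness check against the Model Card Template (ART-073) can be automated as a sketch like the following. The section keys are taken from the control description above; the function name and card representation are assumptions.

```python
# The 8 required sections listed in this control's description.
REQUIRED_SECTIONS = [
    "model_details", "intended_use", "training_data", "evaluation",
    "ethical_considerations", "technical_specifications",
    "regulatory_compliance", "maintenance",
]

def missing_sections(card: dict) -> list:
    """Sections absent or empty; a non-empty result blocks production deployment."""
    return [s for s in REQUIRED_SECTIONS if not card.get(s)]
```

Wiring this into the deployment pipeline enforces the "may not proceed to production" remediation automatically.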
Fine-Tuning Governance Approval
Ensure all foundation model fine-tuning activities are authorized through the appropriate approval workflow based on fine-tuning risk tier (low, medium, high). No fine-tuning may proceed without documented authorization and risk assessment.
Implementation Guidance
Classify every fine-tuning request by risk tier using the Fine-Tuning Governance Policy (ART-074). Route to the appropriate approval authority per tier. Verify data governance requirements are met before training begins. Document the authorization chain and conditions.
Evidence Types
Fine-tuning request, Risk tier classification, Approval record, Data governance documentation
Audit Frequency
continuous
Remediation
Unauthorized fine-tuning activities must be immediately suspended. Conduct a retroactive risk assessment within 5 business days. If the fine-tuned model is in production, initiate rollback evaluation within 24 hours.
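The tier-based routing described in ART-074 can be sketched as a lookup with a data-governance precondition. The tier-to-authority mapping below is a placeholder; the actual authorities are defined in the policy.

```python
# Hypothetical approval authorities per fine-tuning risk tier.
APPROVAL_AUTHORITY = {
    "low": "team_lead",
    "medium": "ai_governance_board",
    "high": "executive_sponsor",
}

def route(request: dict) -> str:
    """Return the approval authority; refuse routing if data governance is unmet."""
    if not request.get("data_governance_verified"):
        raise ValueError("data governance requirements must be met before training")
    return APPROVAL_AUTHORITY[request["risk_tier"]]
```

Raising rather than routing mirrors the rule that no fine-tuning may proceed without documented authorization.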
Training Data Provenance Verification
Verify that all training data used for foundation model fine-tuning has documented provenance, legal basis for use, copyright clearance, and appropriate data governance controls including PII handling and bias assessment.
Implementation Guidance
Before fine-tuning begins, document the provenance of all training data: source, collection method, consent basis, PII presence, and copyright status. Conduct a bias assessment on training data to identify demographic or domain representation gaps. Maintain data lineage records sufficient for regulatory audit.
Evidence Types
Data provenance records, Legal basis documentation, PII assessment, Copyright clearance, Bias analysis
Audit Frequency
continuous
Remediation
Training data without documented provenance must not be used for fine-tuning. Existing fine-tuned models using undocumented data require retroactive provenance assessment within 15 business days or model replacement.
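A minimal provenance record capturing the fields named in the guidance (source, collection method, consent basis, PII presence, copyright status, bias assessment) might look like the following sketch. Field names are illustrative; real lineage records would carry substantially more detail.

```python
from dataclasses import dataclass

@dataclass
class ProvenanceRecord:
    source: str
    collection_method: str
    consent_basis: str
    pii_assessed: bool
    copyright_cleared: bool
    bias_assessed: bool

def usable_for_fine_tuning(rec: ProvenanceRecord) -> bool:
    """Data without fully documented provenance must not be used for fine-tuning."""
    return all([rec.source, rec.collection_method, rec.consent_basis,
                rec.pii_assessed, rec.copyright_cleared, rec.bias_assessed])
```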
Model Lifecycle Stage Gate
Enforce stage gates for foundation model lifecycle transitions: selection, deployment, fine-tuning, version update, deprecation, and retirement. Each transition requires documented approval, impact assessment, and stakeholder notification.
Implementation Guidance
Define stage gates for each model lifecycle transition. Selection requires completed selection criteria (FMC-001). Deployment requires model card (FMC-003) and testing evidence. Version updates require change impact assessment. Deprecation requires migration plan and stakeholder notification with minimum 90-day notice. Retirement requires data archival confirmation.
Evidence Types
Lifecycle transition record, Impact assessment, Approval record, Stakeholder notification log, Deprecation plan
Audit Frequency
quarterly
Remediation
Models that bypass lifecycle stage gates must be retroactively assessed within 10 business days. Missing gate artifacts must be produced within 15 business days. Repeated gate bypasses trigger governance process review.
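The gate requirements above map naturally onto a transition-to-artifacts table, with a gate check as a set difference. Artifact names below are illustrative; the control references (FMC-001, FMC-003) come from the guidance.

```python
# Hypothetical required artifacts per lifecycle transition.
GATE_ARTIFACTS = {
    "selection": {"selection_scorecard"},                  # FMC-001
    "deployment": {"model_card", "testing_evidence"},      # FMC-003
    "version_update": {"change_impact_assessment"},
    "deprecation": {"migration_plan", "stakeholder_notice_90d"},
    "retirement": {"data_archival_confirmation"},
}

def gate_check(transition: str, artifacts: set) -> set:
    """Missing artifacts for the transition; an empty set means the gate may open."""
    return GATE_ARTIFACTS[transition] - artifacts
```

Any missing artifact corresponds to the 15-business-day production requirement in the remediation above.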