9 governance artifact templates — structured documents that organizations use to demonstrate AI governance compliance and build audit-ready evidence packs.
Each template maps to specific standards clauses (ISO 42001, NIST AI RMF, EU AI Act) and COMPEL stages. Templates include section guidance, field-level instructions, and FAQs.
ea-001 Model
AI Policy Template
Enterprise AI Use Policy Document
A comprehensive enterprise AI use policy covering purpose, scope, guiding principles, roles and responsibilities, acceptable and prohibited uses, incident reporting, and review cycle. This template provides the foundation document that every AI governance program requires.
Standards Mapping
ISO 42001 Clause 5.2 ISO 42001 Annex A.2 NIST AI RMF GOVERN 1.1 NIST AI RMF GOVERN 1.2 EU AI Act Article 9(1)
COMPEL Domains
D14: AI Strategy Alignment D15: Ethics & Fairness D18: Governance Structure
Template sections (8)
Policy Header
Document control information for version tracking, ownership, and approval.
- text Policy Title — Use a clear, unambiguous title that specifies the scope (enterprise-wide vs. business unit).
- text Document ID — Follow your organization's document numbering scheme.
- text Version — Use semantic versioning. Major version for structural changes, minor for updates.
- text Effective Date — Date from which this policy is enforceable. Allow adequate communication lead time.
- text Policy Owner — Name the individual role accountable for maintaining and enforcing this policy.
- text Approved By — Policy must have executive-level approval to ensure enforcement authority.
- select Review Cycle — ISO 42001 requires periodic review. Annual is the minimum; regulated industries may need quarterly.
- select Classification — Classify per your information security framework.
Purpose and Scope
Define why this policy exists and who/what it applies to.
- textarea Purpose Statement — State the business rationale: risk management, regulatory compliance, responsible innovation. Be specific about what the policy aims to achieve.
- textarea Scope — Be explicit about who is covered. Include third-party AI tools (SaaS, APIs), embedded AI in vendor products, and internally developed models.
- textarea Definitions — Define "AI system" using the OECD/EU AI Act definition to ensure regulatory alignment. Also define "AI use case," "high-risk AI," and other key terms.
- textarea Out of Scope — Clearly state what is NOT covered to prevent scope ambiguity.
Guiding Principles
Core principles that guide all AI development and use within the organization.
- textarea Transparency — Map to NIST AI RMF trustworthy AI characteristics and EU AI Act transparency requirements (Articles 13, 52).
- textarea Fairness and Non-Discrimination — Reference applicable anti-discrimination legislation and ISO 42001 Annex A.7 (AI system impact assessment).
- textarea Accountability — Align with EU AI Act Article 14 (human oversight) and ISO 42001 Clause 5.3 (organizational roles).
- textarea Safety and Security — Reference ISO 42001 Annex A.6 (risk assessment) and NIST AI RMF MANAGE function.
- textarea Privacy — Cross-reference your data protection policy. Ensure alignment with GDPR Article 22 (automated decision-making) where applicable.
- textarea Sustainability — Emerging requirement in EU AI Act recitals and ISO 42001 Annex A.4. Include where organizational ESG commitments exist.
Roles and Responsibilities
Define who is responsible for AI governance activities.
- textarea AI Governance Board / Ethics Committee — Define composition, decision authority, meeting cadence, and escalation rights. ISO 42001 Clause 5.1 requires top management commitment.
- textarea AI Center of Excellence (CoE) — The CoE is the operational arm of governance. Define whether it has advisory or enforcement authority.
- textarea Business Unit AI Owners — Each AI system must have a named owner at the business unit level. This is distinct from the technical maintainer.
- textarea All Employees — Include expectations for all staff, not just AI practitioners. Shadow AI detection depends on broad awareness.
Acceptable and Prohibited Uses
Clear rules on what AI uses are permitted, restricted, or prohibited.
- checklist Permitted Uses (No Prior Approval Required) — List specific tool categories and use cases that do not require case-by-case approval. Keep this list maintained as a living appendix.
- checklist Restricted Uses (Approval Required) — These require a formal risk assessment and approval from the AI Governance Board. Define the approval workflow.
- checklist Prohibited Uses — Align with EU AI Act Article 5 (prohibited practices). These are absolute prohibitions with no exception process.
- textarea Shadow AI Policy — Shadow AI is one of the biggest governance risks. Define reporting channels and whether there is an amnesty period for self-reporting.
Risk Classification
Framework for categorizing AI systems by risk level.
- table Risk Tier Definitions — Align tiers with EU AI Act risk categories (Article 6). Define clear criteria for each tier, not just labels.
- checklist Classification Criteria — Use a scoring matrix to determine the tier. Each criterion should be scored on a defined scale; a minimal scoring sketch follows this section.
- table Governance Requirements by Tier — Higher tiers require progressively more documentation, testing, and oversight. Map each tier to specific COMPEL artifacts.
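A scoring matrix is easiest to apply consistently when it is expressed as data plus a small function. The sketch below is illustrative only: the criteria, weights, and tier cut-offs are assumptions and should be replaced with the definitions in your own policy and risk matrix.

```python
# Illustrative risk-tier scoring sketch. Criteria, weights, and cut-offs are
# placeholders; use the definitions from your own AI Policy.
CRITERIA_WEIGHTS = {
    "affects_individuals": 3,   # legal or similarly significant effects on people
    "personal_data": 2,         # processes personal or sensitive data
    "autonomy": 2,              # degree of automation without human review
    "regulatory_exposure": 3,   # e.g. falls under EU AI Act high-risk categories
}

TIER_THRESHOLDS = [             # (minimum weighted score, tier label)
    (8, "Tier 1 - Critical"),
    (5, "Tier 2 - High"),
    (2, "Tier 3 - Limited"),
    (0, "Tier 4 - Minimal"),
]

def classify(scores: dict[str, int]) -> str:
    """Return a risk tier from per-criterion scores on a 0-2 scale."""
    total = sum(CRITERIA_WEIGHTS[name] * score for name, score in scores.items())
    for minimum, tier in TIER_THRESHOLDS:
        if total >= minimum:
            return tier
    return TIER_THRESHOLDS[-1][1]

print(classify({"affects_individuals": 2, "personal_data": 1,
                "autonomy": 1, "regulatory_exposure": 2}))  # Tier 1 - Critical
```

Keeping the matrix as data (or a spreadsheet generated from it) makes tier decisions reproducible and gives auditors a single source of truth for the criteria.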
Incident Reporting and Response
Procedures for AI-related incidents and near misses.
- textarea Incident Definition — Define broadly to capture near misses. Include model drift, unexpected outputs, data breaches involving training data, and bias detection.
- text Reporting Channel — Make the channel easily accessible. Consider anonymous reporting options for sensitive issues.
- table Response Timeframes — Align with your existing incident management SLAs. EU AI Act Article 62 requires serious incident reporting to authorities.
- textarea Post-Incident Review — Feed lessons learned back into the policy and risk assessment process. This supports the COMPEL Learn stage.
Compliance and Enforcement
How policy compliance is monitored and enforced.
- textarea Compliance Monitoring — Define both proactive monitoring (scheduled reviews) and reactive monitoring (incident-driven audits).
- textarea Non-Compliance Consequences — Be specific but proportionate. Minor first-time violations should have a remediation path; deliberate violations warrant stronger action.
- textarea Exception Process — A policy without an exception process leads to shadow non-compliance. Document every exception and its rationale.
FAQs (4)
How does the AI Policy relate to existing IT and data governance policies?
The AI Policy should cross-reference and complement existing policies (data governance, information security, privacy, ethics). It does not replace them but adds AI-specific requirements. Where conflicts exist, the more restrictive requirement applies unless the AI Governance Board approves an exception.
How often should the AI Policy be updated?
ISO 42001 requires periodic review. At minimum, conduct a formal review annually and after any significant regulatory change, major AI incident, or material change in AI usage patterns. The policy should be a living document with a clear change management process.
What if employees are already using AI tools not covered by this policy?
Implement a time-limited amnesty and registration period. All existing AI uses should be registered within the AI System Registry within 90 days. After the amnesty period, unauthorized use falls under the enforcement provisions of this policy.
How does this map to ISO 42001 certification requirements?
This policy template addresses ISO 42001 Clause 5.2 (AI Policy) directly. When completed and implemented, it provides the foundational policy document required for ISO 42001 management system certification. The policy must be communicated, available, and reviewed as stated in Clause 5.2.
ea-002 Calibrate
AI Use-Case Register
AI System Inventory and Risk Classification
A structured inventory for all AI systems and use cases with risk classification, ownership, approval status, and review cycle. The foundation of AI governance visibility — you cannot govern what you do not know exists.
Standards Mapping
ISO 42001 Clause 6.1.2 ISO 42001 Annex A.3 EU AI Act Article 9 EU AI Act Article 51 NIST AI RMF MAP 1.1
COMPEL Domains
D5: Use Case Management D17: Risk Management D16: Regulatory Compliance
Template sections (3)
Register Metadata
Document control for the AI system inventory.
- text Register Title — Use a standard naming convention across your governance documents.
- text Register Owner — The CoE typically maintains the central register. Business units populate their entries.
- text Last Full Review — Record when the entire register was last validated for completeness and accuracy.
- text Total Registered Systems — Maintained automatically. Useful for board reporting and trend analysis.
System / Use-Case Entry
One entry per AI system or use case. Duplicate this section for each system; a machine-readable sketch of an entry follows the field list.
- text System ID — Unique identifier. Use a consistent scheme: AI-SYS-NNN for systems, AI-UC-NNN for use cases.
- text System Name — Human-readable name. Be specific — "AI Tool" is insufficient.
- textarea Description — Describe what the system does, what inputs it uses, what outputs it produces, and how those outputs are used in business processes.
- select AI Type — Classify the AI type. "Embedded" covers AI capabilities within vendor products (e.g., CRM lead scoring).
- text Business Unit Owner — Name the business function and the accountable individual. This person owns the governance obligations.
- text Technical Owner — Name the individual responsible for the technical implementation, monitoring, and maintenance.
- select Risk Classification — Use the risk tiers defined in your AI Policy. This determines the governance requirements for this system.
- select Status — Track the lifecycle stage. Systems in "Discovery" have been identified but not yet assessed.
- text Approval Date — Date the system was formally approved for deployment. N/A for systems still in assessment.
- text Approved By — Record the approving authority. High-risk systems typically require board-level approval.
- text Next Review Date — Scheduled date for the next mandatory review. High-risk systems: quarterly. Others: annually minimum.
- textarea Data Sources — List all data sources used for training and inference. Flag any personal data or sensitive categories.
- textarea Affected Stakeholders — Identify who is affected by this system's outputs, directly and indirectly.
- checklist Regulatory Applicability — Identify all regulations that may apply. This drives the compliance requirements.
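Many organizations keep the register in a spreadsheet or governance platform. Where it is kept as data, a minimal machine-readable entry might look like the sketch below; the field names, enum values, and ID pattern are assumptions that mirror the template fields above, not a prescribed schema.

```python
import re
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class RiskTier(Enum):
    CRITICAL = 1
    HIGH = 2
    LIMITED = 3
    MINIMAL = 4

@dataclass
class RegisterEntry:
    system_id: str                          # e.g. "AI-SYS-001" or "AI-UC-014"
    name: str
    description: str
    business_owner: str
    technical_owner: str
    risk_tier: RiskTier
    status: str                             # Discovery / Assessment / Approved / Retired
    approval_date: date | None = None
    next_review_date: date | None = None
    data_sources: list[str] = field(default_factory=list)
    regulations: list[str] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Enforce the naming scheme suggested above: AI-SYS-NNN or AI-UC-NNN.
        if not re.fullmatch(r"AI-(SYS|UC)-\d{3}", self.system_id):
            raise ValueError(f"system_id {self.system_id!r} does not follow the naming scheme")
```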
Shadow AI Discovery Log
Track unregistered AI systems discovered through audits or self-reporting.
- text Discovery Date — When the unauthorized AI use was identified.
- select Discovery Method — Track how shadow AI is being found to improve detection methods.
- textarea Description of Use — Describe what was found, who was using it, and how it was being used.
- select Risk Assessment — Fast-track risk assessment for discovered shadow AI.
- select Remediation Action — Record the outcome. Not all shadow AI needs to be eliminated — some can be formalized.
FAQs (3)
How do we find AI systems we don't know about?
Combine multiple discovery methods: network traffic analysis for AI API calls, procurement/expense audits for AI tool subscriptions, employee surveys about tool usage, vendor product reviews for embedded AI features, and a self-reporting channel with amnesty provisions. COMPEL's Calibrate stage includes a structured shadow AI discovery protocol.
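One of the discovery methods above, scanning egress or proxy logs for calls to known AI API hosts, can be sketched as follows. The host list is illustrative and far from complete, and the log format (a CSV with a "host" column) is an assumption; adapt both to your environment and reconcile any hits against the AI Use-Case Register.

```python
import csv

# Illustrative and incomplete; maintain your own list of AI API hosts.
AI_API_HOSTS = {"api.openai.com", "api.anthropic.com", "generativelanguage.googleapis.com"}

def unregistered_ai_traffic(proxy_log_csv: str, registered_hosts: set[str]) -> set[str]:
    """Return AI API hosts seen in the proxy log that are not tied to a registered system."""
    seen = set()
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):          # assumes a 'host' column in the export
            host = row.get("host", "").lower()
            if host in AI_API_HOSTS and host not in registered_hosts:
                seen.add(host)
    return seen
```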
What counts as an "AI system" for registry purposes?
Use a broad definition aligned with the OECD/EU AI Act: any system that uses machine learning, deep learning, or knowledge-based approaches to generate outputs (predictions, recommendations, decisions, content) that influence environments or decisions. Include third-party AI embedded in vendor products and employee use of public GenAI tools.
How often should the register be reviewed?
The full register should be validated quarterly. Individual system entries should be reviewed according to their risk tier: critical systems monthly, high-risk quarterly, others annually. New systems must be registered before deployment.
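As a small illustration of this cadence, next-review dates can be derived from the risk tier. The tier labels and intervals below come from this FAQ; the date arithmetic is a stdlib-only sketch.

```python
import calendar
from datetime import date

REVIEW_INTERVAL_MONTHS = {"critical": 1, "high": 3, "other": 12}  # per the cadence above

def add_months(d: date, months: int) -> date:
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def next_review(last_review: date, tier: str) -> date:
    return add_months(last_review, REVIEW_INTERVAL_MONTHS.get(tier.lower(), 12))

print(next_review(date(2025, 1, 31), "high"))  # 2025-04-30
```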
ea-003 Model
Risk Assessment Template
AI System Risk Evaluation Framework
A structured risk evaluation template covering impact dimensions, likelihood assessment, existing and planned controls, residual risk calculation, and escalation triggers. Designed to produce audit-ready risk documentation for each AI system.
Standards Mapping
ISO 42001 Clause 6.1.2 ISO 42001 Annex A.5 ISO 42001 Annex A.6 NIST AI RMF MAP 2.1 NIST AI RMF MEASURE 1.1 EU AI Act Article 9(2)
COMPEL Domains
D17: Risk Management D15: Ethics & Fairness D16: Regulatory Compliance
Template sections (5)
Assessment Header
Identification and context for this risk assessment.
- text Assessment ID — Link to the system ID in the AI Use-Case Register. Include version number.
- text AI System Reference — Reference the exact system ID and name from the register.
- text Assessment Date — Date the assessment was conducted.
- text Assessor(s) — Risk assessments should involve multiple perspectives. Name all participants.
- select Assessment Trigger — Document why this assessment is being performed.
Impact Assessment
Evaluate the potential impacts of the AI system across multiple dimensions.
- textarea Individual Impact — Consider: discrimination risk, privacy implications, autonomy restriction, financial impact, physical safety, psychological wellbeing. Score: None / Low / Medium / High / Critical.
- textarea Societal Impact — Consider: social inequality amplification, democratic process influence, labor market effects, environmental impact. Score: None / Low / Medium / High / Critical.
- textarea Organizational Impact — Consider: financial loss, regulatory penalties, reputational harm, operational disruption, legal liability. Score: None / Low / Medium / High / Critical.
- textarea Data and Privacy Impact — Consider: personal data processing, data breach consequences, consent adequacy, cross-border data transfer, data retention requirements.
- textarea Bias and Fairness Impact — Consider: training data representation, proxy discrimination, feedback loops, performance disparities across groups, intersectional effects.
Likelihood Assessment
Assess the probability of identified risks materializing.
- checklist Likelihood Factors — Evaluate each factor. More complex systems with less oversight and higher volume carry higher likelihood.
- select Overall Likelihood Rating — Use a 5-point scale. Document the rationale for the selected rating.
- textarea Likelihood Rationale — Explain why you chose this rating. Reference specific evidence from testing, monitoring, or comparable systems.
Controls Assessment
Document existing controls and planned mitigations.
- table Existing Controls — List all controls currently in place. Rate effectiveness as: Strong / Adequate / Partial / Weak / None. Reference evidence of operation.
- textarea Control Gaps — Identify where controls are missing or insufficient. Each gap should map to a specific risk dimension.
- table Planned Mitigations — For each control gap, define a specific mitigation action with owner, target date, and current status.
Residual Risk and Decision
Calculate residual risk after controls and make a governance decision.
- text Inherent Risk Score — Combine impact and likelihood using your risk matrix. This is the risk before controls; a worked matrix sketch follows this section.
- select Control Effectiveness — Overall assessment of how well existing controls mitigate the identified risks.
- text Residual Risk Score — The remaining risk after controls are applied. This is the risk the organization accepts.
- select Risk Appetite Alignment — Compare residual risk against the organization's stated risk appetite for AI systems.
- select Governance Decision — The formal decision based on the risk assessment. "Approved with conditions" requires specific conditions to be documented.
- checklist Escalation Triggers — Define specific conditions that trigger re-assessment. These should be measurable and monitorable.
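Once the matrix is fixed, the inherent and residual scores are simple arithmetic. The sketch below assumes an impact-by-likelihood product and illustrative reduction factors per control-effectiveness rating; substitute your organization's own scales and matrix.

```python
# Illustrative scales; replace with your own risk matrix definitions.
IMPACT = {"none": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost_certain": 5}

# Assumed reduction factor applied for each control-effectiveness rating.
EFFECTIVENESS_FACTOR = {"strong": 0.3, "adequate": 0.5, "partial": 0.7, "weak": 0.9, "none": 1.0}

def inherent_risk(impact: str, likelihood: str) -> int:
    return IMPACT[impact] * LIKELIHOOD[likelihood]            # 0-20 scale

def residual_risk(impact: str, likelihood: str, effectiveness: str) -> float:
    return inherent_risk(impact, likelihood) * EFFECTIVENESS_FACTOR[effectiveness]

print(inherent_risk("high", "likely"))                        # 12
print(residual_risk("high", "likely", "adequate"))            # 6.0
```

The residual score is then compared against the documented risk appetite to drive the governance decision.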
FAQs (3)
Who should be involved in an AI risk assessment?
At minimum: the business owner (understands the use case context), a data scientist (understands the technical implementation), a risk/compliance professional (understands the risk framework), and legal counsel (understands regulatory implications). For high-risk systems, include affected stakeholder representatives and external domain experts.
How is this different from a standard IT risk assessment?
AI risk assessments must address AI-specific risks that IT risk assessments typically miss: bias and fairness, model drift, training data quality, explainability requirements, emergent behaviors, and the unique liability questions around AI-generated outputs. This template adds these dimensions while maintaining compatibility with standard risk frameworks.
How often should risk assessments be refreshed?
Trigger-based: reassess when the system changes materially (new data sources, retrained model, expanded scope, new regulations). Cadence-based: critical systems quarterly, high-risk semi-annually, others annually. Always reassess after an incident involving the system.
ea-004 Produce
Model Card Template
ML Model Documentation Standard
A comprehensive model documentation template covering purpose, training data, performance metrics, limitations, ethical considerations, and deployment constraints. Based on the model cards concept introduced by Mitchell et al. (2019), extended for enterprise governance requirements.
Standards Mapping
NIST AI RMF MEASURE 2.5 NIST AI RMF MAP 1.5 EU AI Act Article 11 EU AI Act Annex IV ISO 42001 Annex A.4
COMPEL Domains
D7: MLOps D5: Use Case Management D17: Risk Management
Template sections (5)
Model Overview
High-level description of the model and its purpose.
- text Model Name — Include version number. Use the same name consistently across all governance documents.
- text Model ID — Link to the AI System Registry entry.
- text Model Type — Specify the algorithm family and specific implementation.
- textarea Purpose — State the intended use case clearly. What business decision does this model support? What action does it trigger?
- textarea Intended Users — List all teams and roles that consume this model's outputs, directly or indirectly.
- textarea Out-of-Scope Uses — Explicitly state what this model should NOT be used for. This prevents scope creep and misuse.
Training Data
Documentation of data used to train, validate, and test the model.
- textarea Training Data Sources — List every data source with time period and volume. This is required for EU AI Act Annex IV compliance.
- textarea Data Preprocessing — Document all transformations applied to the raw data. This supports reproducibility and auditability.
- table Data Splits — Use temporal splits for time-series data. Document the split methodology and ensure no data leakage; a leakage-safe split sketch follows this section.
- textarea Known Data Limitations — Be honest about data gaps. These directly affect model reliability and fairness assessment.
- textarea Personal Data Statement — If personal data is used, state the legal basis, minimization measures, and reference the Data Protection Impact Assessment.
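For the Data Splits field, a leakage-safe temporal split can be as small as the sketch below. Column names and cut-off dates are placeholders; the point is that later records never appear in earlier splits.

```python
import pandas as pd

def temporal_split(df: pd.DataFrame, time_col: str, train_end: str, val_end: str):
    """Split on time so records after a cut-off never leak into earlier splits.
    time_col must be a datetime column; cut-offs are ISO date strings."""
    train_end, val_end = pd.Timestamp(train_end), pd.Timestamp(val_end)
    train = df[df[time_col] < train_end]
    val = df[(df[time_col] >= train_end) & (df[time_col] < val_end)]
    test = df[df[time_col] >= val_end]
    return train, val, test

# Example with hypothetical column and cut-off values:
# train, val, test = temporal_split(loans, "application_date", "2023-07-01", "2024-01-01")
```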
Performance Metrics
Quantitative evaluation of model performance across key metrics.
- table Primary Metrics — Report metrics across all data splits AND production. The gap between test and production performance is critical to monitor.
- table Fairness Metrics — Report key metrics broken down by protected characteristics. Flag any group with a performance disparity greater than 20% or a disparate impact ratio below 0.8 (the four-fifths rule); a calculation sketch follows this section.
- textarea Performance Thresholds — Define the boundaries within which this model is acceptable. These become your drift detection triggers.
- textarea Benchmark Comparison — Show how this model compares to alternatives. This justifies the model choice and complexity.
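The disparate impact ratio flagged above is straightforward to compute: each group's favourable-outcome rate is divided by the best-performing group's rate. Group names and counts in the sketch below are placeholders.

```python
def disparate_impact_ratio(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favourable outcomes, total).
    Returns each group's selection rate relative to the highest group's rate."""
    rates = {group: fav / total for group, (fav, total) in outcomes.items()}
    reference = max(rates.values())
    return {group: rate / reference for group, rate in rates.items()}

ratios = disparate_impact_ratio({"group_a": (80, 100), "group_b": (55, 100)})
flagged = {group: r for group, r in ratios.items() if r < 0.8}   # four-fifths rule
print(ratios)   # {'group_a': 1.0, 'group_b': 0.6875}
print(flagged)  # {'group_b': 0.6875}
```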
Limitations and Ethical Considerations
Document known limitations, risks, and ethical considerations.
- textarea Known Limitations — Be thorough and honest. Undisclosed limitations are a governance and liability risk.
- textarea Ethical Considerations — Consider: fairness impacts, informed consent, autonomy, power asymmetries, and unintended consequences.
- textarea Failure Modes — Document how the model can fail and what happens when it does. Include mitigations for each failure mode.
Deployment and Monitoring
Operational deployment details and ongoing monitoring plan.
- text Deployment Environment — Document where and how the model runs in production.
- table Monitoring Plan — Define what is monitored, how often, and what happens when thresholds are breached; a threshold-check sketch follows this section.
- text Retraining Schedule — Document both scheduled and triggered retraining. Each retrained version needs a new model card version.
- textarea Rollback Plan — Every model deployment must have a tested rollback procedure.
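In practice a monitoring job reduces to comparing each scoring batch's metrics against the thresholds documented in the model card. The metric names and bounds below are placeholders; the breach list would feed whatever alerting mechanism the monitoring plan names.

```python
# Placeholder thresholds; take the real values from the model card's
# "Performance Thresholds" section.
THRESHOLDS = {
    "auc":               {"min": 0.78},
    "positive_rate":     {"min": 0.05, "max": 0.25},
    "null_feature_rate": {"max": 0.02},
}

def breached(metrics: dict[str, float]) -> list[str]:
    alerts = []
    for name, bounds in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            alerts.append(f"{name}: metric missing")
        elif "min" in bounds and value < bounds["min"]:
            alerts.append(f"{name}={value} below {bounds['min']}")
        elif "max" in bounds and value > bounds["max"]:
            alerts.append(f"{name}={value} above {bounds['max']}")
    return alerts

print(breached({"auc": 0.74, "positive_rate": 0.31, "null_feature_rate": 0.01}))
```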
FAQs (3)
What is the relationship between a model card and the AI System Registry?
The AI System Registry is the high-level inventory (what AI systems exist). The model card is the detailed technical documentation for each ML model. The registry entry links to the model card. One registry entry may have multiple model cards if the system uses multiple models.
Who is responsible for maintaining the model card?
The technical owner (typically the data scientist or ML engineer) creates and updates the model card. The business owner reviews it for accuracy of the business context sections. The model card should be updated with every model version change, significant data update, or monitoring threshold breach.
Is a model card required for third-party AI / vendor models?
Yes, but with adaptations. For vendor models where you do not have access to training data or architecture details, document what the vendor has disclosed and note gaps. Your procurement questionnaire (see AI Procurement Questionnaire template) should require vendors to provide model card-equivalent information.
ea-005 Model
Human Oversight Procedure
Decision Authority and Escalation Framework
A procedural document defining decision authority levels, escalation paths, override mechanisms, monitoring frequency, and intervention triggers for AI systems. Required by the EU AI Act for high-risk systems and a cornerstone of responsible AI governance.
Standards Mapping
EU AI Act Article 14 EU AI Act Article 14(4) ISO 42001 Annex A.8 ISO 42001 Clause 8.1 NIST AI RMF GOVERN 1.4
COMPEL Domains
D18: Governance Structure D15: Ethics & Fairness D17: Risk Management
Template sections (4)
Oversight Model Definition
Define the type and level of human oversight for the AI system.
- text AI System Reference — Link to the AI System Registry entry.
- select Oversight Model Type — Human-in-the-loop: human approves each decision. Human-on-the-loop: human monitors and can intervene. Human-in-command: human sets parameters, system operates within them. Select based on risk classification.
- textarea Justification — Explain why this oversight model is appropriate given the system's risk level, decision volume, and impact. This is an auditable justification.
- textarea Regulatory Requirement — Reference specific regulatory requirements that mandate human oversight for this system.
Decision Authority Matrix
Define who has authority to make decisions at each level.
- table Authority Levels — Define clear authority levels with specific decision rights at each level. This prevents both under- and over-escalation.
- textarea Override Authority — The ability to override AI outputs is a core human oversight requirement. Define who can override, under what conditions, and what must be documented.
- textarea Delegation Rules — Prevent informal delegation that erodes oversight quality. Document delegation rules clearly.
Monitoring and Review Procedures
Ongoing monitoring activities and scheduled reviews.
- checklist Real-Time Monitoring — Define what is monitored in real-time and what alerts are generated. Each monitoring item should have a clear threshold and alerting mechanism.
- table Periodic Review Schedule — Define a multi-layered review schedule that covers technical, ethical, and business dimensions.
- textarea Review Documentation — Define the output of each review and how it is stored. This creates the audit trail required by ISO 42001.
Intervention and Escalation
Procedures for intervening in system operation.
- checklist Intervention Triggers — List specific, measurable conditions that require human intervention. Avoid vague triggers like "when needed."
- table Intervention Actions — Map each trigger to a specific action with timeline and required authority level; a minimal mapping sketch follows this section.
- textarea Escalation Pathway — Define the step-by-step escalation process. Include communication channels, expected response times, and documentation requirements at each step.
- textarea Fallback Procedure — Every AI system must have a tested fallback procedure. This ensures business continuity when the AI system is unavailable.
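One way to keep the trigger-to-action mapping unambiguous is to express it as data, as in the sketch below. Severity labels, authorities, and response times are examples only; take the real values from the intervention actions table.

```python
# Hypothetical escalation mapping: severity -> (minimum authority, response expectation).
ESCALATION = {
    "low":      ("Technical Owner",                         "next business day"),
    "medium":   ("Business Unit AI Owner",                  "within 24 hours"),
    "high":     ("AI Governance Board",                     "within 4 hours"),
    "critical": ("AI Governance Board + Executive Sponsor", "immediate, invoke fallback"),
}

def escalate(severity: str) -> str:
    authority, sla = ESCALATION[severity]
    return f"Escalate to {authority}; respond {sla}."

print(escalate("high"))  # Escalate to AI Governance Board; respond within 4 hours.
```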
FAQs (3)
Does every AI system need a human oversight procedure?
Yes, but the depth varies by risk classification. High-risk systems (EU AI Act Article 6) require comprehensive procedures with human-in-the-loop or human-on-the-loop controls. Lower-risk systems may require only monitoring dashboards and periodic reviews. The oversight model should be proportionate to the system's risk level and impact.
How do we ensure human oversight doesn't create bottlenecks?
Use a tiered approach: human-in-the-loop only for the highest-impact decisions, human-on-the-loop for most decisions with exception-based review, and automated monitoring with human review at defined intervals. Design the oversight model around decision volume and acceptable latency, not just risk aversion.
What training do human overseers need?
At minimum: understanding of what the AI system does, how to interpret its outputs, common failure modes, how to exercise override authority, and the escalation procedure. ISO 42001 Clause 7.2 requires competence, and EU AI Act Article 14(4)(a) requires overseers to "fully understand the capacities and limitations of the high-risk AI system."
ea-006 Organize
AI Governance RACI
Responsibility Assignment Matrix for AI Governance
A responsibility assignment matrix (RACI) for AI governance activities across the Center of Excellence, business units, IT, legal, compliance, and ethics board. Eliminates ambiguity about who is Responsible, Accountable, Consulted, and Informed for each governance activity.
Standards Mapping
ISO 42001 Clause 5.3 ISO 42001 Clause 7.1 NIST AI RMF GOVERN 2.1 NIST AI RMF GOVERN 2.2
COMPEL Domains
D18: Governance Structure D1: Leadership Sponsorship D2: Talent Strategy
Template sections (4)
RACI Matrix Header
Define the governance roles and bodies included in the RACI matrix.
- text Document Title — Use a standard naming convention.
- textarea Governance Roles — List all organizational roles/bodies involved in AI governance. Customize based on your organizational structure. Each role becomes a column in the RACI matrix.
- textarea RACI Legend — Standard RACI definitions. The key rule: only ONE role can be Accountable for each activity; a validation sketch follows this section.
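The one-Accountable rule can be checked automatically when the matrix is kept as data. The activities and roles below are hypothetical; the check enforces exactly one "A" and at least one "R" per activity.

```python
# Hypothetical RACI matrix: activity -> {role: letter}.
raci = {
    "Approve AI policy":          {"CEO": "A", "Governance Board": "R", "CoE": "C", "Legal": "C"},
    "Maintain use-case register": {"CoE": "A", "Business Units": "R", "IT": "C"},
    "Deploy high-risk system":    {"Governance Board": "A", "CoE": "R", "Business Units": "R"},
}

def violations(matrix: dict[str, dict[str, str]]) -> list[str]:
    problems = []
    for activity, assignments in matrix.items():
        accountable = [role for role, letter in assignments.items() if letter == "A"]
        if len(accountable) != 1:
            problems.append(f"{activity}: {len(accountable)} Accountable roles")
        if "R" not in assignments.values():
            problems.append(f"{activity}: no Responsible role")
    return problems

print(violations(raci))  # [] when every activity has exactly one A and at least one R
```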
Strategy and Governance Activities
RACI assignments for strategic AI governance activities.
- table AI Strategy Definition — Strategic activities are typically Accountable at executive level, with the Governance Board Responsible for execution.
- table Program Management — Program management activities are typically owned by the CoE with business unit collaboration.
AI Lifecycle Governance Activities
RACI assignments for governance activities across the AI system lifecycle.
- table Development Phase — Development governance ensures systems are assessed and documented before deployment.
- table Deployment Phase — Higher risk tiers require higher-level approval authority.
- table Operations Phase — Operations governance ensures systems remain compliant and effective throughout their lifecycle.
Compliance and Audit Activities
RACI assignments for compliance, audit, and regulatory activities.
- table Compliance Activities — Legal/compliance monitors requirements, CoE implements controls, Audit validates effectiveness.
- table Training & Awareness — Training activities typically span CoE (content) and HR (delivery/tracking).
FAQs (3)
What if our organization does not have a dedicated AI CoE?
Map CoE responsibilities to the function that currently owns AI governance — this might be IT governance, enterprise risk management, or a cross-functional working group. The RACI template works regardless of organizational structure; the key is that every activity has exactly one Accountable role.
How do we handle RACI conflicts between business units?
When multiple business units use the same AI system, designate one as the primary Accountable owner (typically the unit with the highest-risk use case or the largest user base). Other business units are Responsible for their specific use case compliance. The CoE arbitrates disputes.
Should the RACI be different for different risk tiers?
Yes. Higher risk tiers typically push accountability to higher organizational levels and add more Consulted roles. The template shows this with separate deployment approval rows for Tier 1-2 vs. Tier 3-4. Customize the matrix based on your risk tier definitions.
ea-007 Evaluate
ISO 42001 Readiness Checklist
Clause-by-Clause Self-Assessment
A comprehensive self-assessment checklist covering all ISO/IEC 42001:2023 requirements (Clauses 4-10 and Annex A controls). Enables organizations to evaluate their readiness for ISO 42001 certification and identify gaps requiring remediation.
Standards Mapping
ISO 42001 Clauses 4-10 ISO 42001 Annex A ISO 42001 Annex B
COMPEL Domains
D16: Regulatory Compliance D18: Governance Structure D9: Continuous Improvement
Template sections (5)
Clause 4: Context of the Organization
Requirements for understanding the organization and its context in relation to AI.
- checklist 4.1 Understanding the organization and its context — Assess whether you have formally documented the organizational context for AI management. This includes regulatory environment, market expectations, and internal capability assessment.
- checklist 4.2 Understanding the needs of interested parties — Interested parties include: regulators, customers, employees, shareholders, affected communities, and AI system users. Document their requirements and how they are met.
- checklist 4.3 Scope of the AIMS — The scope must be clear about which AI systems, business units, and processes are covered. Exclusions must be justified.
- checklist 4.4 AI Management System — The AIMS must be established, implemented, maintained, and continually improved. This is the overarching requirement.
Clause 5: Leadership
Requirements for leadership commitment and organizational direction.
- checklist 5.1 Leadership and commitment — Top management must be visibly committed. Evidence includes: meeting minutes, resource allocation decisions, policy sign-off, and communication records.
- checklist 5.2 AI Policy — Use the AI Policy Template (EA-001) to create the policy. Verify it meets all ISO 42001 5.2 requirements.
- checklist 5.3 Organizational roles, responsibilities, and authorities — Use the RACI Matrix (EA-006) to document role assignments. Ensure reporting lines to top management are clear.
Clauses 6-7: Planning and Support
Requirements for AIMS planning, risk management, and support resources.
- checklist 6.1 Actions to address risks and opportunities — Use the Risk Assessment Template (EA-003) for AI system-level risk assessments. Organizational-level risks to the AIMS itself must also be addressed.
- checklist 6.2 AI objectives and planning — AI objectives should be SMART (Specific, Measurable, Achievable, Relevant, Time-bound). They should cascade from policy to operational targets.
- checklist 7.1-7.5 Support requirements — Support requirements ensure the organization has the people, skills, awareness, communication, and documentation infrastructure to operate the AIMS.
Clauses 8-10: Operation, Evaluation, and Improvement
Requirements for operating, evaluating, and improving the AIMS.
- checklist 8.1 Operational planning and control — This clause requires that AI governance is not just designed but actually operates. Evidence of operation (logs, reports, meeting minutes) is essential.
- checklist 8.2-8.4 AI system lifecycle — These clauses map to the core AI governance lifecycle: assess impact, treat risks, and document results. Use the Risk Assessment Template and Human Oversight Procedure.
- checklist 9.1-9.3 Performance evaluation — Performance evaluation requires three mechanisms: monitoring/measurement, internal audit, and management review. Each produces documented outputs.
- checklist 10.1-10.2 Improvement — The improvement clause requires a systematic approach to non-conformity management. Track non-conformities, corrective actions, and their effectiveness.
Annex A Controls Assessment
Self-assessment of ISO 42001 Annex A reference controls.
- checklist A.2-A.4: Policy, Resources, and System Design — These controls establish the foundation. A.2 aligns with the AI Policy, A.3 with the RACI, A.4 with Model Cards and system documentation.
- checklist A.5-A.7: Risk, Impact, and Lifecycle — These controls address the core governance activities. They map directly to the Risk Assessment Template, Model Card, and Human Oversight Procedure.
- checklist A.8-A.10: Operations, Third Parties, and Data — A.8 maps to the Human Oversight Procedure, A.9 to the Procurement Questionnaire, A.10 to data governance documentation.
FAQs (3)
Can we use this checklist as evidence for ISO 42001 certification?
This checklist is a self-assessment tool to identify gaps and prioritize remediation. It is NOT a substitute for a formal internal audit or third-party certification audit. However, a completed and maintained checklist demonstrates systematic gap analysis, which auditors view favorably. Use it to prepare for formal audits, not to replace them.
What is the typical timeline from gap assessment to certification?
Organizations with mature quality management systems (e.g., existing ISO 9001 or 27001 certification) typically achieve ISO 42001 readiness in 6-12 months. Organizations building from scratch may need 12-18 months. The timeline depends on the number and severity of gaps, available resources, and the scope of AI systems covered.
Which Annex A controls are most commonly failed in audits?
Based on early certification experience, the most challenging areas are: A.5 (comprehensive AI risk assessment beyond standard IT risks), A.6 (formal impact assessment on individuals and society), A.8 (documented human oversight procedures with evidence of operation), and A.10 (data quality assurance with documented provenance). These require operational evidence, not just documentation.
ea-008 Model
AI Procurement Questionnaire
Vendor AI Governance Assessment
A structured questionnaire for assessing vendor AI governance capabilities during procurement. Covers data practices, model transparency, bias testing, security controls, compliance claims, and SLA terms. Ensures third-party AI meets the same governance standards as internally developed systems.
Standards Mapping
ISO 42001 Annex A.9 ISO 42001 Clause 8.1 NIST AI RMF GOVERN 6.1 EU AI Act Article 28
COMPEL Domains
D16: Regulatory Compliance D17: Risk Management D13: Security Hardening
Template sections (5)
Vendor Profile
Basic information about the vendor and the AI product/service.
- text Vendor Name — Legal entity name as registered.
- text Product/Service Name — Specific product or service being evaluated, not just the vendor.
- textarea AI Capabilities Description — Request a detailed description. "AI-powered" is insufficient — you need to understand what the AI actually does.
- select Deployment Model — The deployment model affects data residency, security controls, and your ability to monitor the AI system.
- textarea Data Processing Locations — Critical for GDPR compliance and data sovereignty requirements.
Data Practices
How the vendor handles data used in AI systems.
- checklist Training Data Usage — This is a critical governance question. Many vendors use customer data to improve models without explicit consent. Your data governance policy likely has requirements here.
- textarea Data Retention — Ensure alignment with your data retention policies and regulatory requirements. Include model outputs and inference logs, not just input data.
- checklist Data Security — Request evidence of certifications, not just claims. SOC 2 Type II reports and ISO 27001 certificates should be provided.
Model Transparency and Performance
Vendor transparency about AI model design, performance, and limitations.
- checklist Model Documentation — Request the vendor's model card. If they cannot provide one, this is a red flag. Use the Model Card Template (EA-004) as a reference for what should be documented.
- textarea Explainability — Explainability requirements depend on your use case. High-risk applications (credit, hiring, healthcare) typically require individual-level explanations.
- textarea Performance Guarantees — Performance SLAs should specify metrics, measurement methodology, reporting frequency, and remedies for underperformance.
Bias Testing and Fairness
Vendor's approach to bias detection, testing, and mitigation.
- checklist Bias Testing — Request recent bias testing reports. Ask about testing methodology — not all approaches are equally rigorous.
- textarea Fairness Metrics — The choice of fairness metric matters. Different metrics can conflict. Ask the vendor which metric they optimize for and why.
- textarea Incident Reporting — A vendor that has never found bias has likely not looked hard enough. Past incident disclosure demonstrates maturity, not weakness.
Compliance and Contractual Terms
Regulatory compliance claims and contractual governance provisions.
- checklist Regulatory Compliance Claims — Require evidence for every compliance claim. "Compliant" without certification or documentation is insufficient.
- textarea Right to Audit — Your ISO 42001 obligations extend to outsourced processes. You need contractual rights to verify vendor governance claims.
- textarea Liability and Indemnification — AI liability allocation is critical. Standard limitation of liability clauses may not adequately cover AI-specific risks (bias, automated decisions, regulatory penalties).
- textarea Exit Provisions — Vendor lock-in is a significant risk for AI systems. Ensure exit provisions cover data portability, model portability, and transition support.
FAQs (3)
Should every AI vendor complete this questionnaire?
Yes, but proportionate to risk. For low-risk AI tools (e.g., grammar checking), a shortened version covering data practices and security may suffice. For high-risk AI (e.g., credit scoring, healthcare diagnostics), the full questionnaire is essential. The risk tier determines the minimum required sections.
What if a vendor refuses to answer certain questions?
Vendor unwillingness to disclose is itself a risk indicator. Document the refusal and assess whether the information gap creates unacceptable risk. For high-risk applications, inability to verify vendor governance claims may be a disqualifying factor. Consider whether alternative vendors provide greater transparency.
How does this relate to our existing vendor risk management process?
This questionnaire supplements (not replaces) your existing vendor risk management process. It adds AI-specific questions that standard vendor assessments miss. Ideally, integrate these questions into your procurement workflow so they are triggered automatically when a vendor product includes AI capabilities.
ea-009 Evaluate
Audit Evidence Checklist
Evidence Collection Guide for AI Governance Audits
A comprehensive evidence collection guide for AI governance audits covering documentation, controls, testing records, incident logs, training records, and management review outputs. Organized by ISO 42001 clause to support certification audits and internal assurance.
Standards Mapping
ISO 42001 Clause 9.2 ISO 42001 Clause 7.5 ISO 42001 Clause 9.1 NIST AI RMF GOVERN 4.1
COMPEL Domains
D16: Regulatory Compliance D18: Governance Structure D9: Continuous Improvement
Template sections (5)
Governance Documentation
Core governance documents that auditors will request.
- checklist Policy Documents — Each document must be: (a) current version, (b) formally approved, (c) communicated to relevant parties, (d) accessible, and (e) reviewed within its review cycle. Auditors check all five.
- checklist Organizational Structure Documents — Demonstrate that governance roles are formally defined, not just informally understood. Auditors look for documented authority, not just practice.
- checklist Scope and Context Documents — These documents define the boundaries of the management system. The scope statement is particularly important — auditors will test whether governance actually covers everything within scope.
Operational Evidence
Evidence that governance processes are actually operating, not just documented.
- checklist Risk Management Evidence — Auditors want to see that risk assessments are not just created but actively maintained. Look for evidence of reviews, updates, and management engagement.
- checklist AI System Lifecycle Evidence — The lifecycle evidence trail must be continuous. Gaps (e.g., a system deployed without approval documentation) are audit findings.
- checklist Vendor Management Evidence — ISO 42001 extends to outsourced AI processes. Auditors will check that vendor AI is governed to the same standard as internal AI.
Testing and Monitoring Records
Evidence of ongoing testing, monitoring, and performance evaluation.
- checklist Performance Testing — Testing should be regular (not just pre-deployment) and documented with results, findings, and actions taken.
- checklist Monitoring Records — Continuous monitoring evidence shows the AIMS is operational. Auditors expect to see trend data, not just point-in-time snapshots.
- checklist Internal Audit Records — Internal audits must be conducted by competent, independent auditors. The audit program should cover all AIMS processes over a defined cycle.
Incident and Improvement Records
Evidence of incident management, non-conformity handling, and continual improvement.
- checklist Incident Records — An empty incident log is a red flag for auditors — it suggests incidents are not being detected or reported, not that they do not exist.
- checklist Management Review Records — Management reviews must be at planned intervals (typically quarterly or semi-annually). Meeting minutes must show that the required inputs were considered and decisions were made.
- checklist Continual Improvement Evidence — Demonstrate that the AIMS is improving, not static. Trend data, maturity scores over time, and process improvements all serve as evidence.
Training and Awareness Records
Evidence that people are competent and aware of their AI governance responsibilities.
- checklist Competency Evidence — ISO 42001 Clause 7.2 requires that persons are competent based on education, training, or experience. Keep records of all training and competency assessments.
- checklist Awareness Evidence — Awareness goes beyond training — it includes ongoing communication to ensure people understand and remember their responsibilities.
FAQs (3)
How should evidence be organized for an ISO 42001 audit?
Organize evidence by ISO 42001 clause number for certification audits. Maintain a master evidence index that maps each clause/control to the specific evidence location (document management system, SharePoint, or governance platform). This checklist serves as that mapping tool. Auditors appreciate well-organized evidence — it demonstrates management system maturity.
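If the master evidence index is kept as a simple table, for example a CSV with clause, evidence reference, and location columns (an assumption rather than a prescribed format), coverage against the in-scope clauses can be checked with a few lines:

```python
import csv

# Extend to the full set of clauses and Annex A controls in scope for your AIMS.
IN_SCOPE = ["4.1", "4.2", "4.3", "4.4", "5.1", "5.2", "5.3", "6.1", "6.2",
            "7.2", "7.5", "8.1", "9.1", "9.2", "9.3", "10.1", "10.2"]

def uncovered(index_csv: str) -> list[str]:
    """Return in-scope clauses with no evidence entry in the index."""
    with open(index_csv, newline="") as fh:
        covered = {row["clause"].strip() for row in csv.DictReader(fh)}
    return [clause for clause in IN_SCOPE if clause not in covered]

# print(uncovered("evidence_index.csv"))   # hypothetical file name
```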
What is the most common audit finding for first-time ISO 42001 audits?
The most common finding is "documented but not implemented" — organizations create policies and procedures but cannot demonstrate they are consistently followed. Evidence of operation (logs, meeting minutes, signed approvals, monitoring outputs) is more important than document quality. A simple process that works beats an elegant process that is only on paper.
How far back should evidence records be retained?
ISO 42001 requires documented information to be controlled but does not specify retention periods. Best practice: retain operational records for at least 3 years (covering two full audit cycles), incident records for 5+ years, and policy version history indefinitely. Align with your organization's records management policy and any regulatory requirements.