
Industry Knowledge Base

Financial Services AI Governance

Regulatory landscape, sector risk overlays, compliance checklist, and COMPEL methodology mappings for AI systems used across banking, insurance, capital markets, and payments. Covers EU AI Act, DORA, SEC, Basel, FCA, PCI-DSS, and SR 11-7. Current as of April 2026.

At a glance: 7 regulatory requirements · 6 risk overlays · 16 compliance items · 6 COMPEL stage maps

Regulatory Landscape

Financial services AI governance sits at the intersection of model risk management (SR 11-7, Basel), AI-specific regulation (EU AI Act), operational resilience (DORA), market conduct (SEC, FCA), and payment security (PCI-DSS). Each regime below adds distinct obligations.

European Commission · effective 2025-08-02

EU AI Act High-Risk Classification for Financial Services

EU AI Act (Regulation 2024/1689) Annex III classifies AI systems used in credit scoring, creditworthiness assessment, insurance pricing, and fraud detection as high-risk. Financial institutions deploying these systems face mandatory conformity assessment, risk management, data governance, transparency, and human oversight requirements.


Key Provisions

  • Annex III Section 5(b): AI systems evaluating creditworthiness or establishing credit scores are high-risk
  • Annex III Section 5(a): AI systems determining access to essential financial services (insurance, banking) are high-risk
  • Mandatory risk management system (Article 9) with continuous monitoring
  • Data governance requirements (Article 10) for training datasets including bias assessment
  • Technical documentation (Article 11) and record-keeping (Article 12) requirements
  • Transparency obligations (Article 13) — users must understand AI output
  • Human oversight mechanisms (Article 14) for credit and insurance decisions
  • Accuracy, robustness, and cybersecurity requirements (Article 15)

Impacted AI Types

  • Credit scoring and creditworthiness assessment
  • Insurance risk pricing and underwriting
  • Fraud detection and anti-money laundering
  • Customer risk profiling
  • Automated lending decisions

Compliance Actions

  • Map all AI systems against Annex III high-risk categories
  • Implement Article 9 risk management system for each high-risk AI
  • Conduct Article 10 data governance assessments for training datasets
  • Create technical documentation packages per Article 11
  • Design human oversight mechanisms per Article 14
  • Register high-risk AI systems in EU database per Article 49
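The first compliance action above, mapping systems against Annex III categories, can be sketched as a keyword screen over a system inventory. This is a minimal illustration only: the category keywords, system names, and inventory shape below are assumptions for demonstration, not the regulation's official taxonomy.

```python
# Illustrative sketch: screen an AI system inventory against simplified
# EU AI Act Annex III Section 5 triggers. Keywords are assumptions for
# demonstration, not official Annex III language.
ANNEX_III_SECTION_5 = {
    "5(a)": {"insurance pricing", "essential service access"},
    "5(b)": {"credit scoring", "creditworthiness assessment"},
}

def annex_iii_matches(use_cases):
    """Return the Annex III Section 5 entries triggered by a system's use cases."""
    uses = set(use_cases)
    return sorted(sec for sec, kw in ANNEX_III_SECTION_5.items() if kw & uses)

# Hypothetical inventory entries
inventory = {
    "retail-credit-model": ["credit scoring", "fraud detection"],
    "service-chatbot": ["customer service"],
}
high_risk = {name: m for name, uses in inventory.items()
             if (m := annex_iii_matches(uses))}
```

In practice this kind of screen only produces candidates; the classification decision itself needs legal review per system.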

European Commission · effective 2025-01-17

Digital Operational Resilience Act (DORA)

DORA (Regulation 2022/2554) establishes uniform requirements for ICT risk management, incident reporting, operational resilience testing, and third-party risk management for financial entities. AI systems are explicitly within scope as critical ICT services requiring operational resilience governance.


Key Provisions

  • ICT risk management framework (Chapter II) applies to all AI systems in financial operations
  • ICT incident classification and reporting (Chapter III) — AI system failures are reportable incidents
  • Digital operational resilience testing (Chapter IV) — AI systems must undergo threat-led penetration testing
  • Third-party ICT risk management (Chapter V) — AI vendor concentration risk assessment required
  • Critical ICT third-party provider oversight (Chapter V, Section II) — cloud AI providers may be designated critical
  • Information sharing (Chapter VI) — intelligence sharing on AI-related threats

Impacted AI Types

  • All AI systems in financial institution operations
  • Cloud-hosted AI inference services
  • Third-party AI models and APIs
  • AI-powered trading systems
  • Customer-facing AI services

Compliance Actions

  • Include AI systems in ICT risk management framework
  • Classify AI system failures in incident reporting taxonomy
  • Include AI systems in operational resilience testing program
  • Assess AI vendor concentration risk across critical functions
  • Establish AI system recovery time objectives and procedures
  • Implement AI-specific threat intelligence monitoring

SEC · effective 2024-03-01

SEC AI Oversight and AI-Washing Enforcement

SEC enforcement actions and guidance addressing AI use in investment advice, trading, and public disclosures. Includes enforcement against "AI washing" (misleading claims about AI capabilities) and proposed rules on predictive analytics and conflicts of interest in broker-dealer and investment adviser AI systems.


Key Provisions

  • Enforcement against misleading AI claims in public disclosures and marketing materials
  • Fiduciary duty extends to AI-driven investment advice and portfolio management
  • Proposed Rule: Conflicts of Interest Associated with the Use of Predictive Data Analytics
  • Examination priorities include AI use in trading, compliance, and customer interactions
  • AI model governance expectations for registered investment advisers
  • Recordkeeping requirements for AI-assisted investment decisions

Impacted AI Types

  • AI-driven investment advice and robo-advisors
  • Algorithmic and AI-powered trading systems
  • AI in compliance monitoring and surveillance
  • Customer-facing AI interactions (chatbots, recommendations)
  • AI in public company financial disclosures

Compliance Actions

  • Review all public AI claims for accuracy and substantiation
  • Document AI model governance for investment decision systems
  • Implement conflict-of-interest assessment for predictive analytics
  • Establish recordkeeping protocols for AI-assisted decisions
  • Include AI governance in SEC examination readiness program
  • Train compliance teams on AI-specific regulatory expectations

Basel Committee on Banking Supervision · guidance effective 2024-06-01

Basel Committee AI/ML Guidance for Banking

Basel Committee consultative documents and guidance on AI/ML applications in banking, addressing model risk management, operational risk, and supervisory expectations for AI-driven credit risk models, stress testing, and market risk calculations.


Key Provisions

  • AI/ML models used for regulatory capital calculations subject to enhanced validation
  • Model risk management expectations aligned with SR 11-7 principles
  • Explainability requirements for AI models used in supervisory reporting
  • Data quality standards for AI training data used in risk models
  • Governance expectations including AI-specific model inventory and tiering
  • Stress testing requirements for AI model robustness

Impacted AI Types

  • Credit risk AI models (PD, LGD, EAD)
  • Market risk AI models
  • Operational risk AI analytics
  • Stress testing AI models
  • Regulatory reporting AI systems

Compliance Actions

  • Tier AI models by materiality and regulatory impact
  • Apply enhanced validation for AI models in capital calculations
  • Document explainability analysis for supervisory AI models
  • Establish AI-specific stress testing scenarios
  • Align AI model governance with Basel Pillar 2 expectations

FCA (UK Financial Conduct Authority) · guidance effective 2024-01-01

FCA AI Principles for Financial Services

FCA discussion papers and guidance on AI use in UK financial services, establishing expectations for consumer protection, market integrity, and competition in the context of AI-driven financial services.


Key Provisions

  • Consumer Duty applies to AI-driven product recommendations and pricing
  • AI must not create unfair outcomes for vulnerable customers
  • Market integrity expectations for AI trading systems
  • Senior Management Functions (SMF) accountability for AI outcomes
  • AI model governance expectations aligned with SS1/23 (model risk management)
  • Third-party AI model oversight expectations

Impacted AI Types

  • AI in customer product recommendations
  • AI pricing and personalization engines
  • Algorithmic trading systems
  • AI in claims handling and underwriting
  • AI-powered financial advice

Compliance Actions

  • Map Consumer Duty requirements to AI-driven customer interactions
  • Assess AI pricing models for vulnerable customer impact
  • Designate SMF accountable for AI governance outcomes
  • Implement SS1/23 model risk management for AI models
  • Conduct third-party AI model due diligence

PCI SSC · effective 2024-03-31

PCI-DSS Intersection with AI in Payment Systems

PCI-DSS v4.0.1 compliance requirements as they intersect with AI systems processing, storing, or transmitting cardholder data. AI-specific considerations for tokenization, fraud detection, and payment processing automation.


Key Provisions

  • AI systems processing cardholder data are in scope for PCI-DSS assessment
  • AI fraud detection systems must comply with data storage and encryption requirements
  • AI model training on transaction data requires data minimization controls
  • Customized approach allows AI-driven security controls if meeting control objectives
  • Third-party AI payment processors require Level 1 service provider assessment
  • AI system access controls must meet PCI-DSS strong authentication requirements

Impacted AI Types

  • AI fraud detection in payment processing
  • AI transaction monitoring and risk scoring
  • AI-powered payment authorization
  • Chatbots handling payment information
  • AI in chargeback and dispute resolution

Compliance Actions

  • Include AI systems in PCI-DSS scope assessment
  • Verify AI training data does not retain prohibited cardholder data elements
  • Validate AI fraud detection complies with encryption at rest and in transit
  • Assess AI vendors as service providers under PCI-DSS
  • Document AI system access controls in PCI-DSS evidence package

Federal Reserve / OCC · effective 2011-04-04

SR 11-7: Supervisory Guidance on Model Risk Management

Federal Reserve SR 11-7 and OCC Bulletin 2011-12 establishing model risk management (MRM) expectations for banking institutions. AI/ML models are explicitly within scope. Updated interpretive guidance emphasizes AI-specific validation challenges including explainability, concept drift, and data representativeness.


Key Provisions

  • All AI/ML models must be included in the model inventory and risk-tiered
  • Model validation must address AI-specific risks (opacity, instability, data dependence)
  • Three lines of defense: model development, validation, and audit
  • Ongoing monitoring requirements including performance, stability, and conceptual soundness
  • Model risk appetite and limits must account for AI-specific uncertainty
  • Board and senior management oversight of material AI model risks

Impacted AI Types

  • All AI/ML models used in banking operations
  • Credit decision models
  • Fraud and AML models
  • Market risk and pricing models
  • Operational risk models
  • Regulatory capital models

Compliance Actions

  • Include all AI/ML models in enterprise model inventory
  • Apply risk tiering to AI models based on materiality and complexity
  • Establish independent model validation for AI systems
  • Implement AI-specific monitoring for drift, stability, and performance
  • Document explainability analysis for each AI model
  • Report material AI model risks to board risk committee
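The drift-monitoring action above is commonly implemented with a population stability index (PSI) comparing a model's production score distribution to its validation baseline. The bin values below are hypothetical; the thresholds reflect the widely used rule of thumb rather than any regulatory mandate.

```python
import math

def population_stability_index(expected, actual):
    """PSI between two binned score distributions (lists of bin proportions)."""
    return sum((a - e) * math.log(a / e)
               for e, a in zip(expected, actual) if e > 0 and a > 0)

# Hypothetical score distributions: validation baseline vs. production
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.30, 0.27, 0.23, 0.20]
psi = population_stability_index(baseline, current)

# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 likely drift
status = "stable" if psi < 0.1 else ("investigate" if psi <= 0.25 else "drift")
```

A monitoring program would compute this per model and per feature on a schedule, feeding breaches into the validation findings workflow.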

Sector Risk Overlays

Financial services AI risk categories layered over general AI governance. Each overlay identifies the use class, EU AI Act classification, example systems, required mitigations, and owning COMPEL stages.

Credit Scoring and Lending

EU AI Act classification: high-risk

AI systems used in credit scoring, creditworthiness assessment, and automated lending decisions are classified as high-risk under the EU AI Act and subject to fair lending laws globally.

Example Systems

  • AI-driven credit scoring models
  • Automated loan origination decisions
  • Credit limit optimization algorithms
  • Debt collection prioritization
  • Alternative data credit assessment

Mitigation Requirements

  • Fair lending bias testing across protected classes
  • Adverse action explanation capability
  • Model explainability documentation for regulators
  • Independent model validation per SR 11-7
  • Human override capability for credit decisions
  • Performance monitoring with demographic breakdowns

COMPEL stages: Calibrate, Model, Evaluate
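Fair lending bias testing of the kind listed above typically starts with a selection-rate comparison under the four-fifths rule. The group names and counts below are hypothetical, and this ratio is only a screening statistic, not a full disparate impact analysis.

```python
def selection_rate_ratios(approvals, applicants):
    """Each group's approval rate divided by the highest group's rate."""
    rates = {g: approvals[g] / applicants[g] for g in applicants}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical application and approval counts by group
applicants = {"group_a": 1000, "group_b": 800}
approvals = {"group_a": 620, "group_b": 380}

ratios = selection_rate_ratios(approvals, applicants)
# Four-fifths rule: ratios below 0.8 warrant disparate impact review
flagged = sorted(g for g, r in ratios.items() if r < 0.8)
```

Flagged groups would then move to regression-based testing and remediation documentation, as the evidence requirements in the compliance checklist describe.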

Fraud Detection and AML/KYC

EU AI Act classification: high-risk

AI systems for fraud detection, anti-money laundering, and know-your-customer processes. High false positive rates create customer friction; false negatives create regulatory and financial risk.

Example Systems

  • Real-time transaction fraud detection
  • Anti-money laundering transaction monitoring
  • Customer identity verification (KYC)
  • Suspicious activity report generation
  • Sanctions screening automation

Mitigation Requirements

  • False positive and false negative rate monitoring by customer segment
  • Alert investigation audit trail
  • Model tuning governance with regulatory impact assessment
  • Sanctions list update latency monitoring
  • Regulatory examination readiness documentation
  • Customer impact assessment for false positive scenarios

COMPEL stages: Model, Produce, Evaluate
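The first mitigation above, error-rate monitoring by customer segment, reduces to tracking per-segment confusion counts. The segments and counts in this sketch are hypothetical.

```python
def alert_error_rates(confusion_by_segment):
    """Per-segment false positive and false negative rates from confusion counts."""
    out = {}
    for segment, c in confusion_by_segment.items():
        fpr = c["fp"] / (c["fp"] + c["tn"]) if (c["fp"] + c["tn"]) else 0.0
        fnr = c["fn"] / (c["fn"] + c["tp"]) if (c["fn"] + c["tp"]) else 0.0
        out[segment] = {"fpr": fpr, "fnr": fnr}
    return out

# Hypothetical monthly confusion counts per customer segment
counts = {
    "retail": {"tp": 40, "fp": 300, "tn": 9600, "fn": 10},
    "corporate": {"tp": 25, "fp": 60, "tn": 1900, "fn": 15},
}
rates = alert_error_rates(counts)
```

Segment-level rates make the trade-off explicit: tuning down false positives (customer friction) raises false negatives (regulatory exposure), which is why the model tuning governance item requires a regulatory impact assessment.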

Algorithmic and AI-Powered Trading

EU AI Act classification: high-risk

AI systems making or assisting trading decisions, including execution algorithms, predictive models, and market-making automation.

Example Systems

  • AI execution algorithms
  • Predictive trading models
  • AI market-making systems
  • Sentiment analysis for trading signals
  • Portfolio rebalancing automation

Mitigation Requirements

  • Kill switch and circuit breaker mechanisms
  • Pre-trade risk controls and position limits
  • Market impact monitoring and reporting
  • Back-testing and stress testing framework
  • Regulatory reporting for algorithmic trading (MiFID II/Reg SCI)
  • Model inventory and change governance

COMPEL stages: Model, Produce, Evaluate

Insurance Underwriting and Pricing

EU AI Act classification: high-risk

AI systems determining insurance premiums, underwriting decisions, and claims processing. Subject to fair pricing requirements and anti-discrimination laws.

Example Systems

  • AI-driven premium pricing models
  • Automated underwriting decisions
  • Claims fraud detection
  • Claims triage and processing automation
  • Risk pool segmentation

Mitigation Requirements

  • Actuarial review of AI pricing models
  • Non-discrimination testing across protected characteristics
  • Policyholder explanation capability for AI-driven decisions
  • Claims processing audit trail with human review option
  • Model validation aligned with actuarial standards

COMPEL stages: Calibrate, Model, Evaluate

Customer-Facing AI

EU AI Act classification: limited-risk

AI systems interacting directly with customers including robo-advisors, chatbots, and personalized product recommendations.

Example Systems

  • Robo-advisory platforms
  • AI-powered financial chatbots
  • Personalized product recommendations
  • Customer service automation
  • Financial wellness tools

Mitigation Requirements

  • Transparency: disclose AI nature of interaction
  • Suitability assessment for AI investment advice
  • Vulnerable customer identification and escalation
  • Complaint handling for AI-driven interactions
  • Consumer Duty compliance assessment (FCA)
  • Fiduciary duty compliance for investment advice (SEC)

COMPEL stages: Model, Produce

AI Operational Resilience

Risk classification: high-risk

Operational resilience requirements for AI systems under DORA and domestic operational resilience frameworks, covering availability, recovery, and third-party concentration risk.

Example Systems

  • AI systems supporting critical business services
  • Cloud-hosted AI model inference
  • Third-party AI model dependencies
  • AI in core banking operations
  • AI in payment processing infrastructure

Mitigation Requirements

  • Impact tolerance definition for AI system outages
  • Recovery time and point objectives for AI services
  • AI vendor concentration risk assessment
  • AI system resilience testing (scenario and penetration)
  • DORA incident reporting for AI system failures
  • Exit strategy for critical AI third-party providers

COMPEL stages: Organize, Produce, Evaluate

Compliance Checklist

Prioritized obligations grouped by domain. Each item identifies the regulatory source, the COMPEL stage that owns the work, and the evidence a financial institution should maintain.

Model Inventory & Governance (2)

fcc-001 · Priority: critical · COMPEL stage: Calibrate

Maintain AI/ML model inventory

Catalog all AI/ML models across the organization with risk tiering, ownership, and lifecycle status aligned with SR 11-7 expectations.

Regulatory source: SR 11-7; EU AI Act Article 49; DORA Chapter II

Evidence required: AI model inventory register; Risk tiering methodology; Model ownership assignments
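A register entry of this kind can be sketched as a simple record with risk tiering and a staleness check on independent validation. The field names and the 365-day revalidation window below are illustrative assumptions, not a regulatory schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ModelRecord:
    """One entry in an SR 11-7 style model inventory (field names illustrative)."""
    model_id: str
    owner: str
    use_case: str
    risk_tier: int                      # 1 = highest materiality
    lifecycle_status: str               # e.g. "development", "production", "retired"
    last_validated: Optional[date] = None

    def validation_overdue(self, as_of: date, max_age_days: int = 365) -> bool:
        """Flag entries whose independent validation is missing or stale."""
        if self.last_validated is None:
            return True
        return (as_of - self.last_validated).days > max_age_days

# Hypothetical entry
record = ModelRecord("crm-001", "retail-risk", "credit scoring",
                     risk_tier=1, lifecycle_status="production",
                     last_validated=date(2025, 1, 15))
```

Filtering the register on `validation_overdue` gives the findings-tracking input that the validation checklist item below expects.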

fcc-002 · Priority: critical · COMPEL stage: Calibrate

EU AI Act high-risk system registration

Register all high-risk AI systems in the EU AI Act database, including credit scoring, AML, and insurance underwriting systems.

Regulatory source: EU AI Act Article 49

Evidence required: EU database registration confirmations; High-risk classification decisions; Conformity assessment documentation

Model Validation (1)

fcc-003 · Priority: critical · COMPEL stage: Model

Independent model validation program

Establish independent validation for all material AI models covering conceptual soundness, data integrity, outcome analysis, and ongoing monitoring.

Regulatory source: SR 11-7; Basel Committee AI guidance; FCA SS1/23

Evidence required: Validation policy and procedures; Validation reports per model; Findings tracking and remediation

Fair Lending & Anti-Discrimination (1)

fcc-004 · Priority: critical · COMPEL stage: Evaluate

AI fair lending compliance testing

Test all credit-related AI models for disparate impact across protected classes and document adverse action explanation capability.

Regulatory source: ECOA; Fair Housing Act; EU AI Act Article 10

Evidence required: Fair lending test results; Adverse action explanation samples; Remediation documentation for disparities

Operational Resilience (2)

fcc-005 · Priority: critical · COMPEL stage: Produce

AI system resilience testing

Include AI systems in operational resilience testing program, including scenario testing, threat-led penetration testing, and recovery testing.

Regulatory source: DORA Chapter IV; PRA operational resilience; FRB operational resilience

Evidence required: Resilience testing plans; Test execution results; Remediation actions from test findings

fcc-006 · Priority: critical · COMPEL stage: Evaluate

DORA ICT incident reporting for AI

Classify and report AI system incidents under DORA ICT incident reporting requirements, including availability, integrity, and confidentiality incidents.

Regulatory source: DORA Chapter III

Evidence required: Incident classification taxonomy (AI-specific); Incident reports; Root cause analysis documentation

Third-Party Risk (1)

fcc-007 · Priority: high · COMPEL stage: Organize

AI vendor concentration risk assessment

Assess third-party AI vendor concentration risk across critical business services and maintain exit strategies for critical AI providers.

Regulatory source: DORA Chapter V; SR 13-19; EBA outsourcing guidelines

Evidence required: AI vendor inventory; Concentration risk assessment; Exit strategy documentation

Transparency & Explainability (1)

fcc-008 · Priority: high · COMPEL stage: Model

AI model explainability documentation

Document explainability analysis for all customer-impacting and regulatory AI models, including method selection rationale and limitation disclosure.

Regulatory source: EU AI Act Article 13; SR 11-7; Basel Committee guidance

Evidence required: Explainability methodology documentation; Feature importance analysis; Limitation disclosures

Data Governance (1)

fcc-009 · Priority: high · COMPEL stage: Organize

AI training data governance

Govern training data for AI models including quality assessment, representativeness validation, and bias detection in training datasets.

Regulatory source: EU AI Act Article 10; SR 11-7; Basel data quality standards

Evidence required: Data quality assessment reports; Training data lineage documentation; Representativeness validation results

Consumer Protection (1)

fcc-010 · Priority: high · COMPEL stage: Produce

AI Consumer Duty compliance

Assess AI-driven customer interactions against Consumer Duty requirements (FCA) and fiduciary obligations (SEC) for fair outcomes.

Regulatory source: FCA Consumer Duty; SEC fiduciary rule; EU AI Act Article 14

Evidence required: Consumer outcome testing results; Vulnerable customer identification procedures; Complaint analysis for AI interactions

Trading & Markets (1)

fcc-011 · Priority: high · COMPEL stage: Produce

AI trading system governance

Govern AI trading systems with kill switches, pre-trade risk controls, and regulatory reporting aligned with MiFID II and Reg SCI.

Regulatory source: MiFID II; Reg SCI; SEC market structure rules

Evidence required: Kill switch test results; Pre-trade risk control documentation; Regulatory reporting compliance evidence

AI-Washing Prevention (1)

fcc-012 · Priority: high · COMPEL stage: Calibrate

AI marketing claims substantiation

Substantiate all public claims about AI capabilities to prevent SEC AI-washing enforcement action.

Regulatory source: SEC enforcement precedent; FTC Act Section 5

Evidence required: AI claims inventory; Substantiation documentation per claim; Marketing review approval records

AML/KYC (1)

fcc-013 · Priority: critical · COMPEL stage: Model

AI AML/KYC model governance

Govern AI models used in anti-money laundering, know-your-customer, and sanctions screening with specific attention to false positive management and alert investigation quality.

Regulatory source: BSA/AML regulations; EU AMLD6; FinCEN guidance

Evidence required: AML model validation reports; False positive rate analysis; Alert investigation quality reviews

Payment Security (1)

fcc-014 · Priority: high · COMPEL stage: Produce

PCI-DSS compliance for AI in payments

Ensure AI systems processing cardholder data comply with PCI-DSS v4.0.1 including data minimization, encryption, and service provider assessment.

Regulatory source: PCI-DSS v4.0.1

Evidence required: PCI-DSS scope assessment including AI systems; Encryption validation; Service provider assessment documentation
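The data-minimization verification above can be spot-checked by scanning training corpora for Luhn-valid digit runs that look like primary account numbers. The regex, helper names, and sample text below are illustrative; a real control would also cover formatted, encoded, and tokenized values.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to separate plausible PANs from arbitrary digit runs."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:                  # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

# Unformatted 13-19 digit runs are PAN candidates (illustrative pattern)
PAN_CANDIDATE = re.compile(r"\b\d{13,19}\b")

def find_candidate_pans(text: str):
    """Return Luhn-valid 13-19 digit runs found in a text sample."""
    return [m for m in PAN_CANDIDATE.findall(text) if luhn_valid(m)]
```

Any hit in a training dataset would be evidence of prohibited cardholder data retention and should trigger the PCI-DSS scope reassessment described above.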

Board Oversight (1)

fcc-015 · Priority: high · COMPEL stage: Evaluate

Board reporting on AI model risk

Report material AI model risks to the board risk committee, including model performance, validation findings, and emerging regulatory developments.

Regulatory source: SR 11-7; DORA Chapter II; Corporate governance codes

Evidence required: Board risk committee AI reporting; Material model risk dashboard; Emerging regulation briefings

Governance Structure (1)

fcc-016 · Priority: critical · COMPEL stage: Organize

Financial services AI governance framework

Establish a three-lines-of-defense AI governance framework with designated Senior Management Function accountability for AI outcomes.

Regulatory source: SR 11-7; FCA SMCR; Basel Committee governance principles

Evidence required: AI governance framework document; SMF designation records; Three-lines-of-defense roles and responsibilities

Maturity Assessment Criteria

Five-level maturity scale for financial services AI governance domains. Use this to benchmark current state and plan a realistic target for each capability.

Model Risk Management

AI model risk management maturity

Degree to which AI/ML model risk management aligns with SR 11-7 expectations and industry best practices.

  • Level 1 (Initial): AI models exist outside the enterprise MRM framework
  • Level 2 (Developing): AI models inventoried but not risk-tiered; validation ad hoc
  • Level 3 (Defined): AI models risk-tiered and validated; governance policies documented
  • Level 4 (Managed): Comprehensive MRM for AI with ongoing monitoring, challenger models, and board reporting
  • Level 5 (Optimized): Integrated AI MRM with automated monitoring, continuous validation, and predictive risk indicators

Regulatory Compliance

Multi-jurisdictional AI regulatory readiness

Capability to manage AI compliance across multiple regulatory regimes (EU AI Act, DORA, SEC, FCA, Basel, PCI-DSS).

  • Level 1 (Initial): No systematic tracking of AI-specific financial regulations
  • Level 2 (Developing): Key regulations identified but compliance gaps unquantified
  • Level 3 (Defined): Compliance matrix maintained; gaps identified; remediation planned
  • Level 4 (Managed): Active compliance monitoring with regulatory change management and impact assessment
  • Level 5 (Optimized): Proactive regulatory intelligence; automated compliance verification; regulatory sandbox participation

Operational Resilience

AI operational resilience

Resilience of AI systems supporting critical financial services, including availability, recovery, and vendor management.

  • Level 1 (Initial): AI systems not included in operational resilience framework
  • Level 2 (Developing): AI systems identified in resilience scope; impact tolerances undefined
  • Level 3 (Defined): Impact tolerances set for AI services; recovery plans documented
  • Level 4 (Managed): Regular resilience testing including AI-specific scenarios; vendor exit plans maintained
  • Level 5 (Optimized): Continuous resilience assurance; automated failover; AI vendor concentration actively managed

Fair Outcomes

AI fairness and consumer protection

Assurance that AI-driven financial decisions produce fair outcomes across customer demographics.

  • Level 1 (Initial): No fairness testing for financial AI systems
  • Level 2 (Developing): Basic demographic analysis for select models; no systematic framework
  • Level 3 (Defined): Fairness testing framework applied to all customer-impacting AI; protected class analysis
  • Level 4 (Managed): Continuous fairness monitoring; automated disparity detection; remediation workflows
  • Level 5 (Optimized): Fairness-by-design in AI development; proactive equity assessment; consumer outcome validation

COMPEL Methodology Mappings

How financial services AI governance activities map onto the six COMPEL stages — Calibrate, Organize, Model, Produce, Evaluate, and Learn — including the artifacts and regulatory alignment for each stage.

Calibrate

Financial Services AI Landscape Assessment

Assess the regulatory landscape, inventory AI/ML models, establish risk tiering, and define AI risk appetite for financial services operations.

Artifacts

  • AI model inventory
  • Risk tiering register
  • Regulatory requirements matrix
  • AI risk appetite statement

Regulatory Alignment

  • SR 11-7
  • EU AI Act Article 49
  • DORA Chapter II

Organize

Financial AI Governance Structure

Establish three-lines-of-defense AI governance framework, designate SMF accountability, and implement vendor and data governance.

Artifacts

  • AI governance framework
  • SMF designation records
  • AI vendor management framework
  • Training data governance policy

Regulatory Alignment

  • SR 11-7
  • FCA SMCR
  • DORA Chapter V
  • EU AI Act Article 10

Model

AI Model Validation and Explainability

Validate AI models for accuracy, fairness, explainability, and regulatory compliance. Design bias testing frameworks and documentation standards.

Artifacts

  • Model validation reports
  • Explainability documentation
  • Fair lending analysis
  • AML model validation

Regulatory Alignment

  • SR 11-7
  • EU AI Act Article 13
  • Basel Committee guidance
  • ECOA

Produce

Governed Financial AI Deployment

Deploy financial AI with operational resilience controls, human oversight mechanisms, consumer protection safeguards, and trading system governance.

Artifacts

  • Deployment gate checklist
  • Kill switch procedures
  • Consumer protection controls
  • DORA resilience test results

Regulatory Alignment

  • DORA Chapter IV
  • EU AI Act Article 14
  • MiFID II
  • PCI-DSS

Evaluate

Financial AI Performance and Compliance Monitoring

Monitor AI model performance, fairness metrics, regulatory compliance, and operational resilience across financial operations.

Artifacts

  • Model performance dashboards
  • Fairness monitoring reports
  • DORA incident reports
  • Board risk reporting

Regulatory Alignment

  • SR 11-7
  • EU AI Act Article 9
  • DORA Chapter III
  • FCA Consumer Duty

Learn

Financial AI Continuous Improvement

Capture lessons from AI model performance, regulatory developments, and incident reviews. Update governance practices and workforce AI capability.

Artifacts

  • Model revalidation reports
  • Regulatory change impact assessments
  • Incident post-mortem reviews
  • AI literacy training materials

Regulatory Alignment

  • SR 11-7 ongoing monitoring
  • DORA Chapter VI
  • Basel supervisory expectations