Stage 6 of 6
Learn
Extract insights from transformation operations, measure ROI, and drive continuous improvement. Use data and experience to evolve your responsible AI program over time. Learn feeds back into Calibrate, forming a continuous improvement loop.
Strategic Objective
Extract actionable insights from evaluation findings to update policies, capture reusable patterns, update benchmarks, and make informed scaling, retirement, or redesign decisions.
Operational Objective
Produce updated policy documents, pattern library entries, updated benchmark targets, scaling recommendations, and retirement/redesign decisions for the next COMPEL cycle.
Inputs
- from evaluate: Evaluation Reports
- from evaluate: Incident Logs
- from evaluate: Drift Findings
- from evaluate: Audit Findings and Gate Decisions
- Retrospective Cadence
- Learning Loop Forum
- Knowledge Management Practices
Activities (9)
- Metrics dashboard monitoring
- Incident management and post-incident review
- Augmentation ROI measurement
- Continuous improvement cycles
- Change detection and response
- Knowledge base curation
- ROI measurement and reporting
- Calibrate cycle feed preparation
- Knowledge management updates
Quality Gate — Gate L
- Metrics analyzed
- Improvement plan created
- Knowledge base updated
Outputs (9)
- KPI/KRI trend reports
- Incident reports with lessons learned
- ROI analysis and value reports
- Improvement initiative tracker
- Drift and change detection alerts
- Model retirement lessons captured
- AI Performance Dashboard
- Continuous Improvement Register
- Next-Cycle Calibrate Inputs
Handoffs
- → Calibrate: Improvement recommendations and updated baselines
- → Calibrate: Next-cycle Calibrate inputs
Inputs
External inputs (3)
Retrospective Cadence
The organization's standard rhythm and format for retrospectives and post-incident reviews. Learn uses this cadence so AI-specific learning loops integrate with existing agile and operations practices.
References: Scrum Guide (Sprint Retrospective); Google SRE Postmortem Culture
Learning Loop Forum
A standing cross-functional forum where AI lessons learned are shared and acted on. Learn uses this forum to socialize improvement recommendations and to keep the COMPEL cycle visibly continuous rather than annual.
References: SAFe Inspect & Adapt; Communities of Practice
Knowledge Management Practices
The organization's standards for capturing, indexing, and reusing institutional knowledge. Learn uses these so AI lessons land in systems people already consult, not in orphaned documents.
References: ISO 30401 (Knowledge Management); PMBOK 7 Lessons Learned Repository
Handoff inputs from prior stages (4)
Evaluation Reports
From Evaluate. The gate review decisions, conformity assessments, and audit findings produced in Evaluate. Learn uses these to extract patterns and feed measurable improvements into the next Calibrate cycle.
COMPEL Stage — Evaluate
Incident Logs
From Evaluate. The catalog of AI incidents, near-misses, and operational events captured during Evaluate. Learn analyzes these to find systemic root causes rather than blame individual operators.
COMPEL Stage — Evaluate
Drift Findings
From Evaluate. Data drift, concept drift, and behavior drift signals raised during Evaluate. Learn uses drift evidence to drive retraining decisions, model retirement, and updated risk thresholds.
COMPEL Stage — Evaluate
Audit Findings and Gate Decisions
From Evaluate. The remediation backlog generated by audits and gate reviews during Evaluate. Learn uses these to prioritize continuous improvement initiatives and update knowledge base content.
COMPEL Stage — Evaluate
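The drift findings handed off above can be grounded in a concrete check. One common screening statistic is the Population Stability Index (PSI); the implementation below and the 0.2 alert threshold are illustrative assumptions, not a COMPEL-prescribed method:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Screen for data drift by comparing a reference distribution
    ('expected', e.g. training-time feature values) against live data
    ('actual'). By common convention, PSI > 0.2 is treated as a drift alert.
    Note: live values outside the reference range fall outside the bins,
    which is acceptable for a screening check."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) when a bucket is empty.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```

In practice a check like this would run per feature and per model score on a schedule, with alerts routed into the incident logs that Evaluate already maintains.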
Key Questions
- ? What is the ROI of our responsible AI investment?
- ? What patterns emerge from incidents?
- ? How can we improve transformation effectiveness?
- ? Are our AI risk indicators trending in the right direction?
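The ROI question above becomes answerable once each initiative's benefits and run costs are tracked side by side. The field names and the basic ROI formula in this sketch are illustrative assumptions, not a prescribed COMPEL artifact:

```python
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    annual_benefit: float  # e.g. hours saved x loaded rate, revenue lift, losses avoided
    annual_cost: float     # e.g. inference, licences, oversight, retraining

    @property
    def roi(self) -> float:
        # Classic ROI: net benefit relative to cost.
        return (self.annual_benefit - self.annual_cost) / self.annual_cost

def portfolio_roi(initiatives: list[AIInitiative]) -> float:
    """Aggregate ROI across the AI portfolio for the value report."""
    benefit = sum(i.annual_benefit for i in initiatives)
    cost = sum(i.annual_cost for i in initiatives)
    return (benefit - cost) / cost
```

Reporting both per-initiative and portfolio-level ROI keeps scaling and retirement decisions honest: a strong portfolio number can hide individual systems that no longer pay their way.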
Gate / Exit Criteria
- ⚠ Policy updates drafted and queued for approval
- ⚠ Reusable patterns captured in pattern library
- ⚠ Benchmark targets updated based on evaluation data
- ⚠ Scaling decisions documented with business case
- ⚠ Retirement or redesign decisions recorded for underperforming systems
- ⚠ Continuous improvement backlog updated and prioritized
- ⚠ Gate L review passed — cycle handoff to next Calibrate
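The key question about whether risk indicators are trending in the right direction can be made operational with a simple slope check over recent KRI readings. This least-squares sketch assumes equally spaced readings; the interpretation convention is an illustrative choice:

```python
def kri_trend(values):
    """Least-squares slope over equally spaced KRI readings.
    For a risk indicator, a negative slope means the metric is
    improving (trending down); a positive slope warrants review."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den
```

A dashboard can apply this per KRI over a rolling window and flag any indicator whose slope crosses an agreed threshold as input to the next Calibrate cycle.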
Related Articles (150)
Articles from the Body of Knowledge that are tagged to the Learn stage or are lifecycle-wide and apply here.
- M1.1: The AI Transformation Imperative
- M1.1: Defining AI Transformation vs. AI Adoption
- M1.1: The Enterprise AI Maturity Spectrum
- M1.1: Introduction to the COMPEL Framework
- M1.1: The Four Pillars of AI Transformation
- M1.1: AI Transformation Anti-Patterns
- M1.1: The Business Value Chain of AI Transformation
- M1.1: Stakeholder Landscape in AI Transformation
- M1.1: AI Transformation and Organizational Culture
- M1.1: Ethical Foundations of Enterprise AI
- M1.2: Learn: Capturing and Applying Knowledge
- M1.2: Stage Gate Decision Framework
- M1.2: The COMPEL Cycle: Iteration and Continuous Improvement
- M1.2: Mapping COMPEL to Your Organization
- M1.2: Integration with Existing Frameworks
- M1.2: Evaluating Agentic AI: Goal Achievement and Behavioral Assessment
- M1.2: Agent Learning, Memory, and Adaptation: Governance Implications
- M1.2: Transformation Enablers
- M1.2: Mandatory Artifacts and Evidence Management Across the COMPEL Cycle
- M1.2: The COMPEL Operating Model: Roles, RACI, and Decision Rights
- M1.2: Entry and Exit Criteria: Stage Gate Readiness Across the COMPEL Cycle
- M1.2: Creating the AI Operating Model Blueprint
- M1.2: Producing the Readiness Assessment Report
- M1.2: Building the Control Requirements Matrix
Related Knowledge Domains
- AI Strategy & Vision (61 articles)
- Transformation Design & Program Architecture (44 articles)
- AI Governance & Compliance (38 articles)
- Framework Interoperability & Standards (32 articles)
- Talent & Capability Development (21 articles)
- Organizational Change & Culture (20 articles)
- Enterprise Operating Model & Portfolio Leadership (20 articles)
- Execution & Delivery Excellence (13 articles)