Stage 4 of 6
Produce
Implement controls, build compliance processes, and produce the artifacts needed for day-to-day operation. Turn designs into working processes.
Strategic Objective
Execute workflow redesign, validate deployment readiness through quality gates, and activate monitoring, training, and control systems for production AI operations.
Operational Objective
Implement redesigned workflows with embedded AI capabilities, configure telemetry and monitoring, complete training and adoption activities, and activate all specified controls.
Inputs
- from Model: Validated Model Designs
- from Model: Data Contracts
- from Model: Evaluation Criteria
- Engineering Coding Standards
- MLOps Platform
- Deployment Runbooks
Activities (15)
- Controls library implementation
- Compliance framework alignment
- Policy library deployment
- Workflow builder configuration
- Evidence collection process setup
- Stakeholder validation of artifacts
- Bias testing execution
- Red teaming execution
- Monitoring infrastructure build
- Training delivery and certification
- MLOps pipeline integration
- Agent deployment gates and kill-switch configuration
- Agent monitoring infrastructure setup
- Vendor onboarding gate execution and AI-BOM validation
- Supply chain monitoring deployment
Quality Gate — Gate P
- Controls implemented
- Evidence collection active
- Policies published
- All applicable regulatory requirements identified
- EU AI Act risk classification confirmed
- Applicable US state requirements documented
- Regulatory compliance evidence collection initiated
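Gate reviews like Gate P above are often backed by a simple automated checklist: the gate passes only when every criterion passes, and each criterion points at its evidence. A minimal sketch (criterion names and evidence IDs here are hypothetical examples, not prescribed by the framework):

```python
# Illustrative quality-gate evaluator: all-or-nothing pass with a failure list
# for remediation. Evidence references are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    passed: bool
    evidence_ref: str  # pointer into the evidence repository

def evaluate_gate(criteria: list[Criterion]) -> tuple[bool, list[str]]:
    """A gate passes only when every criterion passes; failures are listed."""
    failures = [c.name for c in criteria if not c.passed]
    return (not failures, failures)

gate_p = [
    Criterion("Controls implemented", True, "EV-104"),
    Criterion("Evidence collection active", True, "EV-107"),
    Criterion("Policies published", False, "EV-112"),
]
passed, failures = evaluate_gate(gate_p)
print(passed, failures)  # False ['Policies published']
```

Keeping the evidence reference on each criterion means the gate record doubles as an index into the audit evidence pack.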
Outputs (10)
- Operational control library
- Framework compliance dashboards
- Published and attested policies
- Automated transformation workflows
- Evidence repository with mapping
- Audit evidence pack
- Monitoring dashboard suite
- Workflow configuration documentation
- Agent deployment gate records and production readiness checklists
- Vendor onboarding records and validated AI-BOMs
Handoffs
- → Evaluate: Deployed models
- → Evaluate: Monitoring instrumentation
- → Evaluate: Operational controls and evidence
Inputs
External inputs (3)
Engineering Coding Standards
The organization's standards for code quality, security review, and source control. Produce embeds these into MLOps pipelines so AI controls inherit existing engineering rigor.
OWASP SAMM · NIST SSDF · Internal engineering standards
MLOps Platform
The deployed platform for model training, deployment, and monitoring. Produce configures controls, evidence capture, and kill-switches against this concrete platform.
Google MLOps Maturity Model · CD Foundation MLOps SIG
Deployment Runbooks
The standard operating procedures for promoting workloads to production. Produce uses runbooks to define agent deployment gates, kill-switch procedures, and incident playbooks.
Google SRE Workbook · ITIL 4 Service Transition
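A runbook-defined kill-switch procedure can be reduced to a small, testable control: a guarded dispatch that checks a disable flag before every agent invocation. This is a hedged sketch, assuming a feature-flag-style disable path; the class and incident reference are hypothetical, not a prescribed implementation:

```python
# Illustrative agent kill-switch: guarded dispatch checks a disable flag
# before each call. Flag store and agent interface are hypothetical.
import threading

class KillSwitch:
    def __init__(self):
        self._disabled = threading.Event()
        self._reason = ""

    def trip(self, reason: str) -> None:
        """Invoked by the incident runbook; takes effect on the next dispatch."""
        self._reason = reason
        self._disabled.set()

    def guard(self, agent_call, *args, **kwargs):
        if self._disabled.is_set():
            raise RuntimeError(f"agent disabled by kill-switch: {self._reason}")
        return agent_call(*args, **kwargs)

switch = KillSwitch()
print(switch.guard(lambda x: x * 2, 21))  # 42
switch.trip("incident INC-123: runaway tool usage")
# Any further switch.guard(...) call now raises RuntimeError.
```

Routing every agent invocation through the guard is what makes the kill-switch verifiable at the deployment gate: it can be tripped and confirmed in a pre-production drill.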
Handoff inputs from prior stages (3)
Validated Model Designs
from Model: The approved model architectures, data contracts, and risk rubrics from Model. Produce uses these as the build specification for controls, MLOps pipelines, and evidence collection.
COMPEL Stage — Model
Data Contracts
from Model: The data interface and quality contracts defined in Model. Produce uses contracts to wire automated quality gates and lineage capture into the MLOps pipeline.
COMPEL Stage — Model
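Wiring a data contract into an automated quality gate can be as simple as validating each incoming batch against the contract's declared schema and thresholds before it enters the pipeline. A sketch under assumed contract fields (the column names and null-rate threshold are hypothetical examples):

```python
# Illustrative data-contract gate: rejects batches that violate the declared
# schema or null-rate threshold. Contract structure is a hypothetical example.
contract = {
    "required_columns": {"customer_id", "event_ts", "amount"},
    "max_null_rate": 0.01,
}

def check_batch(rows: list[dict]) -> list[str]:
    """Return contract violations for a batch; an empty list means it may proceed."""
    if not rows:
        return ["empty batch"]
    violations = []
    cols = set(rows[0])
    missing = contract["required_columns"] - cols
    if missing:
        violations.append(f"missing columns: {sorted(missing)}")
    for col in contract["required_columns"] & cols:
        null_rate = sum(r[col] is None for r in rows) / len(rows)
        if null_rate > contract["max_null_rate"]:
            violations.append(f"{col}: null rate {null_rate:.2%} exceeds threshold")
    return violations

batch = [{"customer_id": 1, "event_ts": "2024-01-01", "amount": None},
         {"customer_id": 2, "event_ts": "2024-01-01", "amount": 10.0}]
print(check_batch(batch))  # reports the amount null-rate violation
```

Running this check at ingestion, and logging its result, is one way the same gate produces both the quality control and its lineage evidence.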
Evaluation Criteria
from Model: The acceptance, fairness, and performance thresholds defined in Model. Produce builds bias testing, red teaming, and gate checks against these criteria so deployment decisions are objective.
COMPEL Stage — Model
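Deployment decisions become objective when the Model-stage thresholds are encoded directly into the gate check rather than debated at review time. A minimal sketch; the metric names, directions, and thresholds below are hypothetical examples, not the framework's prescribed criteria:

```python
# Illustrative gate check against Model-stage evaluation criteria.
# Metrics, bounds, and directions ("min"/"max") are hypothetical examples.
criteria = {
    "accuracy":               {"min": 0.90},
    "demographic_parity_gap": {"max": 0.05},  # a bias-testing output
    "p95_latency_ms":         {"max": 300},
}

def gate_check(measured: dict) -> list[str]:
    """Compare measured metrics against declared bounds; empty list means pass."""
    failures = []
    for metric, bound in criteria.items():
        value = measured.get(metric)
        if value is None:
            failures.append(f"{metric}: not measured")
        elif "min" in bound and value < bound["min"]:
            failures.append(f"{metric}: {value} below minimum {bound['min']}")
        elif "max" in bound and value > bound["max"]:
            failures.append(f"{metric}: {value} above maximum {bound['max']}")
    return failures

print(gate_check({"accuracy": 0.93,
                  "demographic_parity_gap": 0.08,
                  "p95_latency_ms": 210}))
# ['demographic_parity_gap: 0.08 above maximum 0.05']
```

Treating an unmeasured metric as a failure, not a pass, keeps the gate honest when telemetry is incomplete.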
Activities
- → Controls library implementation
- → Compliance framework alignment
- → Policy library deployment
- → Workflow builder configuration
- → Evidence collection process setup
- → Stakeholder validation of artifacts
- → Bias testing execution
- → Red teaming execution
- → Monitoring infrastructure build
- → Training delivery and certification
- → MLOps pipeline integration
- → Agent deployment gates and kill-switch configuration
- → Agent monitoring infrastructure setup
- → Vendor onboarding gate execution and AI-BOM validation
- → Supply chain monitoring deployment
Outputs & Deliverables
- ✓ Operational control library
- ✓ Framework compliance dashboards
- ✓ Published and attested policies
- ✓ Automated transformation workflows
- ✓ Evidence repository with mapping
- ✓ Audit evidence pack
- ✓ Monitoring dashboard suite
- ✓ Workflow configuration documentation
- ✓ Agent deployment gate records and production readiness checklists
- ✓ Vendor onboarding records and validated AI-BOMs
Key Questions
- ? Are our controls effectively mitigating identified risks?
- ? How do we track compliance across frameworks?
- ? What evidence do we need for audit readiness?
- ? Are all AI systems mapped to their controlling policies?
- ? Are all agents tested, monitored, and kill-switch verified before production?
- ? Are all vendor AI components onboarded with validated AI-BOMs?
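The vendor AI-BOM question above can be answered mechanically at the onboarding gate by validating that every declared component carries the fields the gate requires. A sketch, loosely inspired by SBOM practice; the required field set and component records are hypothetical examples:

```python
# Illustrative AI-BOM validation: each vendor component must declare the
# fields the onboarding gate requires. The field list is a hypothetical example.
REQUIRED_FIELDS = {"name", "version", "license", "training_data_provenance"}

def validate_ai_bom(components: list[dict]) -> list[str]:
    """Return one issue per component that is missing required fields."""
    issues = []
    for i, comp in enumerate(components):
        missing = REQUIRED_FIELDS - comp.keys()
        if missing:
            issues.append(
                f"component {i} ({comp.get('name', '?')}): missing {sorted(missing)}"
            )
    return issues

bom = [
    {"name": "vendor-llm", "version": "2.1", "license": "proprietary",
     "training_data_provenance": "vendor attestation doc V-77"},
    {"name": "embedding-model", "version": "0.9"},  # incomplete entry
]
print(validate_ai_bom(bom))  # flags the embedding-model entry
```

Storing the validated BOM alongside the onboarding record gives supply chain monitoring a concrete baseline to diff against when a vendor ships an update.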
Gate / Exit Criteria
- ⚠ All workflows redesigned and implemented per specifications
- ⚠ Deployment readiness gate passed
- ⚠ Telemetry and monitoring fully configured and tested
- ⚠ Training completed for all impacted user groups
- ⚠ All specified controls activated and verified
- ⚠ Evidence collection processes operational
- ⚠ Gate P review passed
Related Articles (172)
Articles from the Body of Knowledge that are tagged to the Produce stage or are lifecycle-wide and apply here.
- M1.1 · The AI Transformation Imperative
- M1.1 · Defining AI Transformation vs. AI Adoption
- M1.1 · The Enterprise AI Maturity Spectrum
- M1.1 · Introduction to the COMPEL Framework
- M1.1 · The Four Pillars of AI Transformation
- M1.1 · AI Transformation Anti-Patterns
- M1.1 · The Business Value Chain of AI Transformation
- M1.1 · Stakeholder Landscape in AI Transformation
- M1.1 · AI Transformation and Organizational Culture
- M1.1 · Ethical Foundations of Enterprise AI
- M1.2 · Produce: Executing the Transformation
- M1.2 · Stage Gate Decision Framework
- M1.2 · The COMPEL Cycle: Iteration and Continuous Improvement
- M1.2 · Mapping COMPEL to Your Organization
- M1.2 · Integration with Existing Frameworks
- M1.2 · Evaluating Agentic AI: Goal Achievement and Behavioral Assessment
- M1.2 · Agent Learning, Memory, and Adaptation: Governance Implications
- M1.2 · Transformation Enablers
- M1.2 · Mandatory Artifacts and Evidence Management Across the COMPEL Cycle
- M1.2 · The COMPEL Operating Model: Roles, RACI, and Decision Rights
- M1.2 · Entry and Exit Criteria: Stage Gate Readiness Across the COMPEL Cycle
- M1.2 · Creating the AI Operating Model Blueprint
- M1.2 · Producing the Readiness Assessment Report
- M1.2 · Building the Control Requirements Matrix
Related Knowledge Domains
- AI Strategy & Vision (51 articles)
- Transformation Design & Program Architecture (44 articles)
- Organizational Change & Culture (40 articles)
- AI Governance & Compliance (38 articles)
- Enterprise Operating Model & Portfolio Leadership (30 articles)
- Execution & Delivery Excellence (26 articles)
- Technology Architecture & Infrastructure (24 articles)
- Talent & Capability Development (20 articles)