
Stage 4 of 6

Produce

Implement controls, build compliance processes, and produce the operational artifacts needed for day-to-day operations. Turn designs into working processes.

Strategic Objective

Execute workflow redesign, validate deployment readiness through quality gates, and activate monitoring, training, and control systems for production AI operations.

Operational Objective

Implement redesigned workflows with embedded AI capabilities, configure telemetry and monitoring, complete training and adoption activities, and activate all specified controls.

Produce — Stage Flow
  1. Inputs

    • from Model: Validated Model Designs
    • from Model: Data Contracts
    • from Model: Evaluation Criteria
    • Engineering Coding Standards
    • MLOps Platform
    • Deployment Runbooks
  2. Activities (15)

    • Controls library implementation
    • Compliance framework alignment
    • Policy library deployment
    • Workflow builder configuration
    • Evidence collection process setup
    • Stakeholder validation of artifacts
    • Bias testing execution
    • Red teaming execution
    • Monitoring infrastructure build
    • Training delivery and certification
    • MLOps pipeline integration
    • Agent deployment gates and kill-switch configuration
    • Agent monitoring infrastructure setup
    • Vendor onboarding gate execution and AI-BOM validation
    • Supply chain monitoring deployment
  3. Quality Gate — Gate P

    • Controls implemented
    • Evidence collection active
    • Policies published
    • All applicable regulatory requirements identified
    • EU AI Act risk classification confirmed
    • Applicable US state requirements documented
    • Regulatory compliance evidence collection initiated
  4. Outputs (10)

    • Operational control library
    • Framework compliance dashboards
    • Published and attested policies
    • Automated transformation workflows
    • Evidence repository with mapping
    • Audit evidence pack
    • Monitoring dashboard suite
    • Workflow configuration documentation
    • Agent deployment gate records and production readiness checklists
    • Vendor onboarding records and validated AI-BOMs
  5. Handoffs

    • Evaluate: Deployed models
    • Evaluate: Monitoring instrumentation
    • Evaluate: Operational controls and evidence

Inputs

External inputs (3)

  • Engineering Coding Standards

    The organization's standards for code quality, security review, and source control. Produce embeds these into MLOps pipelines so AI controls inherit existing engineering rigor.

    OWASP SAMM · NIST SSDF · Internal engineering standards
  • MLOps Platform

    The deployed platform for model training, deployment, and monitoring. Produce configures controls, evidence capture, and kill-switches against this concrete platform.

    Google MLOps Maturity Model · CD Foundation MLOps SIG
  • Deployment Runbooks

    The standard operating procedures for promoting workloads to production. Produce uses runbooks to define agent deployment gates, kill-switch procedures, and incident playbooks.

    Google SRE Workbook · ITIL 4 Service Transition
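The deployment gates and kill-switch procedures these runbooks define can be sketched as a minimal promotion check. This is an illustrative sketch only: the names (`AgentRelease`, `REQUIRED_GATES`) and the gate list are assumptions, not the API of any particular MLOps platform.

```python
from dataclasses import dataclass, field

# Hypothetical gate list; a real runbook would enumerate its own gates.
REQUIRED_GATES = ("bias_testing", "red_teaming", "kill_switch_verified")

@dataclass
class AgentRelease:
    """Sketch of a release candidate tracked through runbook promotion gates."""
    name: str
    passed_gates: set = field(default_factory=set)
    kill_switch_engaged: bool = False  # emergency stop set by operators

    def can_promote(self) -> bool:
        # Promote only if every required gate passed and no kill switch is set.
        if self.kill_switch_engaged:
            return False
        return all(g in self.passed_gates for g in REQUIRED_GATES)

release = AgentRelease("support-agent-v2")
release.passed_gates.update(REQUIRED_GATES)
print(release.can_promote())  # True: all gates passed, no kill switch
release.kill_switch_engaged = True
print(release.can_promote())  # False: kill switch halts promotion
```

The key design point is that the kill switch overrides every other signal, which matches the runbook intent of an unconditional stop path.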

Handoff inputs from prior stages (3)

  • Validated Model Designs

    from Model

    The approved model architectures, data contracts, and risk rubrics from Model. Produce uses these as the build specification for controls, MLOps pipelines, and evidence collection.

    COMPEL Stage — Model
  • Data Contracts

    from Model

    The data interface and quality contracts defined in Model. Produce uses contracts to wire automated quality gates and lineage capture into the MLOps pipeline.

    COMPEL Stage — Model
  • Evaluation Criteria

    from Model

    The acceptance, fairness, and performance thresholds defined in Model. Produce builds bias testing, red teaming, and gate checks against these criteria so deployment decisions are objective.

    COMPEL Stage — Model
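Building gate checks against the Model-stage thresholds can look like the sketch below. The metric names, directions, and values are invented for illustration; in practice they would come from the Evaluation Criteria handoff.

```python
# Hypothetical thresholds standing in for the Evaluation Criteria handoff.
# "min" metrics must meet or exceed the threshold; "max" metrics must not exceed it.
CRITERIA = {
    "accuracy":               {"direction": "min", "threshold": 0.90},
    "demographic_parity_gap": {"direction": "max", "threshold": 0.05},
    "p95_latency_ms":         {"direction": "max", "threshold": 250},
}

def gate_check(metrics: dict) -> list:
    """Return the list of criteria a candidate fails; an empty list means pass."""
    failures = []
    for name, rule in CRITERIA.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: metric missing")
        elif rule["direction"] == "min" and value < rule["threshold"]:
            failures.append(f"{name}: {value} below {rule['threshold']}")
        elif rule["direction"] == "max" and value > rule["threshold"]:
            failures.append(f"{name}: {value} above {rule['threshold']}")
    return failures

print(gate_check({"accuracy": 0.93, "demographic_parity_gap": 0.02,
                  "p95_latency_ms": 180}))  # []
```

Because the check returns the failing criteria rather than a bare boolean, the same function can feed both the deployment decision and the evidence repository.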

Activities

  • Controls library implementation
  • Compliance framework alignment
  • Policy library deployment
  • Workflow builder configuration
  • Evidence collection process setup
  • Stakeholder validation of artifacts
  • Bias testing execution
  • Red teaming execution
  • Monitoring infrastructure build
  • Training delivery and certification
  • MLOps pipeline integration
  • Agent deployment gates and kill-switch configuration
  • Agent monitoring infrastructure setup
  • Vendor onboarding gate execution and AI-BOM validation
  • Supply chain monitoring deployment
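The vendor onboarding gate with AI-BOM validation can be sketched as a completeness check over each component record. The required-field list here is an assumption loosely modeled on software-BOM practice (e.g. CycloneDX); it is not a formal AI-BOM schema.

```python
# Hypothetical field list for an AI-BOM component record.
REQUIRED_FIELDS = ("name", "version", "supplier", "model_license", "training_data_origin")

def validate_ai_bom(bom: list) -> dict:
    """Map each component to its missing fields; an empty dict means the BOM validates."""
    problems = {}
    for i, component in enumerate(bom):
        missing = [f for f in REQUIRED_FIELDS if not component.get(f)]
        if missing:
            problems[component.get("name", f"component[{i}]")] = missing
    return problems

bom = [
    {"name": "embedding-model", "version": "1.4", "supplier": "AcmeAI",
     "model_license": "proprietary", "training_data_origin": "vendor-curated"},
    {"name": "reranker", "version": "0.9", "supplier": "AcmeAI"},  # incomplete record
]
print(validate_ai_bom(bom))  # {'reranker': ['model_license', 'training_data_origin']}
```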

Outputs & Deliverables

  • Operational control library
  • Framework compliance dashboards
  • Published and attested policies
  • Automated transformation workflows
  • Evidence repository with mapping
  • Audit evidence pack
  • Monitoring dashboard suite
  • Workflow configuration documentation
  • Agent deployment gate records and production readiness checklists
  • Vendor onboarding records and validated AI-BOMs

Key Questions

  • Are our controls effectively mitigating identified risks?
  • How do we track compliance across frameworks?
  • What evidence do we need for audit readiness?
  • Are all AI systems mapped to their controlling policies?
  • Are all agents tested, monitored, and kill-switch verified before production?
  • Are all vendor AI components onboarded with validated AI-BOMs?

Gate / Exit Criteria

  • All workflows redesigned and implemented per specifications
  • Deployment readiness gate passed
  • Telemetry and monitoring fully configured and tested
  • Training completed for all impacted user groups
  • All specified controls activated and verified
  • Evidence collection processes operational
  • Gate P review passed
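The Gate P review itself can be automated as an all-criteria check. A minimal sketch, assuming each exit criterion is tracked as a boolean in some evidence system (the criterion keys and sample values below are illustrative):

```python
# Hypothetical tracked state for the Gate P exit criteria.
GATE_P_CRITERIA = {
    "workflows_implemented": True,
    "deployment_readiness_passed": True,
    "telemetry_configured": True,
    "training_completed": True,
    "controls_activated": True,
    "evidence_collection_operational": False,  # example: still pending
}

def gate_p_status(criteria: dict) -> tuple:
    """Pass only when every exit criterion is satisfied; also report blockers."""
    blockers = [name for name, met in criteria.items() if not met]
    return (not blockers, blockers)

passed, blockers = gate_p_status(GATE_P_CRITERIA)
print(passed, blockers)  # False ['evidence_collection_operational']
```

Reporting the blocking criteria alongside the pass/fail result keeps the gate review actionable rather than a bare verdict.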

Articles from the Body of Knowledge that are tagged to the Produce stage or are lifecycle-wide and apply here.

