Stage 3 of 6

Model

Design and develop your transformation frameworks, policies, and AI system registry: the models and blueprints that define how AI is managed responsibly in your specific context.

Strategic Objective

Classify AI models and systems by risk, define human validation rules and explainability requirements, and establish the control framework that governs AI behavior.

Operational Objective

Produce validated system classifications, explainability specifications, control requirement documents, and agent autonomy classifications for every registered AI system.
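The operational objective above asks for a classification record per registered AI system. A minimal sketch of what one registry entry might carry is below; the field names, tier labels, and `needs_human_validation` rule are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

class AutonomyTier(Enum):
    ASSISTIVE = 0    # human initiates and approves every action
    SUPERVISED = 1   # agent acts, human reviews before effect
    AUTONOMOUS = 2   # agent acts within bounds, human audits after

@dataclass
class RegistryEntry:
    """One AI System Registry record (hypothetical shape)."""
    system_id: str
    owner: str
    risk_tier: RiskTier
    autonomy_tier: AutonomyTier
    human_validation_required: bool
    explainability_audiences: list = field(default_factory=list)

    def needs_human_validation(self) -> bool:
        # Assumed rule: high-risk systems always require human validation,
        # regardless of what the owner declared.
        return self.human_validation_required or self.risk_tier is RiskTier.HIGH

entry = RegistryEntry(
    system_id="credit-scoring-v2",          # hypothetical system
    owner="risk-analytics",
    risk_tier=RiskTier.HIGH,
    autonomy_tier=AutonomyTier.SUPERVISED,
    human_validation_required=False,
    explainability_audiences=["regulator", "affected customer"],
)
print(entry.needs_human_validation())       # prints True: HIGH tier forces validation
```

Keeping the tiers as enums (rather than free-text strings) makes the registry queryable at the gate: "all systems without a tier" becomes a simple filter instead of a manual audit.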

Model — Stage Flow
  1. Inputs

    • from organize: Target Operating Model
    • from organize: Governance Structure and Committee Charters
    • from organize: Capability Roadmap
    • Data Estate Inventory
    • Model Registry Standards
    • ML Platform Reference Architecture
  2. Activities (19)

    • AI System Registry design and population
    • Policy framework development
    • Risk assessment framework creation
    • Vendor and third-party AI evaluation
    • Decision flow documentation
    • Bias testing framework design
    • Red teaming protocol design
    • Human-AI collaboration modeling
    • Incident response procedure design
    • Foundation model selection governance
    • Provider vs deployer obligation mapping
    • Model card requirements and verification
    • Fine-tuning governance policies
    • Training data governance requirements
    • Model lifecycle management (versioning, deprecation, replacement)
    • Agent interaction policy and trust boundary design
    • A2A governance rules and autonomy tier assignments
    • Vendor risk scoring model and AI-BOM template design
    • Model provenance requirements definition
  3. Quality Gate — Gate M

    • Design documents approved
    • Risk framework defined
    • AI system registry populated
  4. Outputs (15)

    • Comprehensive AI system inventory
    • Responsible AI policy library
    • Risk assessment templates and rubrics
    • Vendor risk assessment criteria
    • Decision documentation standards
    • Human-AI Collaboration Blueprints
    • Data Readiness Reports per AI system
    • Decision Log Templates
    • Foundation Model Selection Criteria scorecard
    • Provider-Deployer obligation mapping
    • Model Card templates (EU AI Act Article 53 compliant)
    • Fine-Tuning Governance Policy
    • Model Lifecycle Management Plan
    • Agent interaction policies and trust boundary specifications
    • Vendor risk scoring model and AI-BOM template specification
  5. Handoffs

    • Produce: Validated model designs
    • Produce: Data contracts
    • Produce: Evaluation criteria
    • Produce: Policy framework and risk rubrics
    • Produce: AI system registry
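Several of the activities above (risk assessment framework creation, A2A autonomy tier assignments) converge on a scoring rubric. A deliberately toy sketch of such a rubric follows; the three inputs and the tier cut-offs are assumptions for illustration — a real rubric would also weight regulatory exposure, data sensitivity, and affected-population size.

```python
def assign_risk_tier(impact: str, reversibility: str, autonomy_tier: int) -> str:
    """Toy rubric: impact and reversibility in {"low", "medium", "high"},
    autonomy_tier 0-2 (assistive/supervised/autonomous). Thresholds are
    illustrative, not taken from any published framework."""
    score = {"low": 0, "medium": 1, "high": 2}
    total = score[impact] + score[reversibility] + autonomy_tier
    if total >= 5:
        return "high"
    if total >= 3:
        return "limited"
    return "minimal"

# An irreversible, high-impact, fully autonomous system lands in the top tier.
print(assign_risk_tier("high", "high", 2))   # prints high
```

Encoding the rubric as a function keeps tier assignments reproducible: the same inputs always yield the same tier, which is what the Gate M exit criterion "all registered AI systems classified by risk tier" implicitly requires.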

Inputs

External inputs (3)

  • Data Estate Inventory

    A current inventory of data sources, lineage, and classification. Model uses this to define data contracts, training data governance, and AI-BOM provenance requirements.

    DAMA-DMBOK · DCAM · GDPR Article 30 Records of Processing
  • Model Registry Standards

    The organization's standards for cataloging, versioning, and documenting machine learning models. Model uses these to design AI System Registry and Model Card requirements that integrate with existing MLOps tooling.

    MLflow Model Registry · Google Model Cards · EU AI Act Article 53
  • ML Platform Reference Architecture

    The organization's reference architecture for machine learning and generative AI platforms. Model uses this to ground policy controls and risk rubrics in the platform reality engineers actually deploy on.

    TOGAF Phase C (Application & Data Architecture) · AWS/GCP/Azure ML reference architectures
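The Data Estate Inventory above feeds the AI-BOM provenance requirements. A sketch of one AI-BOM component record is shown below; the field names are hypothetical and only loosely follow CycloneDX/SPDX component conventions, not a published schema.

```python
import json
from hashlib import sha256

def aibom_entry(name, version, supplier, license_id, training_data_refs):
    """Build one hypothetical AI-BOM component record. training_data_refs
    are lineage references assumed to resolve against the Data Estate
    Inventory."""
    record = {
        "name": name,
        "version": version,
        "supplier": supplier,
        "license": license_id,
        "training_data": sorted(training_data_refs),
    }
    # A content hash over the canonical JSON form lets downstream consumers
    # detect records that were tampered with or silently changed.
    record["integrity"] = sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Hypothetical component and data reference, for illustration only.
entry = aibom_entry("sentiment-head", "1.4.0", "acme-ml",
                    "Apache-2.0", ["s3://corpus/reviews-2023"])
```

Hashing the canonicalized record is the same design move software BOMs use: provenance claims are only auditable if the record itself is tamper-evident.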

Handoff inputs from prior stages (3)

  • Target Operating Model

    from Organize

    The CoE structure, federation strategy, and operating model defined in Organize. Model uses this to assign policy ownership, scope the AI system registry, and align frameworks to who will run them.

    COMPEL Stage — Organize
  • Governance Structure and Committee Charters

    from Organize

    The committees, escalation paths, and decision rights stood up in Organize. Model uses this so policies and risk frameworks have a real approving body and a defined path for exceptions.

    COMPEL Stage — Organize
  • Capability Roadmap

    from Organize

    The phased build-out of AI capabilities planned during Organize. Model uses the roadmap to sequence policy creation, registry population, and risk framework rollouts so they land just-in-time.

    COMPEL Stage — Organize

Key Questions

  • What policies do we need for responsible AI?
  • How should we classify and register AI systems?
  • What risk frameworks apply to our context?
  • How do we handle third-party AI in our governance model?
  • How do we govern foundation model selection and fine-tuning?
  • What trust boundaries and A2A governance rules apply to our agentic systems?
  • What provenance and AI-BOM standards do we require for supply chain components?
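The trust-boundary question above can be made concrete as a policy check on agent-to-agent (A2A) calls. The sketch below assumes two trust zones and a hypothetical cross-zone allowlist; production rules would also key on agent identity, autonomy tier, and the action's blast radius.

```python
# Hypothetical rule table: internal agents may only perform read-only
# actions against vendor-hosted agents. All names here are illustrative.
ALLOWED_CROSS_ZONE = {
    ("internal", "vendor"): {"read_only"},
}

def a2a_allowed(caller_zone: str, callee_zone: str, action: str) -> bool:
    if caller_zone == callee_zone:
        return True   # same trust zone: allowed by default in this sketch
    # Crossing a trust boundary requires an explicit rule for the action.
    return action in ALLOWED_CROSS_ZONE.get((caller_zone, callee_zone), set())

print(a2a_allowed("internal", "vendor", "read_only"))   # prints True
print(a2a_allowed("internal", "vendor", "execute"))     # prints False
```

Note the rule table is directional: permitting `internal → vendor` reads says nothing about `vendor → internal` calls, which mirrors how trust boundaries are usually specified.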

Gate / Exit Criteria

  • All registered AI systems classified by risk tier
  • Human validation rules defined for high-risk systems
  • Explainability requirements documented per system and audience
  • Control requirements matrix complete with evidence specifications
  • Agent autonomy levels classified for all autonomous systems
  • Gate M review passed with design documents approved
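The exit criteria above are mostly mechanical checks over registry records, which means they can be partially automated. A sketch under an assumed minimal record shape follows; the human review of design documents, of course, stays manual.

```python
def gate_m_passed(systems):
    """Return True only when every registry record satisfies the
    mechanical Gate M criteria. Record fields are assumed, not a
    prescribed schema."""
    def ok(s):
        if s.get("risk_tier") is None:          # every system must be classified
            return False
        if s["risk_tier"] == "high" and not s.get("human_validation_rules"):
            return False                        # high-risk needs validation rules
        if s.get("autonomous") and s.get("autonomy_tier") is None:
            return False                        # autonomous systems need a tier
        return bool(s.get("explainability_docs"))  # documented per system
    return all(ok(s) for s in systems)

# Two hypothetical registry records that satisfy all checks.
registry = [
    {"risk_tier": "high", "human_validation_rules": ["4-eyes review"],
     "autonomous": True, "autonomy_tier": 1, "explainability_docs": ["spec.md"]},
    {"risk_tier": "minimal", "human_validation_rules": [],
     "autonomous": False, "autonomy_tier": None, "explainability_docs": ["faq.md"]},
]
print(gate_m_passed(registry))   # prints True
```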

Articles from the Body of Knowledge that are tagged to the Model stage or are lifecycle-wide and apply here.


Cross-Cutting Concerns