AITF M1.3-Art11 v1.0 Reviewed 2026-04-06 Open Access
M1.3 The 20-Domain Maturity Model
AITF · Foundations

AI Supply Chain Governance: The Missing Domain



COMPEL Certification Body of Knowledge — Module 1.3: The 20-Domain Maturity Model
Article 11 — Domain 20: AI Supply Chain and Third-Party Governance


The Enterprise AI Reality: Most AI Is Procured, Not Built

There is a persistent misconception shaping enterprise AI governance today. When organizations think about governing AI, they think about the models their data science teams build, the machine learning pipelines their engineers deploy, and the inference endpoints their platform teams manage. This is understandable. These are visible, tangible, and within the organization’s direct control.

But they represent a shrinking fraction of the AI that actually operates within the enterprise.

Consider the modern enterprise technology stack. Microsoft 365 now includes Copilot across Word, Excel, PowerPoint, Outlook, and Teams. Salesforce embeds Einstein AI across its CRM, marketing automation, and service cloud. ServiceNow uses AI for ticket classification, knowledge article suggestion, and workflow automation. SAP integrates AI into demand forecasting, invoice matching, and procurement optimization. Workday applies machine learning to talent acquisition, compensation benchmarking, and workforce planning. Slack, Zoom, Google Workspace, Adobe Creative Cloud, HubSpot, Zendesk — virtually every enterprise SaaS platform now incorporates AI capabilities.

None of these AI systems were built by the enterprise. None of them are subject to the enterprise’s model development lifecycle. None of them are tested in the enterprise’s AI testing framework. And in most organizations, none of them are covered by the enterprise’s AI governance program.

This is not a minor oversight. Research from Gartner, McKinsey, and Forrester consistently indicates that procured and embedded AI represents between 60 and 80 percent of the AI capabilities operating within a typical large enterprise. The exact percentage varies by industry and organizational maturity, but the directional finding is consistent: the majority of enterprise AI is third-party AI.

Domain 20, AI Supply Chain and Third-Party Governance, exists because governing only the AI you build while ignoring the AI you buy is not governance — it is theater.

The Shadow AI Problem in Organizations

Shadow AI is the AI equivalent of shadow IT, but with higher stakes and lower visibility. Shadow IT typically involves employees using unauthorized cloud services, personal devices, or unapproved software. Shadow AI involves employees — and increasingly, entire business units — using AI capabilities that the governance function does not know exist.

Shadow AI takes multiple forms in the enterprise:

Embedded SaaS AI. When an organization licenses Salesforce, it gets Einstein AI. When it licenses Microsoft 365, it gets Copilot. When it licenses ServiceNow, it gets AI-powered workflows. These AI capabilities arrive as part of broader platform licenses. They are often enabled by default or activated by administrators without AI governance review. The procurement decision was made based on the platform’s primary functionality, not its AI capabilities. The result is AI systems operating in production that were never assessed for bias, transparency, accuracy, or regulatory compliance.

Individual AI tool adoption. Knowledge workers across the enterprise are using ChatGPT, Claude, Gemini, Perplexity, and dozens of other AI tools for drafting documents, analyzing data, summarizing meetings, writing code, and generating presentations. Some of these tools are accessed through personal accounts. Some are accessed through team or departmental licenses that were purchased on corporate credit cards without IT or governance approval. Some are accessed through free tiers that require no purchase at all.

API and developer AI. Development teams integrate AI APIs from OpenAI, Anthropic, Google, Cohere, and other providers into internal applications. These integrations may bypass the formal vendor assessment process if they are classified as “development tools” rather than “enterprise software.” The AI capabilities are embedded within internal applications, making them invisible to governance functions that focus on standalone AI deployments.

Departmental AI procurement. Individual departments or business units purchase AI-specific tools — AI-powered analytics platforms, AI-driven recruitment tools, AI-enabled customer service bots — using departmental budgets and procurement authority. These purchases may fall below the threshold that triggers enterprise procurement review. They create pockets of AI usage that are governed, if at all, by departmental standards that may or may not align with enterprise AI governance requirements.

The cumulative effect is that the typical large enterprise has significantly more AI systems operating than its governance function is aware of. The governed AI — the models built internally and formally deployed — represents a small fraction of total AI exposure. The ungoverned AI — the procured, embedded, and individually adopted AI — represents the majority of risk.

Introduction to AI Supply Chain Governance Concepts

AI supply chain governance is the systematic practice of identifying, assessing, monitoring, and managing the risks associated with AI systems that the enterprise obtains from third parties. It applies the principles of supply chain risk management — well established in physical supply chains and increasingly mature in software supply chains — to the specific challenges of AI.

The concept builds on three established governance disciplines:

Third-party risk management (TPRM). Enterprise TPRM programs assess and monitor the risks created by vendors, suppliers, and service providers. AI supply chain governance extends TPRM to address the unique risks that AI creates, including model bias, training data provenance, algorithmic opacity, and automated decision-making without human oversight.

Software supply chain security. The software industry has developed frameworks for managing software supply chain risk, including Software Bills of Materials (SBOMs), dependency scanning, vulnerability management, and supply chain attestation. AI supply chain governance extends these concepts to address the unique components of AI systems, including training data, model architectures, fine-tuning processes, and inference pipelines.

Vendor governance. Traditional vendor governance focuses on contractual compliance, service level agreements, financial stability, and operational risk. AI supply chain governance extends vendor governance to include AI-specific requirements such as model performance commitments, bias testing obligations, transparency requirements, and incident notification procedures.

The distinctive challenges of AI supply chain governance include:

Opacity. Many AI vendors treat their models as proprietary intellectual property. They may not disclose the training data used, the model architecture employed, the testing performed, or the known limitations identified. This opacity makes traditional vendor assessment approaches insufficient — you cannot assess what you cannot see.

Dynamism. AI models are updated frequently. A vendor may retrain a model, adjust its parameters, or modify its behavior without notice. The AI system you assessed during procurement may not be the AI system operating today. This creates a need for continuous monitoring that goes beyond traditional periodic vendor reviews.

Cascading risk. AI supply chains are often multi-tiered. Your SaaS vendor may itself use AI from a foundation model provider, who trained on data from multiple sources, using compute infrastructure from a cloud provider. A bias in a foundation model cascades through every application built on it. A security vulnerability in the AI infrastructure affects every model deployed on it.

Shared responsibility ambiguity. When an AI system produces a biased outcome, who is responsible? The organization that deployed it? The vendor that provided it? The foundation model provider whose model it is built on? The data provider whose training data contained the bias? AI supply chains create shared responsibility challenges that traditional vendor contracts are not designed to address.
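Cascading risk lends itself to a concrete illustration. The sketch below (all component names are hypothetical) models a multi-tier AI supply chain as a dependency map and walks it to find every deployed system transitively affected by a flaw in an upstream component:

```python
# Hypothetical multi-tier AI supply chain: each system lists the upstream
# components it is built on. Names are illustrative, not real products.
supply_chain = {
    "crm_assistant": ["vendor_llm_api"],
    "hr_screening_tool": ["vendor_llm_api", "resume_dataset"],
    "vendor_llm_api": ["foundation_model_x"],
    "foundation_model_x": ["web_crawl_corpus"],
}

def affected_by(component: str) -> set[str]:
    """Return every system that directly or transitively depends on `component`."""
    affected: set[str] = set()
    changed = True
    while changed:  # iterate to a fixpoint over the dependency map
        changed = False
        for system, deps in supply_chain.items():
            if system in affected:
                continue
            # Affected if it uses the flawed component, or any already-affected system
            if component in deps or affected & set(deps):
                affected.add(system)
                changed = True
    return affected

# A bias found in the foundation model cascades upward through every tier:
print(sorted(affected_by("foundation_model_x")))
# → ['crm_assistant', 'hr_screening_tool', 'vendor_llm_api']
```

The point of the exercise is that a single upstream flaw contaminates every application above it, which is why multi-tier visibility (knowing your vendor's AI vendors) appears later as a Level 4 maturity criterion.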

What an AI Bill of Materials Is and Why It Matters

An AI Bill of Materials (AI-BOM) is a structured, machine-readable document that describes the components, dependencies, and provenance of an AI system. It is the AI equivalent of a Software Bill of Materials (SBOM), extended to capture the unique components of AI systems.

A comprehensive AI-BOM typically includes:

Model information. The model architecture (transformer, convolutional neural network, gradient-boosted trees, etc.), the model version, the framework used (PyTorch, TensorFlow, JAX, etc.), the model size (parameters, layers, embedding dimensions), and the intended use cases.

Training data description. The datasets used for training and fine-tuning, including their sources, sizes, temporal coverage, geographic coverage, demographic composition, known biases, and licensing terms. This does not require sharing the data itself — it requires describing what the data is and where it came from.

Evaluation results. The benchmarks used to evaluate the model, the metrics measured (accuracy, precision, recall, F1, fairness metrics, robustness metrics), the results achieved, and any known failure modes or limitations.

Dependencies. The software dependencies (libraries, frameworks, runtime environments), hardware dependencies (GPU requirements, memory requirements), and service dependencies (API endpoints, authentication services, data feeds) that the AI system requires.

Provenance chain. The organizations involved in creating the AI system, the roles they played (data provider, model trainer, fine-tuner, deployer), and the attestations they provide about their practices.
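The components above can be captured in a simple machine-readable record. A minimal sketch in Python follows; the field names and values are illustrative, not a formal AI-BOM schema:

```python
import json

# Illustrative AI-BOM record mirroring the components described above.
# Field names are hypothetical; real schemas (and vendor disclosures) vary.
ai_bom = {
    "model": {
        "name": "vendor-support-classifier",
        "version": "2.3.1",
        "architecture": "transformer",
        "framework": "PyTorch",
        "parameters": 7_000_000_000,
        "intended_use": ["customer support ticket triage"],
    },
    "training_data": [
        {
            "name": "support_tickets_2020_2024",
            "source": "vendor-internal",
            "temporal_coverage": "2020-2024",
            "known_biases": ["English-language skew"],
            "license": "proprietary",
        }
    ],
    "evaluation": {
        "benchmarks": {"accuracy": 0.91, "f1": 0.88},
        "fairness_metrics": {"demographic_parity_gap": 0.04},
        "known_limitations": ["degrades on non-English tickets"],
    },
    "dependencies": {
        "software": ["torch>=2.0", "transformers"],
        "services": ["https://api.vendor.example/v1/classify"],
    },
    "provenance": [
        {"organization": "FoundationCo", "role": "base model trainer"},
        {"organization": "VendorCo", "role": "fine-tuner and deployer"},
    ],
}

# Serialize for exchange with vendors or archival in the AI inventory
record = json.dumps(ai_bom, indent=2)
```

Note that the training data section describes the data (source, coverage, known biases, license) without containing it, which is exactly the disclosure posture the AI-BOM concept asks of vendors.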

The AI-BOM concept is gaining regulatory and standards traction. The EU AI Act requires providers of high-risk AI systems to document training data, model architecture, and evaluation results. NIST’s AI Risk Management Framework (AI RMF), particularly the MAP function’s MAP 4.1 and MAP 4.2 subcategories, calls for mapping and documenting the risks of AI system components and dependencies, including third-party software and data. ISO/IEC 42001:2023 (Annex A, Control A.10) addresses third-party relationships and requires organizations to establish policies for AI obtained from external sources.

For enterprise governance, the AI-BOM serves three critical functions. First, it enables informed procurement decisions by providing the information needed to assess AI risk before deployment. Second, it supports ongoing governance by documenting what is deployed and enabling change detection when vendors update their models. Third, it provides regulatory evidence by demonstrating that the organization understands the AI systems it operates, regardless of who built them.
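The change-detection function can be sketched as a diff between the AI-BOM recorded at assessment time and the vendor's current disclosure. This is a hedged illustration, assuming a flat record with hypothetical field names; real AI-BOM formats and vendor disclosure mechanisms vary:

```python
def bom_drift(assessed: dict, current: dict) -> list[str]:
    """Report fields where the vendor's current AI-BOM differs from the one
    recorded when the system was assessed and approved."""
    watched_fields = ("model_version", "architecture", "training_data_hash")
    drift = []
    for field in watched_fields:
        if assessed.get(field) != current.get(field):
            drift.append(f"{field}: {assessed.get(field)!r} -> {current.get(field)!r}")
    return drift

# AI-BOM snapshot taken at procurement time (illustrative values)
assessed = {"model_version": "2.3.1", "architecture": "transformer",
            "training_data_hash": "sha256:ab12"}
# The vendor's current disclosure: the model was silently retrained
current = {"model_version": "2.4.0", "architecture": "transformer",
           "training_data_hash": "sha256:ff90"}

for change in bom_drift(assessed, current):
    print(change)  # any drift should trigger re-assessment, not just a log entry
```

Run periodically, a check like this turns the "dynamism" challenge described earlier into an operational control: the AI system you assessed is continuously compared against the AI system actually operating.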

How Domain 20 Fits into the COMPEL Maturity Model

Domain 20 sits within the Governance pillar of the COMPEL maturity model, alongside the other governance domains: AI Strategy and Alignment (D14), AI Ethics and Responsible AI (D15), Regulatory Compliance (D16), AI Risk Management (D17), and AI Governance Structure (D18). Its placement in the Governance pillar reflects the fact that third-party AI governance is fundamentally a governance discipline — it requires policies, processes, accountability structures, and oversight mechanisms, not just technical controls.

Domain 20 also has strong connections to domains in other pillars:

Technology pillar. Domain 11 (AI Infrastructure and Platform) must account for third-party AI platforms and APIs. Domain 12 (AI Security) must address supply chain attack vectors. Domain 13 (AI Integration) must manage the integration of procured AI into enterprise systems.

Process pillar. Domain 7 (AI Use Case Management) must include procured AI in its use case inventory. Domain 8 (Data Governance and Management) must address the data shared with and received from AI vendors. Domain 10 (Continuous Improvement) must incorporate third-party AI performance into its improvement cycles.

People pillar. Domain 3 (AI Literacy and Training) must ensure that employees understand the AI they use, including procured AI. Domain 4 (Change Management and Adoption) must manage the organizational impact of third-party AI adoption.

The maturity levels for Domain 20 follow the COMPEL five-level model:

Level 1 — Foundational. The organization has no formal awareness of its third-party AI exposure. AI procurement decisions do not include AI-specific risk assessment. There is no inventory of procured AI systems. Shadow AI is unaddressed.

Level 2 — Developing. The organization has begun to inventory its procured AI systems. Basic vendor questionnaires include some AI-specific questions. AI governance policies acknowledge the existence of third-party AI but do not provide comprehensive coverage. Shadow AI has been identified as a concern but not systematically addressed.

Level 3 — Defined. A formal third-party AI governance framework exists, including policies, assessment procedures, and contractual requirements. The organization maintains a comprehensive inventory of procured AI. Shadow AI discovery processes are in place. Vendor assessments include AI-specific criteria covering bias, transparency, security, and compliance.

Level 4 — Managed. Third-party AI governance is integrated into enterprise risk management. Continuous monitoring of AI vendor performance is operational. AI-BOMs are required for critical AI procurements. Supply chain risk is quantified and reported to leadership. Multi-tier supply chain visibility extends to understanding your vendor’s AI vendors.

Level 5 — Optimizing. The organization leads in third-party AI governance practices. Predictive supply chain risk management identifies emerging risks before they materialize. The organization actively shapes industry standards for AI supply chain governance. Collaborative governance relationships with strategic AI vendors drive mutual improvement. The organization’s third-party AI governance practices are benchmarked and referenced by peers.
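One way to operationalize these levels is a gated assessment: a level is reached only if its criteria and every lower level’s criteria are satisfied. A minimal sketch, where the criteria strings are abbreviated paraphrases of the level descriptions above (not an official COMPEL scoring rubric):

```python
# Abbreviated Domain 20 criteria per maturity level (illustrative paraphrases)
CRITERIA = {
    2: ["procured_ai_inventory_started", "ai_questions_in_vendor_questionnaire"],
    3: ["formal_third_party_ai_framework", "shadow_ai_discovery_process"],
    4: ["continuous_vendor_monitoring", "ai_bom_required_for_critical_buys"],
    5: ["predictive_supply_chain_risk", "industry_standards_leadership"],
}

def maturity_level(evidence: set[str]) -> int:
    """Highest level whose criteria, and all lower levels' criteria, are met."""
    level = 1  # Level 1 (Foundational) is the floor: no criteria required
    for lvl in sorted(CRITERIA):
        if set(CRITERIA[lvl]) <= evidence:
            level = lvl
        else:
            break  # gated model: a gap at one level caps the overall score
    return level

evidence = {"procured_ai_inventory_started", "ai_questions_in_vendor_questionnaire",
            "formal_third_party_ai_framework", "shadow_ai_discovery_process"}
print(maturity_level(evidence))  # → 3
```

The gating matters: an organization requiring AI-BOMs for critical purchases (a Level 4 practice) but lacking a shadow AI discovery process (a Level 3 criterion) still scores Level 2, which reflects the model's intent that maturity be built on complete lower-level foundations.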

The Governance Imperative

AI supply chain governance is not optional. Regulatory frameworks increasingly require it. The EU AI Act addresses responsibilities along the AI value chain (Article 25) and imposes obligations on deployers of high-risk AI systems, including using those systems in accordance with the provider’s instructions and exercising appropriate oversight (Article 26). NIST AI RMF’s MAP 4 function calls for mapping the risks of all AI system components, including third-party software and data. ISO/IEC 42001:2023’s Annex A, Control A.10 (third-party and customer relationships) requires organizations to establish and maintain policies for AI products and services obtained from external suppliers.

Beyond regulatory compliance, the business case is straightforward: an AI governance program that governs only the AI you build while ignoring the AI you buy has a coverage gap that grows wider every year as AI becomes more deeply embedded in enterprise software. Domain 20 closes that gap.

The articles that follow in this domain series progressively build the knowledge and skills needed to implement effective AI supply chain governance — from the foundational awareness established here, through practitioner-level methodology and assessment techniques, to governance professional-level enterprise architecture, and leader-level strategic oversight. Each level builds on the previous, ensuring that the organization’s AI supply chain governance matures in parallel with its broader AI governance capabilities.


Next in the Domain 20 series: Article 13 — Third-Party AI: The Governance Challenge You Are Not Seeing (Module 1.4)