COMPEL Certification Body of Knowledge
Module 1.4: AI Technology Landscape, Article 13
Domain 20: AI Supply Chain and Third-Party Governance
The Invisible AI Layer in Enterprise Software
Every major enterprise software platform shipped in 2025 or later contains AI. This is no longer a prediction or a trend — it is the baseline reality of enterprise technology.
Microsoft 365 Copilot is integrated across the entire productivity suite. It drafts emails in Outlook, generates slides in PowerPoint, analyzes data in Excel, summarizes documents in Word, and captures action items in Teams meetings. It is powered by large language models that process organizational data (emails, documents, chats, calendar entries) to generate contextually relevant outputs. Every organization with a Microsoft 365 E3 or E5 license can add Copilot as a paid per-user add-on. Many have done so. Few have subjected it to the same governance rigor they would apply to an internally developed AI system.
Salesforce Einstein is embedded across Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud. It predicts lead conversion probability, recommends next-best actions for sales representatives, classifies and routes customer service cases, optimizes email send times, and personalizes product recommendations. Einstein GPT adds generative AI capabilities: drafting customer emails, generating knowledge articles, and creating personalized marketing content. Organizations that licensed Salesforce for CRM now have AI making decisions about which leads to prioritize, which customers to contact, and what messages to send.
ServiceNow integrates AI into IT Service Management (ITSM), HR Service Delivery, Customer Service Management, and Security Operations. AI classifies and routes tickets, suggests resolution steps, predicts service outages, and automates routine workflows. The AI-powered Virtual Agent handles employee and customer inquiries, making decisions about how to interpret questions and what responses to provide.
SAP embeds AI across its ERP, supply chain, procurement, and finance modules. AI optimizes demand forecasting, automates invoice matching, detects anomalies in financial transactions, and recommends procurement decisions. These AI capabilities operate on some of the most sensitive data in the enterprise — financial records, supply chain data, procurement transactions, and employee information.
This pattern extends to virtually every category of enterprise software. Workday uses AI for talent acquisition and workforce planning. Zoom uses AI for meeting summarization and noise cancellation. Slack uses AI for channel summarization and workflow automation. Adobe uses AI for content generation and creative optimization. HubSpot uses AI for lead scoring and content recommendation.
The cumulative result is that the typical large enterprise has dozens — and in some cases hundreds — of AI systems operating within its technology stack that were never individually procured, never individually assessed, and are not individually governed. They arrived as features within platform licenses, and they operate with whatever governance (or lack thereof) the vendor applies.
Why Third-Party AI Creates Ungoverned Risk
Third-party AI introduces categories of risk that traditional vendor risk management is not equipped to address. Understanding these risk categories is the first step toward governing them.
Decision-Making Risk
Many third-party AI systems make or influence decisions that have material consequences for the organization and the people it serves. A recruitment AI that screens resumes is making decisions about people’s livelihoods. A credit scoring AI that evaluates loan applications is making decisions about financial access. A customer service AI that routes complaints is making decisions about service quality. A marketing AI that personalizes content is making decisions about what information people see.
These decisions are subject to bias. Every AI model reflects the biases present in its training data and encoded in its design choices. When the organization builds its own AI, it can (and should) test for bias, adjust for bias, and monitor for bias. When the organization uses a vendor’s AI, it is dependent on the vendor’s bias testing — which may or may not be adequate, may or may not cover the demographics relevant to the organization’s context, and may or may not be conducted with the rigor that the organization’s risk tolerance requires.
The EU AI Act makes this concrete. Article 26 places obligations on deployers of high-risk AI systems, not just providers. If an organization uses a vendor's AI system in a high-risk context (such as employment, credit, or education), the organization has regulatory obligations regardless of whether it built the AI or bought it. These obligations include ensuring human oversight, monitoring for discriminatory impacts, and, for certain categories of deployers, conducting fundamental rights impact assessments under Article 27.
Data Risk
Third-party AI systems consume organizational data. Copilot reads emails and documents. Einstein processes CRM data. ServiceNow AI analyzes ticket content. This data consumption raises several risk categories.
Data leakage. What happens to organizational data after it is processed by the vendor’s AI? Is it used to train models? Is it retained? Can it be reconstructed? Is it shared with third parties? The answers vary by vendor, by product, by configuration option, and by contract terms — and they may change when the vendor updates its terms of service.
Data residency. Where is organizational data processed? Many AI models run on cloud infrastructure that may span multiple geographic regions. For organizations subject to data residency or cross-border transfer requirements (under GDPR's transfer rules, China's PIPL, or sector-specific regulations), the location of AI processing matters. The vendor's AI infrastructure may not align with the organization's data residency requirements.
Data minimization. AI systems often consume more data than strictly necessary for their intended function. A meeting summarization AI processes the entire meeting transcript, including off-topic discussions, confidential asides, and sensitive information. A document drafting AI may access documents beyond those immediately relevant. This broad data consumption may conflict with data minimization principles embedded in regulations like GDPR.
Transparency Risk
AI systems are often opaque. Foundation models are particularly opaque — their architectures, training data, and decision-making processes are typically proprietary. When an organization uses a vendor’s AI, it inherits this opacity.
This opacity creates governance challenges. The organization cannot fully explain how the AI reaches its outputs. It cannot independently verify that the AI is performing as claimed. It cannot identify the root cause of errors or biased outputs. It cannot assess whether the AI’s behavior has changed after a vendor update.
Transparency obligations compound this challenge. The EU AI Act requires deployers of high-risk AI systems to provide meaningful information to affected individuals about how AI systems influence decisions about them. If the organization cannot explain how its vendor’s AI works, it cannot meet this transparency obligation.
Concentration Risk
Enterprise AI supply chains are highly concentrated. A small number of foundation model providers — OpenAI, Anthropic, Google, Meta, Mistral — underpin a large and growing number of AI applications. When Salesforce Einstein, Microsoft Copilot, and a dozen other enterprise AI tools all ultimately depend on models from the same two or three providers, the organization faces concentration risk.
A security breach at a foundation model provider could affect every application built on its models. A model degradation at a foundation model provider could impair multiple enterprise functions simultaneously. A terms-of-service change at a foundation model provider could create compliance issues across the entire portfolio of applications that depend on it.
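One way to make concentration risk visible is to map each AI-enabled application to the upstream foundation model provider it depends on and count the dependencies. A minimal sketch, in which every application name and provider pairing is hypothetical rather than a claim about any vendor's actual architecture:

```python
from collections import Counter

# Illustrative mapping of enterprise AI applications to the upstream
# foundation model provider each one depends on. These pairings are
# hypothetical examples, not claims about any vendor's actual stack.
app_dependencies = {
    "crm_assistant": "Provider A",
    "office_copilot": "Provider A",
    "itsm_virtual_agent": "Provider B",
    "marketing_personalizer": "Provider A",
    "hr_screening_tool": "Provider C",
}

# Count applications per upstream provider to expose concentration.
concentration = Counter(app_dependencies.values())

for provider, count in concentration.most_common():
    share = count / len(app_dependencies)
    print(f"{provider}: {count} applications ({share:.0%} of portfolio)")
```

Even this coarse count surfaces the key question: how much of the portfolio fails at once if a single upstream provider has an incident.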
Velocity Risk
AI vendors update their models frequently. These updates may change the model’s behavior in ways that affect the organization. A model update that improves average accuracy may degrade accuracy for specific demographic groups. A model update that changes the model’s tone may conflict with the organization’s brand guidelines. A model update that modifies the model’s content policies may affect the organization’s use case.
Traditional vendor governance operates on periodic review cycles — annual assessments, quarterly business reviews, semi-annual audits. AI model updates happen on a much faster cadence. By the time the organization conducts its next periodic vendor review, the vendor’s AI may have been updated multiple times, each update potentially changing the risk profile.
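One practical mitigation is to capture the vendor's model version or changelog reference at every review and flag any change since the previous snapshot. A minimal sketch, assuming the organization can obtain some version identifier from vendor release notes, an admin console, or API metadata (the field names and version strings below are hypothetical):

```python
from datetime import date

# Snapshots of a vendor model version captured at successive governance
# reviews. Version strings are hypothetical; in practice they would come
# from vendor release notes, admin consoles, or API metadata.
previous_review = {"vendor_model": "example-model-v2.1", "captured": date(2025, 1, 15)}
current_review = {"vendor_model": "example-model-v2.4", "captured": date(2025, 7, 15)}

def flag_model_change(prev: dict, curr: dict) -> bool:
    """Return True when the vendor model changed between reviews,
    signalling that prior assessment results may no longer hold."""
    return prev["vendor_model"] != curr["vendor_model"]

if flag_model_change(previous_review, current_review):
    print("Model changed between reviews; re-run impact assessment.")
```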
The Procured AI Governance Gap
The gap between the governance applied to internally built AI and the governance applied to procured AI is significant in most organizations. This gap exists for several identifiable reasons:
Organizational structure. AI governance teams are typically aligned with data science and engineering functions. They govern what these functions produce. Procured AI falls outside their organizational scope. The procurement function handles vendor relationships but lacks AI-specific expertise. The result is that procured AI falls into the space between two organizational functions, governed by neither.
Governance framework design. Most AI governance frameworks were designed around the model development lifecycle: data collection, model training, model validation, model deployment, model monitoring. Procured AI does not follow this lifecycle. It arrives fully formed, and the organization’s governance framework has no intake process for it.
Visibility. Internally built AI is visible. It exists in code repositories, model registries, and deployment pipelines. Procured AI is often invisible. It exists as features within SaaS platforms, activated by configuration settings rather than deployment decisions. Governance teams cannot govern what they cannot see.
Contractual limitations. Organizations have limited ability to compel AI vendors to provide the transparency, testing, and assurance that comprehensive governance requires. Large enterprise software vendors set their terms, and individual customers often lack the bargaining power to negotiate AI-specific contractual provisions.
Skills gap. Assessing third-party AI requires a combination of technical AI knowledge, procurement expertise, legal acumen, and risk management skills. This combination is rare. Most organizations do not have individuals or teams with the cross-functional skills needed to effectively assess and govern procured AI.
First Steps for Shadow AI Awareness
Organizations at the beginning of their AI supply chain governance journey can take immediate, practical steps to build awareness of their third-party AI exposure.
Step 1: Conduct an Enterprise SaaS AI Audit
Review every SaaS platform the organization licenses. For each platform, identify the AI capabilities it includes — whether enabled or available for enablement. Start with the major platforms: Microsoft 365, Salesforce, ServiceNow, SAP, Workday, and any industry-specific platforms. Document which AI features are enabled, when they were enabled, who authorized their enablement, and what organizational data they access.
This audit will almost certainly reveal AI capabilities that no one in the governance function knew about. This discovery is the point of the exercise.
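A simple way to capture the audit's output is one structured record per AI feature. A minimal sketch of one possible schema (the fields are a suggested starting point, not a COMPEL-mandated format, and the example values are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class SaaSAIFeatureRecord:
    """One row in the enterprise SaaS AI audit."""
    platform: str              # e.g. "Microsoft 365", "Salesforce"
    ai_feature: str            # e.g. "Copilot in Outlook"
    enabled: bool              # currently enabled, or merely available
    enabled_date: str | None   # when it was enabled, if known (ISO date)
    authorized_by: str | None  # who approved enablement, if anyone
    data_accessed: list[str] = field(default_factory=list)

# Example entry; all values are illustrative.
record = SaaSAIFeatureRecord(
    platform="Salesforce",
    ai_feature="Einstein lead scoring",
    enabled=True,
    enabled_date="2024-03-01",
    authorized_by=None,  # a common audit finding: nobody formally approved it
    data_accessed=["CRM contact records", "opportunity history"],
)
print(record)
```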
Step 2: Map AI to Risk Categories
For each identified AI capability, map it to a risk category. Which ones make or influence decisions about people? Which ones process personal data? Which ones operate in regulated contexts? Which ones are customer-facing? This mapping does not require deep technical analysis — it requires understanding what the AI does and who it affects.
The EU AI Act’s risk classification framework (minimal, limited, high, and unacceptable risk) provides a useful starting taxonomy. NIST AI RMF’s risk identification process (MAP function) provides an alternative approach. Either framework can be used to prioritize which third-party AI systems require immediate governance attention.
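The mapping can be operationalized as a short triage function over the questions above. A minimal sketch loosely inspired by the EU AI Act's tiers; the attribute names and tier assignments are simplifications for illustration, not legal risk classifications:

```python
def triage_tier(makes_decisions_about_people: bool,
                processes_personal_data: bool,
                regulated_context: bool,
                customer_facing: bool) -> str:
    """Assign a coarse governance-priority tier to a third-party AI
    capability. A simplified illustration, not a legal classification."""
    if makes_decisions_about_people and regulated_context:
        return "high"    # e.g. employment, credit, education contexts
    if makes_decisions_about_people or processes_personal_data:
        return "limited"
    if customer_facing:
        return "limited"
    return "minimal"

# Example: a vendor's resume-screening feature lands in the top tier.
print(triage_tier(makes_decisions_about_people=True,
                  processes_personal_data=True,
                  regulated_context=True,
                  customer_facing=False))  # -> "high"
```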
Step 3: Establish an AI Inventory
Create a central inventory of all AI systems operating within the enterprise — both internally built and externally procured. For each system, record the vendor, the AI capability, the data it accesses, the decisions it makes or influences, the users it serves, and the date it was deployed or enabled. This inventory is the foundation of all subsequent governance activities.
The inventory does not need to be perfect on day one. It needs to exist and it needs to be maintained. Start with what you know, expand through the audit process, and establish a process for keeping the inventory current as new AI systems are procured or new AI features are enabled in existing platforms.
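The inventory itself can start as a simple keyed collection that supports adding and enriching entries as the audit progresses, rather than duplicating them. A minimal sketch, with one possible field layout:

```python
# Central AI inventory keyed by (vendor, capability) so that repeated
# audit passes update existing entries rather than duplicating them.
inventory: dict[tuple[str, str], dict] = {}

def upsert(vendor: str, capability: str, **attrs) -> None:
    """Add a new inventory entry or merge updated attributes into an
    existing one."""
    key = (vendor, capability)
    inventory.setdefault(key, {"vendor": vendor, "capability": capability})
    inventory[key].update(attrs)

# First pass: record what is known.
upsert("ServiceNow", "Virtual Agent", data_accessed=["ticket content"])
# Later pass: enrich the same entry instead of creating a duplicate.
upsert("ServiceNow", "Virtual Agent", users="employees and customers",
       deployed="2023-11-01")

print(inventory[("ServiceNow", "Virtual Agent")])
```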
Step 4: Review Existing Vendor Contracts
For the highest-risk procured AI systems, review the existing vendor contracts. What do they say about AI? In many cases, the answer is nothing — the contract predates the vendor’s AI capabilities. When contracts do address AI, examine the provisions for data usage, model training, transparency, incident notification, and liability. Identify the gaps between what the contract provides and what governance requires.
This contract review will reveal that most existing vendor contracts are inadequate for governing AI. This finding is expected and provides the basis for contract renegotiation and for establishing AI-specific contractual requirements for future procurements.
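The review can be structured as a checklist of required provisions compared against each contract, with missing provisions reported as gaps. A minimal sketch using the provision categories named above (the contract names and findings are illustrative):

```python
# Provisions a governance-adequate AI contract should address,
# drawn from the categories discussed above.
REQUIRED_PROVISIONS = {
    "data_usage", "model_training", "transparency",
    "incident_notification", "liability",
}

# Illustrative findings: a legacy contract that predates the vendor's
# AI features addresses none of the provisions; a newer one covers two.
contract_provisions = {
    "legacy_crm_contract": set(),
    "recent_itsm_contract": {"data_usage", "incident_notification"},
}

for contract, present in contract_provisions.items():
    gaps = REQUIRED_PROVISIONS - present
    print(f"{contract}: missing {sorted(gaps) or 'nothing'}")
```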
Step 5: Establish a Procurement Gate
Add AI-specific questions to the vendor procurement process. Before any new SaaS platform is licensed, ask: Does this platform include AI capabilities? What data will the AI access? What decisions will the AI make or influence? Can the AI features be disabled if needed? What transparency does the vendor provide about its AI? What bias testing has been performed?
This gate does not need to be elaborate. It needs to exist. Even a basic set of AI-specific procurement questions dramatically increases the organization’s awareness of the AI entering its environment.
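Even as a lightweight form, the gate can be encoded so that a procurement cannot advance until every question has an answer. A minimal sketch in which the question set mirrors the list above and the enforcement mechanism is illustrative:

```python
# AI-specific procurement gate questions, mirroring the list above.
GATE_QUESTIONS = [
    "Does this platform include AI capabilities?",
    "What data will the AI access?",
    "What decisions will the AI make or influence?",
    "Can the AI features be disabled if needed?",
    "What transparency does the vendor provide about its AI?",
    "What bias testing has been performed?",
]

def gate_passes(answers: dict[str, str]) -> bool:
    """Return True only when every gate question has a non-empty answer.
    An unanswered question blocks the procurement from advancing."""
    return all(answers.get(q, "").strip() for q in GATE_QUESTIONS)

# Example: a submission missing the bias-testing answer is blocked.
submission = {q: "see vendor response" for q in GATE_QUESTIONS[:-1]}
print("proceed" if gate_passes(submission) else "blocked: incomplete answers")
```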
The Path Forward
Domain 20 of the COMPEL maturity model provides the comprehensive framework for advancing beyond these initial steps. The subsequent articles in this domain series build progressively on this foundation:
At the practitioner level (AITP, Level 2), you will learn the detailed methodologies for adapting COMPEL to procured AI governance, conducting shadow AI discovery at enterprise scale, and performing comprehensive vendor AI due diligence assessments.
At the governance professional level (AITGP, Level 3), you will develop the skills to design enterprise-scale AI supply chain governance architectures, implement AI-BOM standards, and build continuous monitoring frameworks for third-party AI.
At the leader level (AITL, Level 4), you will learn to provide board-level oversight of third-party AI risk, shape industry standards for AI supply chain governance, and build predictive supply chain risk management capabilities.
The journey begins with awareness. If you take only one action after reading this article, take this: find out how many AI systems are operating in your enterprise that your governance program does not cover. The answer will be larger than you expect, and it will make the case for Domain 20 more compellingly than any article can.
Previous in the Domain 20 series: Article 11 — AI Supply Chain Governance: The Missing Domain (Module 1.3)
Next in the Domain 20 series: Article 16 — COMPEL for Procured AI: Adapting the Methodology (Module 2.6)