COMPEL Certification Body of Knowledge — Module 2.6: Industry Applications and Regulatory Alignment
Article 16 — Domain 20: AI Supply Chain and Third-Party Governance
Why COMPEL Needs a Procured AI Adaptation
The COMPEL methodology — Calibrate, Organize, Model, Produce, Evaluate, Learn — was designed as a universal AI transformation framework. Its six stages provide a structured approach to assessing organizational readiness, building governance structures, designing target-state architectures, executing transformation initiatives, measuring outcomes, and capturing knowledge for continuous improvement.
However, the original COMPEL stage definitions assume a significant degree of control over the AI systems being governed. Calibrate assumes you can assess your AI development capabilities. Organize assumes you can structure teams and processes around AI delivery. Model assumes you can design the target-state AI architecture. Produce assumes you can build and deploy AI systems. Evaluate assumes you can measure the performance of AI you control. Learn assumes you can feed insights back into your AI development process.
When the AI is procured rather than built, each of these assumptions requires adaptation. You cannot calibrate what you do not control. You cannot organize around a development process that happens at the vendor’s site. You cannot model an architecture when the model is a black box. You cannot produce AI that comes pre-built. You cannot evaluate against benchmarks when you do not know the model’s training methodology. You cannot feed learnings back into a development process you have no access to.
This does not mean COMPEL is inapplicable to procured AI. It means COMPEL must be deliberately adapted, stage by stage, to the realities of governing AI you buy rather than AI you build. This article provides that adaptation.
Calibrate: Vendor Landscape Mapping and Third-Party AI Exposure Assessment
In the standard COMPEL model, the Calibrate stage assesses the organization’s current AI maturity across the 20 domains of the maturity model. For procured AI governance, Calibrate focuses on understanding the organization’s current third-party AI exposure and its capacity to govern it.
Third-Party AI Inventory
The first Calibrate activity is creating a comprehensive inventory of all third-party AI operating within the enterprise. This inventory must go beyond the AI systems that the governance function already knows about. It must discover the AI that arrived as embedded features in SaaS platforms, the AI that individual employees adopted, and the AI that development teams integrated via APIs.
The inventory should capture, for each identified AI system:
- Vendor identity. Who provides the AI? This includes both the direct vendor (e.g., Salesforce) and the underlying AI provider (e.g., OpenAI, which powers some Salesforce Einstein capabilities).
- AI capability description. What does the AI do? Classification, generation, prediction, recommendation, automation, or some combination.
- Data inputs. What organizational data does the AI consume? Customer data, employee data, financial data, operational data, intellectual property.
- Decision outputs. What decisions does the AI make or influence? Hiring recommendations, credit decisions, customer routing, content generation, risk scoring.
- User population. Who uses the AI? Internal employees, customers, partners, or automated systems.
- Deployment context. Where in the organization is the AI used? Which departments, business units, geographies, and processes.
- Activation history. When was the AI enabled? By whom? Through what process?
- Contractual basis. Under what agreement is the AI provided? Is AI explicitly covered in the contract?
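The attributes above map naturally onto a structured inventory record. A minimal sketch in Python (the field names and the example vendor are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ThirdPartyAIRecord:
    """One entry in the third-party AI inventory (illustrative schema)."""
    system_name: str                   # e.g. "CRM lead scoring"
    direct_vendor: str                 # vendor the contract is with
    upstream_provider: Optional[str]   # underlying AI provider, if known
    capability: str                    # classification, generation, prediction, ...
    data_inputs: list[str] = field(default_factory=list)       # customer, employee, ...
    decision_outputs: list[str] = field(default_factory=list)  # decisions made or influenced
    user_population: str = "internal employees"
    deployment_context: list[str] = field(default_factory=list)  # departments, geographies
    activation_date: Optional[date] = None
    activated_by: Optional[str] = None
    contract_covers_ai: bool = False   # is AI explicitly covered in the agreement?

# Hypothetical entry for a procured CRM scoring feature.
record = ThirdPartyAIRecord(
    system_name="CRM lead scoring",
    direct_vendor="ExampleCRM Inc.",
    upstream_provider="FoundationCo",
    capability="prediction",
    data_inputs=["customer data"],
    decision_outputs=["lead prioritization"],
)
```

A record like this also makes the gap analysis later in Calibrate mechanical: entries with `contract_covers_ai=False` or an unknown `upstream_provider` surface immediately.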
Vendor Landscape Mapping
Beyond individual AI systems, Calibrate assesses the vendor landscape holistically. This mapping identifies:
Concentration points. Are multiple critical AI capabilities dependent on the same vendor or the same foundation model? If three different enterprise platforms all use the same underlying language model, a single model failure could impair three business functions simultaneously.
Supply chain depth. For each direct vendor, who are the upstream AI providers? If the organization’s CRM vendor uses an AI model from a foundation model provider, and that foundation model provider uses training data from a data broker, the organization has a three-tier AI supply chain that it may not be aware of.
Criticality assessment. Which procured AI systems are critical to business operations? Which ones could be disabled without significant impact? Which ones would require immediate remediation if they failed or produced biased outputs?
Governance readiness. For each vendor, how transparent is the vendor about its AI? Does the vendor publish model cards? Does the vendor provide bias testing results? Does the vendor offer AI-BOM documentation? Does the vendor have an AI ethics or responsible AI program?
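Concentration points can be surfaced directly from the inventory by grouping systems on their upstream provider. A sketch, assuming a simple list of (system, provider) pairs with hypothetical names:

```python
from collections import defaultdict

def concentration_points(inventory, threshold=2):
    """Group systems by upstream AI provider and flag concentration risk.

    inventory: iterable of (system_name, upstream_provider) pairs.
    Returns providers backing `threshold` or more systems.
    """
    by_provider = defaultdict(list)
    for system, provider in inventory:
        by_provider[provider].append(system)
    return {p: systems for p, systems in by_provider.items()
            if len(systems) >= threshold}

# Hypothetical inventory: three platforms, two sharing one foundation model.
inventory = [
    ("CRM assistant", "FoundationCo"),
    ("HR screening", "FoundationCo"),
    ("Ticket routing", "OtherModelCo"),
]
flagged = concentration_points(inventory)
# FoundationCo backs two systems: a single model failure impairs both at once.
```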
Current Governance Gap Analysis
The Calibrate stage concludes with an honest assessment of the gap between the organization’s current governance coverage and its actual third-party AI exposure. This gap analysis typically reveals several categories of findings:
- AI systems operating without any governance coverage
- AI systems covered by general vendor governance but lacking AI-specific governance
- AI systems that have been assessed once but are not continuously monitored
- AI-specific risks (bias, transparency, data leakage) not addressed by existing vendor risk frameworks
- Contractual provisions that do not cover AI-specific obligations
This gap analysis provides the foundation for the Organize stage, which builds the structures needed to close the identified gaps.
Organize: Building the Vendor Governance Framework
In the standard COMPEL model, Organize builds the teams, processes, and governance structures needed for AI transformation. For procured AI, Organize focuses on establishing the organizational capacity to govern third-party AI effectively.
Roles and Responsibilities
Procured AI governance requires clear assignment of responsibilities across multiple organizational functions:
AI Governance Function. Sets the policies and standards for third-party AI governance. Defines the AI-specific risk criteria. Maintains the AI inventory. Conducts or commissions AI-specific vendor assessments. Reports on third-party AI risk to leadership.
Procurement Function. Incorporates AI-specific requirements into vendor selection, evaluation, and contracting processes. Ensures AI governance review is triggered when procuring AI-enabled products. Negotiates AI-specific contractual terms.
Legal Function. Reviews and drafts AI-specific contractual provisions. Assesses regulatory obligations related to procured AI. Advises on liability and indemnification for AI-related incidents.
Information Security Function. Assesses the security posture of AI vendors. Evaluates data protection measures for organizational data processed by vendor AI. Monitors for AI-specific security threats (prompt injection, model extraction, data poisoning).
Business Unit Leaders. Accountable for the AI risk within their units, including procured AI. Responsible for ensuring that procured AI within their units is registered in the AI inventory and subject to governance review.
IT Administrators. Manage the activation and configuration of AI features within enterprise platforms. Report AI feature enablement to the governance function. Implement technical controls (access restrictions, data loss prevention, logging) for procured AI.
Policy Framework
The Organize stage establishes the policy framework for procured AI governance. Key policies include:
AI Procurement Policy. Requires AI-specific assessment for any procurement involving AI capabilities. Defines the assessment criteria, approval authority, and documentation requirements. Establishes thresholds for when AI governance review is mandatory versus advisory.
Shadow AI Policy. Defines what constitutes unauthorized AI use. Establishes the process for discovering, assessing, and dispositioning unauthorized AI. Provides a path for approving currently unauthorized AI that passes governance review, rather than simply prohibiting all unapproved AI.
Third-Party AI Monitoring Policy. Requires ongoing monitoring of procured AI systems, not just point-in-time assessment. Defines monitoring frequency, metrics, and escalation triggers. Establishes the process for responding to vendor AI updates that change the risk profile.
AI Vendor Contractual Requirements. Defines the AI-specific terms that must be included in vendor contracts. Covers transparency, bias testing, incident notification, data usage, model update notification, and audit rights.
Process Design
The Organize stage designs the operational processes for procured AI governance:
AI Procurement Gate. A mandatory checkpoint in the procurement process for AI-enabled products. Triggers an AI-specific risk assessment before procurement approval.
AI Feature Enablement Process. A process for reviewing and approving the activation of AI features in existing enterprise platforms. Ensures that new AI features are assessed before they are enabled, not after.
Shadow AI Discovery Process. A periodic process for identifying unauthorized AI usage across the enterprise. Combines technical discovery (network traffic analysis, SaaS management platform scanning, API usage monitoring) with organizational discovery (user surveys, department interviews, budget review).
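The technical-discovery side of this process often reduces to matching outbound traffic against known AI provider endpoints. A minimal sketch, assuming telemetry is available as (source, destination domain) pairs; the domain list is illustrative and deliberately incomplete:

```python
# Known AI provider endpoints (illustrative list, not exhaustive).
KNOWN_AI_DOMAINS = {"api.openai.com", "api.anthropic.com",
                    "generativelanguage.googleapis.com"}

def flag_shadow_ai(traffic_log, approved_domains):
    """Flag outbound calls to AI endpoints not on the approved list.

    traffic_log: iterable of (source, destination_domain) pairs taken from
    network monitoring or SaaS-management telemetry.
    """
    return [(source, domain) for source, domain in traffic_log
            if domain in KNOWN_AI_DOMAINS and domain not in approved_domains]

log = [("marketing-laptop-12", "api.openai.com"),
       ("crm-server", "example-crm.com")]
hits = flag_shadow_ai(log, approved_domains={"api.anthropic.com"})
# The marketing laptop's unapproved AI call is flagged for assessment.
```

Technical flags like these are leads, not verdicts; the organizational discovery steps (surveys, interviews, budget review) confirm what the traffic actually represents.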
Vendor AI Incident Response Process. A process for responding when a vendor’s AI system produces harmful, biased, or inaccurate outputs. Defines escalation paths, communication protocols, remediation steps, and documentation requirements.
Model: Integration Architecture Governance
In the standard COMPEL model, the Model stage designs the target-state architecture for AI transformation. For procured AI, Model focuses on designing how third-party AI integrates with the enterprise’s governance architecture.
Governance Architecture for Procured AI
The target-state architecture must address how procured AI systems are governed throughout their lifecycle within the enterprise. Key architectural components include:
AI Inventory System. A central registry of all AI systems — built and procured — operating within the enterprise. This registry is the single source of truth for the organization’s AI footprint. It must support automated discovery (integration with SaaS management platforms, network monitoring tools, and identity management systems) and manual registration (for AI systems that cannot be automatically discovered).
Risk Assessment Framework. A standardized framework for assessing the risk of procured AI systems. The framework must accommodate the limited transparency that characterizes many vendor AI systems. It must include proxy indicators — vendor reputation, published responsible AI commitments, third-party audits, certification status — when direct assessment is not possible.
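When direct assessment is blocked by vendor opacity, the proxy indicators can be combined into a simple weighted score. A sketch with illustrative indicators and weights that an organization would tune to its own risk appetite:

```python
# Illustrative proxy indicators and weights (must sum to 1.0 here).
PROXY_WEIGHTS = {
    "publishes_model_cards": 0.25,
    "shares_bias_testing": 0.30,
    "third_party_audit": 0.25,
    "certified": 0.20,
}

def proxy_transparency_score(vendor_signals):
    """Score vendor transparency in [0, 1] from proxy indicators when
    direct model assessment is not possible.

    vendor_signals: dict mapping indicator name -> bool.
    """
    return sum(w for key, w in PROXY_WEIGHTS.items() if vendor_signals.get(key))

score = proxy_transparency_score(
    {"publishes_model_cards": True, "third_party_audit": True}
)
# 0.25 + 0.25 = 0.5: a middling score that would prompt deeper questioning.
```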
Monitoring Infrastructure. Technical infrastructure for continuously monitoring the behavior of procured AI systems. This includes output sampling (periodically capturing and analyzing AI outputs for bias, accuracy, and appropriateness), performance monitoring (tracking response times, error rates, and availability), and change detection (identifying when a vendor updates its AI models or changes their behavior).
Integration Controls. Technical controls that govern how procured AI systems access organizational data and how their outputs are used. These controls include data access restrictions (limiting what data the AI can access), output validation (human review of AI outputs before they are acted upon), logging (capturing AI inputs and outputs for audit and monitoring), and kill switches (the ability to disable AI features rapidly if needed).
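The kill-switch and logging controls can be combined into a thin gate that all calls into a vendor AI feature pass through. A minimal sketch, not a product API; the class and feature names are hypothetical:

```python
import logging

class AIFeatureGate:
    """Gate calls into a vendor AI feature: an enable flag acts as the kill
    switch, and every invocation is logged for audit (illustrative only)."""

    def __init__(self, feature_name):
        self.feature_name = feature_name
        self.enabled = True
        self.log = logging.getLogger(f"ai_gate.{feature_name}")

    def disable(self, reason):
        """Kill switch: rapidly block all further calls to the feature."""
        self.enabled = False
        self.log.warning("Kill switch: %s disabled (%s)", self.feature_name, reason)

    def call(self, vendor_fn, *args, **kwargs):
        if not self.enabled:
            raise RuntimeError(f"{self.feature_name} is disabled by governance")
        self.log.info("Invoking %s", self.feature_name)
        return vendor_fn(*args, **kwargs)

gate = AIFeatureGate("lead-scoring")
result = gate.call(lambda x: x * 2, 3)   # stand-in for a real vendor call
gate.disable("bias incident under investigation")
```

Routing calls through one gate per feature also gives the monitoring infrastructure a single place to sample inputs and outputs.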
Data Flow Architecture
A critical aspect of the Model stage is designing the data flows between the enterprise and its AI vendors. For each procured AI system, the architecture must specify:
- What data flows to the vendor’s AI system
- What processing the vendor performs on the data
- Whether and how the data is retained by the vendor
- What data flows back from the vendor’s AI to the enterprise
- Where data is processed geographically
- What encryption protects data in transit and at rest
- What data isolation guarantees the vendor provides (multi-tenant versus single-tenant, data partitioning)
This data flow architecture must be documented, reviewed, and maintained. When vendors update their AI systems or change their data practices, the data flow architecture must be updated to reflect the new reality.
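The per-system data flow specification can be kept as a machine-readable record alongside the inventory, which makes review and update tractable when vendors change their practices. A hypothetical entry, with field names mirroring the specification points above (not a standard schema):

```python
# Hypothetical data-flow record for one procured AI system.
data_flow = {
    "system": "CRM assistant",
    "data_to_vendor": ["customer contact records", "support tickets"],
    "vendor_processing": "summarization and reply drafting",
    "vendor_retention": "30 days, then deleted (per contract)",
    "data_returned": ["draft replies", "ticket summaries"],
    "processing_region": "EU",
    "encryption": {"in_transit": "TLS 1.3", "at_rest": "AES-256"},
    "isolation": "multi-tenant with per-tenant data partitioning",
}
```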
Produce: Vendor Monitoring and Operational Governance
In the standard COMPEL model, Produce is the execution stage — building and deploying AI systems. For procured AI, Produce focuses on operationalizing the governance framework designed in the Model stage.
Vendor Onboarding
When a new AI vendor is approved through the procurement gate, the Produce stage manages the onboarding process:
- Register the AI system in the AI inventory
- Conduct the initial AI-specific risk assessment
- Negotiate and execute AI-specific contractual terms
- Configure technical controls (data access restrictions, logging, monitoring)
- Establish monitoring baselines (initial performance benchmarks, initial output analysis)
- Train affected users on the AI system’s capabilities, limitations, and governance requirements
- Document the AI system’s integration architecture and data flows
Ongoing Monitoring
The core Produce activity for procured AI governance is continuous monitoring. This monitoring operates at multiple levels:
Output monitoring. Periodically sample and analyze the AI system’s outputs. For classification AI (ticket routing, lead scoring, risk assessment), measure accuracy, consistency, and fairness across relevant demographic groups. For generative AI (content drafting, code generation, summarization), assess quality, accuracy, appropriateness, and alignment with organizational standards. For recommendation AI (next-best-action, product recommendation, content personalization), evaluate relevance, diversity, and potential for filter bubbles or reinforcement of existing biases.
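For classification AI, fairness across demographic groups can be checked from sampled outputs with a disparate-impact-style ratio. A sketch using a toy sample; the groups, sample, and the commonly cited 0.8 threshold are illustrative:

```python
from collections import defaultdict

def group_rates(outputs):
    """outputs: iterable of (group, positive: bool) sampled decisions.
    Returns the positive-decision rate per group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for group, positive in outputs:
        counts[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact_ratio(rates):
    """Min/max ratio of positive rates across groups; values well below 1.0
    (e.g. under the commonly cited 0.8) warrant escalation."""
    return min(rates.values()) / max(rates.values())

# Toy sample of six routed decisions across two groups.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = group_rates(sample)            # A: 2/3, B: 1/3
ratio = disparate_impact_ratio(rates)  # 0.5 -> far below 0.8, escalate
```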
Performance monitoring. Track the AI system’s operational performance — response times, error rates, availability, throughput. Degradation in operational performance may indicate model issues, infrastructure problems, or increased load that affects output quality.
Change detection. Monitor for changes in the vendor’s AI system. This includes explicit changes (announced model updates, terms-of-service changes, feature modifications) and implicit changes (observed shifts in output patterns, accuracy changes, behavioral differences). When changes are detected, trigger a reassessment to determine whether the change affects the risk profile.
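Implicit change detection can be approximated by comparing the distribution of sampled outputs against a stored baseline. One simple approach (a sketch, not the only viable statistic) is total variation distance over categorical outputs, with an organization-tuned threshold:

```python
def total_variation_distance(baseline, current):
    """Compare two categorical output distributions (label -> count).
    Returns TVD in [0, 1]; a jump above a tuned threshold suggests the
    vendor's model behavior has shifted and reassessment should trigger."""
    labels = set(baseline) | set(current)

    def normalize(counts):
        total = sum(counts.values())
        return {k: counts.get(k, 0) / total for k in labels}

    b, c = normalize(baseline), normalize(current)
    return 0.5 * sum(abs(b[k] - c[k]) for k in labels)

# Hypothetical decision counts before and after an unannounced model update.
baseline = {"approve": 70, "review": 20, "deny": 10}
current = {"approve": 50, "review": 25, "deny": 25}
drift = total_variation_distance(baseline, current)  # 0.2
if drift > 0.10:  # illustrative escalation threshold
    print(f"Drift {drift:.2f} exceeds threshold: trigger reassessment")
```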
Compliance monitoring. Verify that the vendor continues to meet its contractual obligations regarding AI transparency, bias testing, data handling, and incident notification. Review the vendor’s published responsible AI reports, audit results, and certification status.
Incident Response
When a procured AI system produces problematic outputs — biased decisions, inaccurate results, privacy violations, or harmful content — the incident response process activates:
- Contain. Determine whether the AI feature should be disabled or restricted while the incident is investigated. The kill switch capability designed in the Model stage enables rapid containment.
- Investigate. Determine the scope and root cause of the incident. Was it a one-time anomaly or a systemic issue? Was it caused by a model update, a data change, or a pre-existing deficiency?
- Notify. Notify the vendor of the incident and request explanation and remediation. Notify internal stakeholders (legal, compliance, affected business units) as appropriate. Notify regulators if required.
- Remediate. Work with the vendor to remediate the issue. This may involve model rollback, configuration changes, additional bias testing, or enhanced monitoring.
- Document. Document the incident, investigation findings, vendor response, remediation actions, and lessons learned. Update the AI inventory to reflect any changes in the system’s risk profile.
Evaluate: Vendor Performance Assessment
In the standard COMPEL model, Evaluate measures transformation outcomes against defined metrics. For procured AI, Evaluate assesses whether the vendor’s AI is delivering the expected value while remaining within acceptable risk parameters.
Performance Assessment Framework
The Evaluate stage applies a structured assessment framework to each procured AI system. The framework measures:
Value delivery. Is the AI system delivering the business value that justified its procurement? Are users adopting it? Is it improving the processes it was intended to improve? Are the expected efficiency gains, accuracy improvements, or revenue impacts materializing?
Risk performance. Is the AI system operating within acceptable risk parameters? Are bias metrics within tolerance? Are accuracy metrics meeting contractual commitments? Are data handling practices compliant with organizational requirements?
Vendor relationship quality. Is the vendor meeting its transparency obligations? Is the vendor responsive to governance inquiries? Is the vendor proactive about sharing model updates, bias testing results, and incident information?
Governance effectiveness. Is the organization’s governance of the procured AI system effective? Are monitoring processes catching issues? Are incident response processes working? Is the AI inventory accurate and current?
Periodic Vendor Reviews
The Evaluate stage conducts periodic vendor reviews that go beyond traditional vendor business reviews. AI-specific vendor reviews include:
- Review of the vendor’s responsible AI program and any changes since the last review
- Discussion of model updates made during the review period and their impact
- Review of any AI-related incidents and the vendor’s response
- Assessment of the vendor’s compliance with AI-specific contractual terms
- Discussion of upcoming changes to the vendor’s AI capabilities and their governance implications
- Benchmarking the vendor’s AI governance practices against industry standards and evolving regulatory requirements
Escalation and Remediation
When Evaluate identifies significant issues — persistent bias, inadequate transparency, unannounced model changes, or contractual non-compliance — the framework provides a structured escalation path:
- Working-level engagement. Raise the issue with the vendor’s account team and technical support.
- Management escalation. If working-level engagement does not resolve the issue, escalate to the vendor’s management, including the vendor’s responsible AI or governance leadership.
- Contractual remediation. If management escalation does not resolve the issue, invoke contractual remediation provisions — service level credits, corrective action plans, or formal breach notification.
- Strategic reassessment. If contractual remediation does not resolve the issue, reassess the vendor relationship. Consider alternative vendors, alternative configurations, or disabling the problematic AI features.
Learn: Vendor Relationship Optimization
In the standard COMPEL model, Learn captures knowledge and feeds it back into the cycle for continuous improvement. For procured AI, Learn focuses on improving the organization’s vendor governance capabilities and optimizing its vendor relationships.
Knowledge Capture
The Learn stage systematically captures knowledge from the organization’s experience governing procured AI:
Assessment insights. What was learned during vendor assessments? Which assessment questions produced the most useful information? Which vendor responses were indicators of strong or weak AI governance? How should the assessment methodology be refined?
Monitoring insights. What monitoring approaches were most effective at detecting issues? What monitoring gaps were identified through incidents? How should monitoring processes and tools be enhanced?
Incident insights. What was learned from AI incidents? What patterns emerged across incidents? What preventive measures could reduce future incident frequency or severity? How should incident response processes be improved?
Contractual insights. Which contractual provisions were most effective at ensuring vendor accountability? Which provisions were difficult to enforce? What new provisions should be added to address emerging risks?
Vendor Ecosystem Development
The Learn stage also focuses on improving the broader vendor ecosystem’s AI governance maturity:
Vendor feedback. Provide structured feedback to vendors on their AI governance practices. Share specific, actionable observations about what the vendor does well and where improvement is needed. This feedback benefits the organization (by improving vendor practices) and the vendor (by providing market signal about governance expectations).
Industry engagement. Participate in industry forums, working groups, and standards bodies that address AI supply chain governance. Share lessons learned (appropriately anonymized) with peers. Contribute to the development of industry standards for AI vendor assessment, AI-BOM documentation, and AI supply chain transparency.
Internal capability building. Use the knowledge captured through the COMPEL cycle to build internal expertise in procured AI governance. Develop training programs, assessment tools, and governance playbooks that institutionalize the organization’s learning.
Cycle Iteration
The Learn stage feeds directly back into Calibrate, initiating the next cycle of the COMPEL process. Each cycle should produce measurable improvement in the organization’s procured AI governance maturity:
- A more comprehensive AI inventory (discovered through improved shadow AI detection)
- More effective vendor assessments (refined through assessment insights)
- Better monitoring coverage (expanded through monitoring insights)
- Stronger contractual protections (improved through contractual insights)
- More efficient governance processes (optimized through operational experience)
The COMPEL cycle is continuous. The AI vendor landscape evolves constantly — new vendors, new AI capabilities, new regulatory requirements, new risk categories. The organization’s procured AI governance must evolve at least as fast as the risks it manages. The COMPEL cycle provides the structured approach to ensuring this continuous evolution.
Previous in the Domain 20 series: Article 13 — Third-Party AI: The Governance Challenge You Are Not Seeing (Module 1.4)
Next in the Domain 20 series: Article 17 — Shadow AI Discovery and Inventory Methodology (Module 2.6)