COMPEL Certification Body of Knowledge — Module 2.6: Industry Applications and Regulatory Alignment
Article 17 — Domain 20: AI Supply Chain and Third-Party Governance
The Discovery Imperative
You cannot govern what you do not know exists. This principle, fundamental to all governance disciplines, takes on particular urgency in the context of AI. Shadow AI — AI systems operating within the enterprise without the knowledge or approval of the AI governance function — represents the single largest unmanaged risk category in most organizations’ AI portfolios.
Shadow AI is not a fringe phenomenon. Industry surveys consistently indicate that the majority of AI usage within large enterprises occurs outside formal governance oversight. Salesforce’s 2025 enterprise AI adoption research found that 54 percent of employees report using generative AI tools at work, with a substantial portion doing so without organizational approval. Microsoft’s 2025 Work Trend Index revealed that 78 percent of knowledge workers bring their own AI tools to work, with the majority not sanctioned by IT. McKinsey’s State of AI surveys document a persistent gap between the AI that organizations formally manage and the AI that actually operates within their boundaries.
The shadow AI problem is structurally different from the shadow IT problem that preceded it. Shadow IT typically involved discrete applications — a team using Dropbox instead of the corporate file share, a department adopting Slack before it was officially sanctioned. These applications were relatively contained and their risks were relatively bounded. Shadow AI is different because AI is embedded within other tools rather than existing as standalone applications, because AI processes organizational data in ways that create compliance and intellectual property risks, and because AI makes or influences decisions that may have legal, ethical, and reputational consequences.
This article presents a five-step methodology for bringing shadow AI under control: network and technical discovery, SaaS and platform audit, organizational discovery, risk scoring of discovered AI, and a remediation playbook for acting on the results. Together, these steps build and sustain a comprehensive AI inventory.
Step 1: Network and Technical Discovery
Technical discovery uses the enterprise’s existing infrastructure to identify AI usage that may not be visible through organizational channels.
Network Traffic Analysis
Modern AI services have identifiable network signatures. API calls to OpenAI (api.openai.com), Anthropic (api.anthropic.com), Google AI (generativelanguage.googleapis.com), Cohere (api.cohere.com), and other AI providers can be detected through network monitoring tools, firewalls, proxies, and DNS logs.
The analysis should identify:
Direct API access. Employees or applications making direct API calls to AI providers. This indicates custom AI integrations or individual AI tool usage that may not have been formally approved. Look for traffic to known AI API endpoints. Review DNS query logs for AI provider domains. Examine proxy logs for requests to AI service URLs.
Embedded AI traffic. Enterprise SaaS platforms making API calls to AI providers on behalf of the organization. When Salesforce Einstein calls an OpenAI API, the traffic originates from Salesforce’s infrastructure, but the data being processed is the organization’s customer data. This traffic may be visible in SaaS management platform (SMP) logs or through API activity reports provided by the SaaS vendor.
Browser-based AI usage. Employees accessing AI tools through web browsers. Web proxy logs and secure web gateway logs can identify access to AI tool websites: chat.openai.com, claude.ai, gemini.google.com, perplexity.ai, and similar services. This discovery method identifies individual AI tool usage that bypasses the enterprise’s formal procurement processes.
AI plugin and extension usage. Employees installing AI-powered browser extensions, IDE plugins, or application add-ins. Endpoint management tools can detect installed extensions. Application store management can identify approved versus unapproved AI extensions. Look for popular AI extensions: GitHub Copilot, Grammarly, Jasper, Otter.ai, and similar tools.
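The discovery techniques above can be sketched as a simple log-analysis pass. The following is a minimal Python sketch, assuming a simplified log format of "timestamp user_id destination_host" (real proxy and DNS logs from products such as Squid or Zscaler vary and need their own parsers), and an illustrative subset of AI provider domains, not a complete catalog. Note that it aggregates at the service level rather than reporting individual behavior, consistent with the privacy guidance in the next section.

```python
"""Sketch: flag AI-service traffic in web proxy logs, aggregated per service.

Assumptions: a simplified "timestamp user_id destination_host" log format,
and an illustrative (incomplete) list of AI endpoints.
"""
from collections import defaultdict

# Known AI endpoints mapped to a friendly service name (illustrative subset)
AI_DOMAINS = {
    "api.openai.com": "OpenAI API",
    "chat.openai.com": "ChatGPT",
    "api.anthropic.com": "Anthropic API",
    "claude.ai": "Claude",
    "generativelanguage.googleapis.com": "Google AI API",
    "gemini.google.com": "Gemini",
    "api.cohere.com": "Cohere API",
    "perplexity.ai": "Perplexity",
}

def summarize_ai_usage(log_lines):
    """Return {service: distinct user count}, aggregated at the service
    level so that individual employee behavior is not reported."""
    users_by_service = defaultdict(set)
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed lines
        user, host = parts[1], parts[2].lower()
        for domain, service in AI_DOMAINS.items():
            if host == domain or host.endswith("." + domain):
                users_by_service[service].add(user)
    return {svc: len(users) for svc, users in users_by_service.items()}

sample = [
    "2025-06-01T09:14:02 u1001 chat.openai.com",
    "2025-06-01T09:15:10 u1002 chat.openai.com",
    "2025-06-01T09:16:44 u1001 api.anthropic.com",
    "2025-06-01T09:18:03 u1003 intranet.example.com",
]
print(summarize_ai_usage(sample))
# {'ChatGPT': 2, 'Anthropic API': 1}
```

In practice the output of a pass like this feeds the service-level reporting described below (e.g., "142 users accessed ChatGPT in the past 30 days") without exposing any individual user's activity.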
Technical Constraints and Ethical Considerations
Network traffic analysis must be conducted within the organization’s existing monitoring policies and employee privacy obligations. In many jurisdictions, particularly within the EU under GDPR, employee monitoring requires transparency, purpose limitation, and proportionality. The discovery program should:
- Operate within the organization’s existing acceptable use and monitoring policies
- Focus on identifying AI services and traffic patterns, not monitoring individual employee behavior
- Aggregate findings at the service level (e.g., “142 users accessed ChatGPT in the past 30 days”) rather than the individual level unless investigation of a specific compliance concern is warranted
- Engage the legal and HR functions to ensure compliance with applicable monitoring regulations and employment law
SaaS Management Platform Integration
Organizations that use SaaS management platforms (SMPs) such as Zylo, Productiv, Torii, or Zluri have an existing discovery infrastructure that can be extended to AI. These platforms typically:
- Discover SaaS applications through SSO integration, browser extension telemetry, expense report analysis, and financial system integration
- Categorize applications by function, including an AI category
- Track license utilization and user activity
- Identify applications that were adopted outside formal procurement channels
Extend the SMP configuration to:
- Flag all applications categorized as AI or containing AI features
- Create custom categories for AI sub-types: generative AI, analytical AI, automation AI, AI development tools
- Configure alerts when new AI applications are detected
- Generate reports showing AI application adoption trends over time
Step 2: SaaS and Platform Audit
While network discovery identifies AI usage from the infrastructure perspective, the SaaS and platform audit examines AI capabilities from the application perspective.
Enterprise Platform AI Feature Inventory
For each major enterprise platform, conduct a comprehensive audit of AI capabilities:
Microsoft 365. Audit Copilot licensing and activation. Review admin center settings for AI feature enablement. Check Teams AI settings (meeting transcription, intelligent recap, Copilot in chat). Review SharePoint AI settings (content recommendations, search intelligence). Check Power Platform AI Builder configuration. Review Viva Insights AI features. Document which AI features are enabled, which are available but disabled, and which are restricted.
Salesforce. Audit Einstein feature activation across Sales Cloud, Service Cloud, Marketing Cloud, and Commerce Cloud. Review Einstein GPT enablement. Check Data Cloud AI configuration. Review AppExchange for installed AI-powered packages. Document which Einstein features are active and which objects and fields they access.
ServiceNow. Audit Now Intelligence feature activation. Review Virtual Agent configuration and training data. Check Predictive Intelligence settings. Review Performance Analytics AI features. Document which AI features are active and what ITSM, HRSD, or CSM data they process.
SAP. Audit SAP AI Core and SAP AI Launchpad configuration. Review embedded AI features in S/4HANA, SuccessFactors, Ariba, and Concur. Check SAP Business AI feature activation. Document which AI capabilities are active and which business processes they support.
Workday. Audit AI and ML feature activation across HCM, Financial Management, and Adaptive Planning. Review Workday Skills Cloud AI usage. Check Workday Recruiting AI features. Document which AI features are active and what HR and financial data they access.
For each identified AI feature, document:
- Feature name and description
- Activation status (active, available, disabled)
- Date of activation and who activated it
- Data accessed by the feature
- Decisions made or influenced by the feature
- Configuration options and current settings
- Vendor documentation on the AI’s methodology, training data, and limitations
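The documentation checklist above translates naturally into a structured inventory record. The following sketch shows one possible schema; the field names mirror the checklist, but the schema itself is an illustrative assumption, not a prescribed COMPEL format.

```python
"""Sketch: structured record for documenting a discovered AI feature.

The schema is an illustrative assumption mirroring the documentation
checklist, not a prescribed COMPEL format.
"""
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIFeatureRecord:
    feature_name: str
    description: str
    activation_status: str          # "active" | "available" | "disabled"
    activated_on: str = ""          # ISO date, if known
    activated_by: str = ""
    data_accessed: List[str] = field(default_factory=list)
    decisions_influenced: List[str] = field(default_factory=list)
    configuration_notes: str = ""
    vendor_documentation: str = ""  # methodology, training data, limitations

record = AIFeatureRecord(
    feature_name="Einstein Lead Scoring",
    description="Scores Sales Cloud leads by predicted conversion likelihood",
    activation_status="active",
    data_accessed=["Lead", "Opportunity"],
    decisions_influenced=["sales prioritization"],
)
print(record.feature_name, record.activation_status)
```

Capturing features in a consistent structure like this is what makes the later risk scoring and inventory reporting comparable across platforms.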
Departmental AI Procurement Audit
Review departmental budgets and expense reports for AI-specific purchases. AI tools are often procured at the departmental level, using departmental budgets and procurement authority that may not trigger enterprise-level review. Look for:
- Subscriptions to AI platforms (OpenAI, Anthropic, Jasper, Copy.ai, Runway, Midjourney, ElevenLabs)
- AI-powered point solutions (AI writing assistants, AI presentation tools, AI research tools, AI coding assistants)
- AI development tools (model hosting, training platforms, labeling services)
- AI-powered analytics and business intelligence tools
- AI-powered customer service and chatbot platforms
This audit typically requires collaboration with Finance (to identify AI-related spend) and Procurement (to identify AI-related purchase orders and contracts).
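A first pass over expense data can be automated with simple keyword matching. The sketch below assumes expense rows of (department, description, amount); keyword matching over free-text descriptions is crude and surfaces candidates for Finance and Procurement to review, not conclusions. The vendor keywords are drawn from the illustrative list above.

```python
"""Sketch: flag potential AI-related spend in expense or PO descriptions.

Keyword matching is a coarse first pass: it surfaces candidates for
Finance and Procurement review, not conclusions. The keyword list is
an illustrative subset.
"""
AI_SPEND_KEYWORDS = [
    "openai", "anthropic", "jasper", "copy.ai", "runway",
    "midjourney", "elevenlabs", "chatgpt", "copilot",
]

def flag_ai_spend(expense_rows):
    """expense_rows: iterable of (department, description, amount).
    Returns the rows whose description mentions a known AI keyword."""
    flagged = []
    for dept, desc, amount in expense_rows:
        text = desc.lower()
        if any(kw in text for kw in AI_SPEND_KEYWORDS):
            flagged.append((dept, desc, amount))
    return flagged

rows = [
    ("Marketing", "Jasper annual subscription", 4800.00),
    ("Engineering", "GitHub Copilot seats x 25", 2850.00),
    ("Facilities", "Office plant maintenance", 320.00),
]
for row in flag_ai_spend(rows):
    print(row)  # flags the Jasper and Copilot rows
```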
Step 3: Organizational Discovery
Technical and SaaS audit methods identify AI systems; organizational discovery surfaces the usage patterns, motivations, and contexts that technical methods cannot capture.
Structured User Surveys
Design and deploy a survey to a representative sample of employees across all business units, functions, and levels. The survey should:
Identify AI usage. Ask employees what AI tools they use for work, including both approved and unapproved tools. Frame the question neutrally — the goal is discovery, not enforcement. Make clear that the purpose is to understand the organization’s AI landscape, not to punish unauthorized usage.
Understand use cases. For each identified AI tool, ask how the employee uses it. What tasks does it perform? What data does it process? What decisions does it influence? How frequently is it used? How critical is it to the employee’s work?
Capture value and risk perceptions. Ask employees what value the AI tool provides and what concerns they have about it. Employees often have intuitive insights about AI risks — they notice when an AI produces inaccurate outputs, when it seems biased, or when it behaves unpredictably.
Identify unmet needs. Ask what AI capabilities employees wish they had. This information informs the organization’s AI strategy and helps channel demand toward governed AI solutions rather than shadow AI adoption.
Sample survey questions:
- Which AI tools or AI-powered features do you use for work? (Select all that apply: list of known AI tools, plus “Other” with free text)
- For each tool selected: How often do you use it? (Daily / Weekly / Monthly / Rarely)
- For each tool selected: What work tasks do you use it for? (Free text)
- For each tool selected: What types of organizational data do you share with it? (Customer data / Employee data / Financial data / Internal documents / Code / None / Other)
- Do you use any AI tools for work that were not listed above? (Free text)
- What AI capabilities would help you do your work better? (Free text)
Department Interviews
Conduct structured interviews with department heads and team leaders to identify AI usage patterns within their organizations. Focus on:
- AI tools that the department has adopted as standard practice
- AI capabilities embedded in department-specific software
- AI experiments or pilots that the department is conducting
- AI needs that the department has identified but not yet addressed
- Concerns about AI risks within the department’s operations
IT and Development Team Interviews
Interview IT administrators and development teams to identify:
- AI APIs integrated into internal applications
- AI-powered development tools in use (GitHub Copilot, Amazon CodeWhisperer, Tabnine, Cursor)
- AI services running in the organization’s cloud infrastructure
- AI models hosted on organizational infrastructure
- AI libraries and frameworks included in application dependencies
Step 4: Risk Scoring Methodology for Discovered AI
Once the discovery process is complete, the organization will have an inventory of AI systems — both sanctioned and unsanctioned — that must be assessed and prioritized. A structured risk scoring methodology enables consistent, comparable assessment of each discovered AI system.
Risk Scoring Dimensions
Score each discovered AI system across five dimensions, each on a scale of 1 (low risk) to 5 (critical risk):
Decision impact (DI). What is the consequence of the AI’s decisions on people and the organization?
- 1: Informational only (content recommendation, search ranking)
- 2: Operational efficiency (task automation, process optimization)
- 3: Business decisions (lead scoring, demand forecasting, pricing)
- 4: People decisions (hiring screening, performance assessment, credit evaluation)
- 5: Safety or legal decisions (medical diagnosis support, regulatory compliance, criminal justice)
Data sensitivity (DS). How sensitive is the data the AI processes?
- 1: Public data only
- 2: Internal business data (non-confidential)
- 3: Confidential business data (financial, strategic, competitive)
- 4: Personal data (employee PII, customer PII)
- 5: Special category data (health, biometric, racial/ethnic, political, religious)
Population scope (PS). How many people are affected by the AI’s outputs?
- 1: Individual user only
- 2: Team or department
- 3: Business unit or function
- 4: Entire organization
- 5: External stakeholders (customers, public, regulated populations)
Governance maturity (GM, scored inversely: higher scores indicate weaker governance and therefore greater risk). How mature is the governance surrounding the AI?
- 1: Fully governed (comprehensive assessment, contractual terms, monitoring, incident response)
- 2: Substantially governed (assessment completed, some monitoring, basic contractual terms)
- 3: Partially governed (basic assessment, limited contractual terms, no monitoring)
- 4: Minimally governed (acknowledged but not assessed)
- 5: Ungoverned (unknown to governance function prior to discovery)
Regulatory exposure (RE). What regulatory obligations apply to the AI’s context of use?
- 1: No specific AI regulation applicable
- 2: General data protection (GDPR, CCPA) applies
- 3: Sector-specific regulation applies (HIPAA, SOX, PCI-DSS)
- 4: AI-specific regulation applies (EU AI Act limited risk)
- 5: High-risk AI regulation applies (EU AI Act high risk, EEOC employment AI)
Composite Risk Score
Calculate the composite risk score as: CRS = (DI + DS + PS + GM + RE) / 5
This produces a score from 1.0 to 5.0 that enables consistent prioritization:
- 4.0 - 5.0 (Critical): Immediate governance action required. The AI system operates in a high-risk context with inadequate governance. Remediation must begin within 30 days.
- 3.0 - 3.9 (High): Priority governance action required. The AI system presents significant risk that current governance does not adequately address. Remediation must begin within 90 days.
- 2.0 - 2.9 (Medium): Planned governance action required. The AI system presents moderate risk that should be addressed through the normal governance cycle. Remediation should be completed within the next COMPEL cycle.
- 1.0 - 1.9 (Low): Standard governance applies. The AI system presents manageable risk that existing governance processes can address. Include in periodic review cycles.
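The scoring formula and tier bands above can be expressed directly in code. The following sketch follows the article's CRS formula and thresholds; the example system at the end (an unsanctioned hiring-screening tool processing candidate PII) is hypothetical.

```python
"""Sketch: compute the composite risk score (CRS) and remediation tier.

The formula and tier thresholds follow the article; the example
system scored at the end is hypothetical.
"""

def composite_risk_score(di, ds, ps, gm, re_):
    """CRS = (DI + DS + PS + GM + RE) / 5, each dimension scored 1-5."""
    for v in (di, ds, ps, gm, re_):
        if not 1 <= v <= 5:
            raise ValueError("each dimension must be scored 1-5")
    return (di + ds + ps + gm + re_) / 5

def risk_tier(crs):
    if crs >= 4.0:
        return "Critical"   # remediation begins within 30 days
    if crs >= 3.0:
        return "High"       # remediation begins within 90 days
    if crs >= 2.0:
        return "Medium"     # next COMPEL cycle
    return "Low"            # periodic review

# Hypothetical example: an unsanctioned hiring-screening tool processing
# candidate PII (DI=4 people decisions, DS=4 personal data, PS=5 external
# stakeholders, GM=5 ungoverned, RE=5 high-risk employment AI regulation)
crs = composite_risk_score(di=4, ds=4, ps=5, gm=5, re_=5)
print(crs, risk_tier(crs))
# 4.6 Critical
```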
Step 5: Remediation Playbook
The remediation playbook provides structured responses for each risk tier. Not every discovered AI system requires the same response. Some should be sanctioned and governed, some should be replaced with better-governed alternatives, and some should be blocked.
Sanction: Bring Under Governance
For discovered AI systems that provide legitimate business value and can be adequately governed, the remediation path is sanctioning — formally approving the AI system and bringing it under the governance framework.
Sanctioning involves:
- Conducting a formal AI risk assessment using the organization’s standard assessment framework
- Negotiating AI-specific contractual terms with the vendor (or confirming that existing terms are adequate)
- Configuring technical controls (data access restrictions, logging, monitoring)
- Registering the AI system in the AI inventory with complete documentation
- Establishing monitoring baselines and ongoing monitoring processes
- Training users on the AI system’s capabilities, limitations, and governance requirements
- Integrating the AI system into the organization’s incident response framework
Sanctioning is appropriate when:
- The AI system addresses a genuine business need
- The vendor provides adequate transparency about the AI system
- The risk can be managed through governance controls
- The contractual terms (existing or negotiated) provide adequate protection
- The AI system can be monitored effectively
Replace: Migrate to Governed Alternative
For discovered AI systems that provide legitimate business value but cannot be adequately governed — due to vendor opacity, inadequate contractual terms, or unacceptable risk levels — the remediation path is replacement. Identify a governed alternative that meets the same business need with acceptable governance characteristics.
Replacement involves:
- Identifying the business need that the current AI system addresses
- Evaluating alternative AI solutions that meet the business need with better governance characteristics
- Procuring the alternative through the standard AI procurement gate
- Planning and executing the migration from the ungoverned to the governed alternative
- Decommissioning the ungoverned AI system
- Verifying that users have transitioned to the governed alternative
Replacement is appropriate when:
- The business need is real but the current solution is ungovernable
- Governed alternatives exist that meet the business need
- The migration effort is proportionate to the risk reduction achieved
- Users can be effectively transitioned
Block: Prohibit Use
For discovered AI systems that present unacceptable risk — processing highly sensitive data without adequate protection, making high-impact decisions without oversight, operating in violation of regulatory requirements — the remediation path is blocking.
Blocking involves:
- Communicating the prohibition to affected users with a clear explanation of why the AI system is being blocked
- Implementing technical controls to prevent access (URL blocking, API blocking, application restriction)
- Providing alternative tools or processes for the work the blocked AI system was performing
- Verifying that the block is effective and that usage has ceased
- Monitoring for attempts to circumvent the block
Blocking is appropriate when:
- The AI system presents risk that cannot be mitigated through governance controls
- The AI system operates in violation of regulatory requirements
- The vendor is unwilling or unable to provide adequate transparency and accountability
- The potential harm from continued use outweighs the business value
Govern: Enhanced Oversight for Existing Sanctioned AI
For AI systems that are already sanctioned but were found to have governance gaps — inadequate monitoring, incomplete documentation, missing contractual terms — the remediation path is governance enhancement.
This involves:
- Identifying the specific governance gaps
- Designing and implementing the governance enhancements needed to close the gaps
- Updating the AI inventory to reflect the enhanced governance posture
- Scheduling a follow-up assessment to verify that the enhancements are effective
Building the Sustainable Inventory
The discovery process described in this article is not a one-time project. It is the foundation of an ongoing inventory management practice. The AI landscape within the enterprise changes continuously — new AI features are released by existing vendors, new AI tools are adopted by employees, new AI capabilities are integrated by development teams.
A sustainable inventory practice requires:
Continuous discovery infrastructure. The technical discovery mechanisms (network monitoring, SaaS management platform integration, endpoint monitoring) should run continuously, not periodically. New AI usage should trigger automated alerts to the governance function.
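The alerting logic at the heart of continuous discovery is a set difference: services the monitors see minus services the register knows. The sketch below assumes both inputs are sets of service names; in a real pipeline, `discovered` would be fed by the network and SMP monitors and `inventory` by the AI register.

```python
"""Sketch: alert on newly discovered AI services absent from the inventory.

Both input sets are illustrative assumptions; a real pipeline would
feed `discovered` from continuous monitors and `inventory` from the
organization's AI register.
"""

def new_ai_services(discovered, inventory):
    """Return discovered services not yet in the governed inventory."""
    return sorted(set(discovered) - set(inventory))

inventory = {"ChatGPT Enterprise", "GitHub Copilot", "Einstein GPT"}
discovered = {"ChatGPT Enterprise", "GitHub Copilot", "Perplexity", "Otter.ai"}

for service in new_ai_services(discovered, inventory):
    print(f"ALERT: ungoverned AI service detected: {service}")
# emits alerts for Otter.ai and Perplexity
```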
Procurement integration. The AI procurement gate ensures that new AI procurements are assessed and registered in the AI inventory before deployment. This gate catches AI that enters through formal channels.
Periodic sweeps. Quarterly or semi-annual discovery sweeps — combining technical analysis, SaaS audit, and organizational discovery — catch AI that entered through informal channels since the last sweep.
Culture change. Ultimately, the most effective discovery mechanism is a culture in which employees voluntarily report their AI usage. This requires building trust — employees must believe that reporting AI usage leads to sanctioning and support, not punishment. The shadow AI policy should emphasize the path to sanctioning, not the prohibition.
The inventory is the foundation of all subsequent governance activities. Every assessment, monitoring process, incident response plan, and compliance report depends on knowing what AI is operating within the enterprise. Invest in discovery. Invest in inventory management. The governance program cannot succeed without them.
Previous in the Domain 20 series: Article 16 — COMPEL for Procured AI: Adapting the Methodology (Module 2.6)
Next in the Domain 20 series: Article 18 — Vendor AI Due Diligence: The Comprehensive Assessment (Module 2.6)