COMPEL Glossary / foundation-model
Foundation Model
A foundation model is a large pre-trained AI model that serves as a base for multiple downstream applications.
What this means in practice
Instead of building separate models for sentiment analysis, summarization, and question answering, a single foundation model can be adapted for all three through fine-tuning or prompting. This represents a fundamental shift in how AI is built: from custom development for every use case to platform-based adaptation. Foundation models democratize AI by enabling domain experts to build capabilities through prompts rather than code. However, they also create concentration risk -- a small number of providers (OpenAI, Anthropic, Google, Meta) power the majority of enterprise generative AI, raising questions about vendor dependency, data privacy, and strategic autonomy that governance frameworks must address.
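The adaptation pattern described above can be sketched in a few lines: one shared model call, with only the prompt template changing per task. The model call is stubbed here for illustration; in practice it would hit a provider API. All names and templates below are assumptions, not part of COMPEL.

```python
def call_model(prompt: str) -> str:
    """Stub standing in for a foundation-model API call."""
    return f"[model output for: {prompt[:40]}]"

# One template per downstream task; the underlying model is the same.
TASK_TEMPLATES = {
    "sentiment": "Classify the sentiment of this text as positive, negative, or neutral:\n{text}",
    "summarize": "Summarize the following text in one sentence:\n{text}",
    "qa": "Answer the question using only the context.\nContext: {text}\nQuestion: {question}",
}

def run_task(task: str, **fields) -> str:
    # Adaptation happens in the prompt, not in the model weights.
    prompt = TASK_TEMPLATES[task].format(**fields)
    return call_model(prompt)

review = "The onboarding flow was confusing but support resolved it quickly."
print(run_task("sentiment", text=review))
print(run_task("summarize", text=review))
print(run_task("qa", text=review, question="Who resolved the issue?"))
```

Fine-tuning would follow the same shape, but adapt the weights rather than the prompt; the point of the sketch is that all three tasks share one base model.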
Why it matters
Foundation models shift AI delivery from per-use-case model development to adapting one platform for many tasks, lowering the barrier for domain experts who build through prompts rather than code. The trade-off is concentration risk: reliance on a handful of providers raises governance questions of vendor dependency, data privacy, and strategic autonomy that frameworks must address.
How COMPEL uses it
Foundation model capabilities and risks are assessed within the Technology pillar during Calibrate. During Model, foundation model strategy is designed, addressing vendor selection, deployment patterns (API vs. self-hosted), and governance controls for prompt management and data handling. The Governance pillar ensures vendor dependency risks are documented in the risk register, and the Evaluate stage tracks foundation model costs and performance.
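The Model-stage decisions listed above can be captured as a simple structured record. The sketch below is illustrative only: the field names and values are assumptions, not a COMPEL schema.

```python
from dataclasses import dataclass, field

@dataclass
class FoundationModelStrategy:
    """Hypothetical record of Model-stage foundation model decisions."""
    vendor: str                   # vendor selection
    deployment: str               # deployment pattern: "api" or "self-hosted"
    prompt_controls: list[str]    # governance controls for prompt management
    data_handling: list[str]      # governance controls for data handling
    risk_register_ids: list[str]  # Governance pillar: documented dependency risks
    # Evaluate stage: what gets tracked over time.
    tracked_metrics: list[str] = field(
        default_factory=lambda: ["cost", "latency", "output quality"]
    )

strategy = FoundationModelStrategy(
    vendor="example-provider",            # placeholder name
    deployment="api",
    prompt_controls=["prompt versioning", "change review"],
    data_handling=["PII redaction before API calls"],
    risk_register_ids=["RISK-042"],       # placeholder identifier
)
```

Keeping these decisions in one reviewable artifact makes it easier for the Governance pillar to confirm that vendor dependency risks actually reach the risk register.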