COMPEL Glossary / in-context-learning
In-Context Learning
In-context learning is the simplest form of AI agent adaptation, where the model adapts its behavior based on information in its current context window -- the conversation history, task instructions, retrieved documents, and tool outputs -- without changing its underlying model weights.
What this means in practice
When an agent retrieves a customer's previous interactions and personalizes its response, or adjusts its tone after receiving feedback that its first draft was too formal, it is learning in context. In-context learning is ephemeral: when the session ends, the learning disappears. This makes it the least risky form of adaptation, but it still creates governance considerations: context contamination (inaccurate information in context causes inappropriate actions), prompt injection (malicious content in retrieved documents manipulates behavior), and context window limitations (important instructions pushed out as conversations grow long).
Why it matters
In-context learning is the least risky form of AI adaptation because it is ephemeral -- when the session ends, the learning disappears. However, it still creates governance considerations including context contamination from inaccurate retrieved information, prompt injection from malicious content in documents, and context window limitations that can push important instructions out of scope as conversations grow long.
How COMPEL uses it
In-context learning is assessed as part of the Technology pillar during Calibrate when evaluating how AI systems adapt to tasks. During Model, governance requirements for in-context learning are designed, including input validation, context quality controls, and safeguards against prompt injection. The Evaluate stage monitors context-related failures, and the Agent Governance layer in Module 3.4 addresses context management as part of the adaptation governance spectrum.
Related Terms
Other glossary terms mentioned in this entry's definition and context.