Distillation
The training of a smaller "student" model to imitate a larger "teacher" model's behaviour, typically on a dataset of prompts paired with the teacher's outputs.
What this means in practice
Produces a model with lower latency and cost at some loss in quality; commonly used to compress expensive frontier models into smaller models that are practical to deploy in production workloads.
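As an illustration rather than COMPEL guidance, the standard soft-target objective (Hinton et al.) can be sketched in a few lines of PyTorch. The function name, `temperature`, and blending weight `alpha` below are illustrative assumptions, not framework parameters.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend a soft-target term (imitate the teacher) with the usual hard-label loss."""
    # Soften both distributions with the temperature so the teacher's relative
    # preferences across tokens/classes are exposed to the student.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the softened distributions, scaled by T^2 to keep
    # gradient magnitudes comparable across temperatures.
    kd_term = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2
    # Standard cross-entropy against ground-truth labels (when available).
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```

In practice the student is trained on this blended objective over the shared prompt set; a higher temperature softens the teacher distribution and transfers more of its ranking over alternatives, at the cost of a weaker signal on the top prediction.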
Synonyms
knowledge distillation, student-teacher training
See also
- PEFT (parameter-efficient fine-tuning) — A family of fine-tuning techniques — most prominently LoRA, QLoRA, and adapters — that update only a small fraction of model parameters while freezing the rest.
- Quantization (AI cost) — Representation of model weights (and sometimes activations) at lower numerical precision — INT8, INT4, or mixed-precision — to reduce memory footprint and accelerate inference.
- Fine-Tuning — Further training of a pre-trained AI model on a specific dataset to adapt it to a particular task or domain. (A brief sketch of how these related techniques are commonly combined appears below.)
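To make the cross-references concrete, here is a minimal, hedged sketch of a common pattern (often called QLoRA): the base model is loaded with 4-bit quantized weights and small LoRA adapters are attached, so that only a tiny fraction of parameters is fine-tuned. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries; the model name is a placeholder and the hyperparameters are illustrative, not COMPEL recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL_NAME = "your-org/your-base-model"  # placeholder, not a real checkpoint

# Quantization: load the frozen base weights in 4-bit to shrink the memory footprint.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)

# PEFT: attach LoRA adapters to the attention projections; only these train.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Fine-tuning then proceeds as usual, but gradients and optimizer state exist only for the adapter weights, which is what makes the combination cheap enough to run on modest hardware.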