Experiment tracking
The infrastructure and practice of recording artifacts, metrics, parameters, environment, and lineage for every experiment run — enabling later reproduction, comparison across runs, and audit.
What this means in practice
Governance artifact: without tracked runs, published model-card performance claims cannot be evidenced, and regulatory Annex IV documentation is incomplete.
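As an illustration only (COMPEL does not prescribe a tracking tool or record schema), the sketch below writes one JSON record per run, capturing the parameters, metrics, environment, and artifact hashes that a model card or Annex IV file would later cite as evidence. The `record_run` helper, the `runs/` directory, and the example values are hypothetical; in practice a dedicated tracker such as MLflow or Weights & Biases typically plays this role.

```python
# Illustrative only: a minimal per-run record written at run time,
# not a prescribed COMPEL schema.
import hashlib
import json
import platform
import subprocess
import time
import uuid
from pathlib import Path


def _sha256(path: Path) -> str:
    """Content hash that ties an artifact (model weights, dataset) to this run."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def record_run(params: dict, metrics: dict, artifacts: list[Path],
               out_dir: Path = Path("runs")) -> Path:
    """Write one run record covering parameters, metrics, environment, and lineage."""
    try:  # lineage: which code version produced these numbers
        commit = subprocess.run(["git", "rev-parse", "HEAD"],
                                capture_output=True, text=True).stdout.strip() or None
    except OSError:
        commit = None
    record = {
        "run_id": uuid.uuid4().hex,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "params": params,          # hyperparameters and configuration
        "metrics": metrics,        # the numbers a model card would later cite
        "environment": {"python": platform.python_version(), "git_commit": commit},
        "artifacts": {str(p): _sha256(p) for p in artifacts if p.exists()},
    }
    out_dir.mkdir(exist_ok=True)
    out_path = out_dir / f"{record['run_id']}.json"
    out_path.write_text(json.dumps(record, indent=2))
    return out_path


# Example: one tracked training run.
record_run(params={"lr": 3e-4, "epochs": 10},
           metrics={"val_accuracy": 0.912},
           artifacts=[Path("model.pt")])
```

Writing the record at run time, rather than reconstructing it after the fact, is what makes it usable as audit evidence.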
Synonyms
ML experiment tracking, run logging
See also
- Reproducibility — The property that re-running an experiment with the same code, data, and configuration produces the same results within declared tolerance (a minimal tolerance check is sketched after this list).
- Replicability — The property that an independent team reproduces the qualitative conclusions of an experiment using different data, tooling, or implementation.
- Technical documentation (Annex IV) — The mandatory documentation that the provider of a high-risk AI system must draw up before placing it on the market.
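To make the "declared tolerance" in the reproducibility entry concrete, a tracked record also supports a check like the one below. The metric names, values, and tolerance are placeholders, not COMPEL requirements; the tolerance itself should be declared per metric by the team.

```python
import math

# Illustrative: compare a re-run's metrics against the originally tracked record.
declared_tolerance = 0.005  # placeholder; declare per metric in practice

original = {"val_accuracy": 0.912}  # loaded from the tracked run record
rerun = {"val_accuracy": 0.910}     # produced by re-running the same code, data, config

reproduced = all(
    math.isclose(rerun[name], value, abs_tol=declared_tolerance)
    for name, value in original.items()
)
print("reproduced within declared tolerance:", reproduced)
```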