
Experiment tracking

The infrastructure and practice of recording artifacts, metrics, parameters, environment, and lineage for every experiment run — enabling later reproduction, comparison across runs, and audit.

What this means in practice

Governance artifact: without tracked runs, performance claims published in model cards cannot be substantiated, and the technical documentation required by Annex IV is incomplete. A minimal sketch of what "recording a run" can look like follows below.
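The sketch below shows, using only the Python standard library, one way a run record could capture the elements named in the definition: parameters, metrics, environment, and lineage (code commit plus content hashes of artifacts). The run-store layout (a `runs/` directory with one JSON record per run) and all helper names are illustrative assumptions, not COMPEL prescriptions; production setups typically use a dedicated tracker such as MLflow or Weights & Biases instead.

```python
import hashlib
import json
import platform
import subprocess
import sys
import time
from pathlib import Path

RUNS_DIR = Path("runs")  # hypothetical local run store, one JSON file per run


def file_sha256(path: Path) -> str:
    """Content hash, so the exact artifact version is pinned in the record."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def git_commit() -> str:
    """Code lineage: the commit the run was produced from (unknown if no repo)."""
    try:
        return subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"


def track_run(params: dict, metrics: dict, artifacts: list[Path]) -> Path:
    """Write one immutable JSON record per run: params, metrics, environment, lineage."""
    run_id = time.strftime("%Y%m%dT%H%M%S")
    record = {
        "run_id": run_id,
        "params": params,        # hyperparameters / configuration
        "metrics": metrics,      # final evaluation numbers
        "environment": {         # enough to rebuild the run context
            "python": sys.version.split()[0],
            "platform": platform.platform(),
        },
        "lineage": {
            "git_commit": git_commit(),
            "artifacts": {str(p): file_sha256(p) for p in artifacts},
        },
    }
    RUNS_DIR.mkdir(exist_ok=True)
    out = RUNS_DIR / f"{run_id}.json"
    out.write_text(json.dumps(record, indent=2))
    return out


if __name__ == "__main__":
    # Toy usage: pretend a training run produced model.bin.
    Path("model.bin").write_bytes(b"weights")
    print(track_run(
        params={"lr": 3e-4, "epochs": 10},
        metrics={"val_accuracy": 0.93},
        artifacts=[Path("model.bin")],
    ))
```

A record like this is what makes a model-card claim auditable: the metric, the configuration that produced it, and the exact code and artifact versions are all pinned in one place.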

Synonyms

ML experiment tracking, run logging

See also

  • Reproducibility — The property that re-running an experiment with the same code, data, and configuration produces the same results within a declared tolerance.
  • Replicability — The property that an independent team reproduces the qualitative conclusions of an experiment using different data, tooling, or implementation.
  • Technical documentation (Annex IV) — The mandatory documentation that the provider of a high-risk AI system must draw up before placing it on the market.
