COMPEL Glossary / retraining

Retraining

Retraining is the process of updating an AI model by training it on new or additional data to restore or improve its performance after drift, degradation, or changing business requirements.

What this means in practice

Retraining can be triggered on a fixed schedule (weekly, monthly), by detected drift exceeding defined thresholds, or by changes in business requirements. Mature MLOps pipelines automate retraining workflows -- from data preparation through model training, validation, and staged deployment -- with human-in-the-loop validation before production promotion. Retraining governance requires version control (every retrained model is versioned with documented training data), before-and-after evaluation (ensuring retraining improved target performance without regressing on other capabilities), and staged deployment (canary or shadow patterns to validate in production before full rollout).
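Two of the governance requirements above, drift-triggered retraining and before-and-after evaluation with a no-regression guard, can be sketched in a few lines. This is a minimal illustration, not part of COMPEL: the PSI drift metric, the 0.2 threshold, the `ModelVersion` record, and the 2% regression tolerance are all assumptions chosen for the example.

```python
import math
from dataclasses import dataclass

def psi(expected, actual, eps=1e-6):
    """Population stability index between the training-time distribution
    and recent production data (one hypothetical choice of drift metric)."""
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

def should_retrain(drift_score, threshold=0.2):
    """Trigger retraining when detected drift exceeds the defined threshold."""
    return drift_score > threshold

@dataclass
class ModelVersion:
    version: str
    training_data_ref: str  # provenance pointer, kept for audit purposes
    metrics: dict           # evaluation results, e.g. {"accuracy": 0.91}

def validate_promotion(old, new, target="accuracy", guard_metrics=("recall",)):
    """Before-and-after evaluation: the retrained model must improve the
    target metric without regressing on other guarded capabilities
    (here with an assumed 2% tolerance)."""
    improved = new.metrics[target] >= old.metrics[target]
    no_regression = all(new.metrics[m] >= old.metrics[m] * 0.98
                        for m in guard_metrics)
    return improved and no_regression
```

In practice the drift score would come from a monitoring job comparing feature distributions, and a passing `validate_promotion` check would gate the human-in-the-loop review rather than replace it.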

Why it matters

AI models degrade over time through data drift, changing business requirements, and evolving user behavior. Organizations without mature retraining processes operate increasingly unreliable systems that silently deliver worse results. Establishing automated, governed retraining workflows with version control and staged deployment ensures models remain accurate and compliant throughout their operational lifecycle.

How COMPEL uses it

Retraining governance is part of the Process pillar's MLOps maturity (Domain 7), assessed during Calibrate and built during Produce. The Model stage defines retraining triggers, validation requirements, and deployment procedures. The Evaluate stage monitors model performance to detect when retraining is needed, and version control ensures every retrained model is documented with training data provenance for audit purposes.
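The staged-deployment step mentioned above can also be sketched. The canary pattern routes a small, configurable fraction of traffic to the retrained candidate while the stable version serves the rest; the function below is an illustrative sketch (the `canary_fraction` default and the callable-model interface are assumptions, not a COMPEL specification).

```python
import random

def canary_router(stable_model, candidate_model, canary_fraction=0.05, rng=None):
    """Return a prediction function that sends roughly `canary_fraction`
    of requests to the retrained candidate (canary pattern) and the
    remainder to the current stable model."""
    rng = rng or random.Random()

    def predict(x):
        model = candidate_model if rng.random() < canary_fraction else stable_model
        return model(x)

    return predict
```

A shadow deployment differs only in that the candidate scores every request but its outputs are logged for comparison instead of being returned to users.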
