COMPEL Glossary / mlops
MLOps
MLOps (Machine Learning Operations) is the set of practices, tools, and cultural patterns that enable organizations to deploy, monitor, and maintain machine learning models in production reliably and at scale.
What this means in practice
Drawing on DevOps principles, MLOps encompasses the entire ML lifecycle: data pipeline management, model training automation, experiment tracking, model versioning, deployment orchestration, production monitoring, drift detection, and model retirement. Enterprise MLOps requires integration with governance workflows that may not exist in developer-centric MLOps toolchains.
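Several of the lifecycle elements above — model versioning, experiment tracking, approval before deployment, and retirement — can be made concrete with a minimal model-registry record. This is an illustrative sketch only: the field names and `Stage` values are assumptions, not a COMPEL-mandated schema, and real registries (e.g. MLflow's Model Registry) define their own structures.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class Stage(Enum):
    """Lifecycle stages from training through retirement."""
    STAGING = "staging"
    PRODUCTION = "production"
    RETIRED = "retired"


@dataclass
class ModelVersion:
    """One registered model version with lineage and governance metadata."""
    name: str
    version: int
    training_data_ref: str          # pointer to the exact dataset snapshot used
    experiment_id: str              # links back to the tracked training run
    approved_by: Optional[str] = None   # governance sign-off before promotion
    stage: Stage = Stage.STAGING
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def promote(self, approver: str) -> None:
        """Promotion to production requires an explicit approval record."""
        self.approved_by = approver
        self.stage = Stage.PRODUCTION

    def retire(self) -> None:
        """Retirement is part of the lifecycle, not an afterthought."""
        self.stage = Stage.RETIRED
```

The `approved_by` field is the kind of governance extension that enterprise workflows typically add on top of developer-centric registries, which track lineage but not sign-off.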
Why it matters
Without MLOps practices, organizations cannot sustainably operate more than a handful of AI models in production. Model performance degrades over time due to data drift; manual deployment processes create bottlenecks and errors; governance controls applied at training time evaporate in production. MLOps is the operational infrastructure that makes AI governance continuous rather than point-in-time. Organizations with mature MLOps practices report 2-3x faster model deployment cycles and significantly reduced production incidents.
How COMPEL uses it
COMPEL treats MLOps as Domain D7 — one of the five Process domains in the Body of Knowledge. The COMPEL maturity model assesses MLOps capability across five levels from ad hoc model deployment (Level 1) to automated ML pipelines with governance checkpoints integrated into CI/CD (Level 5). COMPEL's Produce stage includes MLOps control configuration as a required production readiness element. The Evaluate stage monitors MLOps metrics as part of the governance scorecard.
Common mistakes
- Adopting MLOps tools without defining the governance workflows they need to support.
- Treating MLOps as purely a data science concern rather than an enterprise engineering and governance discipline.
- Implementing model deployment automation without corresponding monitoring and drift detection.
- Ignoring model retirement — models that are no longer actively maintained still make decisions in production.
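The third mistake above — deployment automation without drift detection — is worth grounding in an example. One widely used drift statistic is the Population Stability Index (PSI), which compares a live feature distribution against the training baseline. The sketch below is a minimal pure-Python implementation; the binning strategy and the rule-of-thumb thresholds in the docstring are conventional, not prescribed by COMPEL.

```python
import math
from typing import List, Sequence


def psi(expected: Sequence[float], actual: Sequence[float],
        bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Bins are derived from the baseline's range; a small epsilon avoids
    log(0) for empty bins. Common rule of thumb: PSI < 0.1 is stable,
    0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    eps = 1e-6

    def fractions(sample: Sequence[float]) -> List[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            idx = max(idx, 0)  # clip live values below the baseline range
            counts[idx] += 1
        return [max(c / len(sample), eps) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In production, a check like this would run on a schedule against each monitored feature, with threshold breaches feeding the same alerting pipeline as infrastructure incidents.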
See also
- Shadow AI — AI tools, models, and AI-enabled applications that employees use within an organization without formal approval from IT, legal, risk management, or governance functions.

References
- COMPEL Framework — COMPEL Domain D7 — MLOps Maturity Rubric (Methodology)
- Google — MLOps: Continuous delivery and automation pipelines in machine learning (Technical Guide)
Frequently asked questions
What is the difference between MLOps and DevOps?
DevOps automates software deployment. MLOps extends DevOps principles to machine learning, adding data pipeline management, experiment tracking, model versioning, and production monitoring for model performance and drift. MLOps must handle the additional complexity of data dependencies, model retraining, and governance checkpoints that do not exist in traditional software.
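One concrete form of the governance checkpoints mentioned above is a deployment gate: a CI/CD step that blocks promotion unless the candidate model meets recorded criteria. A minimal sketch, assuming illustrative metric names and thresholds rather than any standard API:

```python
from typing import Dict, List, Tuple


def deployment_gate(metrics: Dict[str, float],
                    thresholds: Dict[str, float]) -> Tuple[bool, List[str]]:
    """Pre-deployment governance check, run as a pipeline step.

    Each threshold is a minimum the candidate must meet. A missing metric
    fails the gate, since an unmeasured property cannot be approved.
    Returns (passed, reasons_for_failure).
    """
    failures: List[str] = []
    for name, minimum in thresholds.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: not reported")
        elif value < minimum:
            failures.append(f"{name}: {value:.3f} below required {minimum:.3f}")
    return (not failures, failures)
```

Wired into CI/CD, the pipeline fails the build when the gate returns `False`, so governance criteria are enforced on every promotion rather than checked once at training time.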
What MLOps maturity level should organizations target?
For most enterprises, Level 3 (Defined) is a practical first target: standardized deployment processes, automated monitoring, and documented governance integration. Level 5 (Optimizing) — fully automated ML pipelines with governance checkpoints in CI/CD — is appropriate for organizations with large model portfolios and high regulatory exposure.