COMPEL Glossary / precision
Precision
Precision is a model performance metric measuring the proportion of positive predictions that are actually correct -- in other words, when the model says 'yes,' how often is it right? Formally, precision = true positives / (true positives + false positives). High precision means few false positives (false alarms).
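The definition above can be sketched directly from confusion-matrix counts; the counts here are illustrative only:

```python
def precision(tp: int, fp: int) -> float:
    """Precision = true positives / all positive predictions."""
    if tp + fp == 0:
        return 0.0  # no positive predictions made; define precision as 0
    return tp / (tp + fp)

# Of 100 'yes' predictions, 90 were correct (TP) and 10 were false alarms (FP).
print(precision(tp=90, fp=10))  # 0.9
```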
What this means in practice
Precision is particularly important in applications where false positives are costly: a fraud detection system with low precision generates excessive false alerts that overwhelm investigation teams; a spam filter with low precision blocks legitimate emails; a medical screening tool with low precision causes unnecessary anxiety and follow-up procedures. Precision is typically evaluated alongside recall, as improving one often comes at the expense of the other. Understanding this tradeoff helps transformation leaders set appropriate performance thresholds during the COMPEL Model stage.
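The precision-recall tradeoff usually surfaces through the decision threshold: raising it makes the model say 'yes' less often, which tends to raise precision and lower recall. A minimal sketch with made-up scores and labels:

```python
# Hypothetical model scores and true labels (illustrative only).
scores = [0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1,    1,   0,   1,   0,   1,   0,   0]

def precision_recall(threshold):
    """Precision and recall when predicting 'yes' for score >= threshold."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return prec, rec

for t in (0.85, 0.5, 0.25):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

On this toy data the strict threshold (0.85) gives perfect precision but misses half the positives, while the loose threshold (0.25) catches every positive at the cost of many false alarms.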
Why it matters
Precision determines how much an organization can trust positive predictions from its AI systems. When precision is low, every alert must be treated with skepticism, and the cost of chasing false alarms can erode or outweigh the value the system delivers. Setting appropriate precision thresholds based on the business cost of false positives is essential for deploying AI systems that deliver value rather than creating noise.
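One way to ground the threshold decision in business cost is to price out candidate operating points. The cost figures and confusion counts below are hypothetical, purely to show the comparison:

```python
# Hypothetical per-error costs (assumed, not from the glossary):
COST_FP = 50      # analyst time wasted per false alert
COST_FN = 1000    # loss per missed true case

def expected_cost(tp: int, fp: int, fn: int) -> int:
    """Total error cost of an operating point, given its confusion counts."""
    return fp * COST_FP + fn * COST_FN

# Two candidate thresholds, as confusion counts on a validation set:
loose  = expected_cost(tp=95, fp=400, fn=5)    # low precision, high recall
strict = expected_cost(tp=80, fp=40,  fn=20)   # high precision, lower recall
print(loose, strict)
```

Under these assumed costs the high-precision operating point is cheaper overall, but flipping the cost ratio flips the conclusion, which is why COMPEL asks business stakeholders, not just technical teams, to approve the tradeoff.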
How COMPEL uses it
During the Model stage, precision thresholds are defined as part of use case success criteria and performance acceptance criteria. The Evaluate stage measures precision alongside recall and other metrics to ensure holistic model assessment. COMPEL requires that the precision-recall tradeoff be explicitly documented and approved by business stakeholders, not decided unilaterally by technical teams.