A/B Testing
A/B testing is a controlled experiment that compares two versions of an AI model, interface, or process by exposing each version to a different, randomly assigned group of users and measuring which performs better against predefined metrics.
What this means in practice
In AI transformation, A/B testing is critical for making evidence-based decisions about model improvements, user experience changes, and process modifications, rather than relying on intuition or theoretical analysis. For example, an organization might run an A/B test comparing a new fraud detection model against the existing one on a randomly selected subset of live transactions, verifying that the new model actually catches more fraud without increasing the false positive rate.
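In practice, a comparison like this comes down to a statistical test on the outcome rates observed in each group. A minimal sketch in Python, assuming equal-sized traffic slices and hypothetical detection counts (the function, counts, and significance threshold are illustrative, not part of COMPEL):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_a: int, n_a: int,
                          successes_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test for a difference in rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled rate under the null hypothesis that both variants perform equally.
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: fraud cases caught by each model on its traffic slice.
z, p = two_proportion_z_test(successes_a=412, n_a=50_000,   # existing model
                             successes_b=468, n_b=50_000)   # candidate model
print(f"z = {z:.2f}, p = {p:.4f}")  # treat as significant if p < 0.05
```

The same test would be run on the false positive rate as well, since an improvement on one metric can mask a regression on the other.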
Why it matters
Changes to AI models made on intuition rather than evidence frequently waste resources and degrade performance. A/B testing provides the empirical rigor to validate that changes actually deliver measurable improvement before full rollout. For organizations operating AI at scale, even small improvements in model accuracy or user experience can translate into significant revenue or cost impact.
How COMPEL uses it
Within COMPEL, A/B testing is a core practice of the Evaluate stage, supplying the empirical evidence behind the measurement and value realization activities described in Module 2.5. During Produce, teams implement A/B testing infrastructure as part of the Technology pillar's deployment patterns. Results from A/B tests feed directly into the Learn stage's KPI monitoring and continuous improvement processes, ensuring that every model change is justified by data rather than assumption.
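What that infrastructure looks like varies by platform; one common pattern is deterministic hash-based traffic splitting, sketched below (the function name and experiment key are illustrative assumptions, not COMPEL's prescribed design):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   treatment_share: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'."""
    # Salting the hash with the experiment name keeps assignments stable
    # across sessions and independent across concurrent experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

print(assign_variant("user-1234", "fraud-model-v2"))
```

Hashing rather than storing assignments keeps the split stable without a lookup table, which matters when test results must be attributed to the same users over the life of the experiment.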