Quality Assurance (QA)

Quality assurance for AI extends traditional software testing with model-specific validation processes to ensure AI systems meet defined standards for performance, reliability, fairness, and governance compliance.

What this means in practice

AI QA encompasses:

- Unit testing: verification of individual components
- Integration testing: verification of system interconnections
- Model validation: performance against acceptance thresholds
- Fairness testing: bias detection across protected groups
- Adversarial testing: robustness against malicious inputs
- User acceptance testing: validation by end users
- Governance compliance verification: adherence to policies and regulations

AI QA is more complex than traditional software QA because AI systems are probabilistic (outputs vary between runs), data-dependent (performance shifts as the data changes), and potentially opaque (their reasoning may not be transparent). In the COMPEL Produce stage, QA activities are integrated into every sprint alongside development work.
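Two of these activities, model validation against acceptance thresholds and fairness testing across protected groups, can be automated as a release gate. The sketch below is illustrative only: the function name, metric names, and thresholds are assumptions, not part of any COMPEL specification, and the fairness check shown is a simple demographic parity difference.

```python
# Minimal sketch of an automated QA gate for a model release.
# All names and thresholds here are illustrative assumptions.

def validate_model(metrics: dict, group_rates: dict,
                   min_accuracy: float = 0.90,
                   max_parity_gap: float = 0.10) -> list:
    """Return a list of failure messages; an empty list means the gate passes."""
    failures = []

    # Model validation: overall performance against an acceptance threshold.
    if metrics["accuracy"] < min_accuracy:
        failures.append(
            f"accuracy {metrics['accuracy']:.2f} below threshold {min_accuracy:.2f}")

    # Fairness testing: positive-prediction rates must not diverge too far
    # across protected groups (demographic parity difference).
    gap = max(group_rates.values()) - min(group_rates.values())
    if gap > max_parity_gap:
        failures.append(
            f"parity gap {gap:.2f} exceeds allowed {max_parity_gap:.2f}")

    return failures

# Example run: a model that clears the accuracy threshold
# but fails the fairness check (0.62 - 0.44 = 0.18 > 0.10).
report = validate_model(
    metrics={"accuracy": 0.93},
    group_rates={"group_a": 0.62, "group_b": 0.44},
)
```

In a sprint-based setup such as the COMPEL Produce stage, a gate like this would typically run in CI so that every build is checked against the same acceptance and fairness criteria before release.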
