Disparate Impact
Disparate impact occurs when an AI system's decisions disproportionately and negatively affect a particular demographic group, even though the system does not explicitly use protected characteristics such as race, gender, or age as input variables.
What this means in practice
In practice, disparate impact arises because a model can rely on proxy variables: features that correlate with protected attributes even when those attributes are excluded from the input data. Detecting and measuring disparate impact requires comparing outcome rates across demographic groups and establishing whether observed differences exceed legally or ethically acceptable thresholds.
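The comparison of outcome rates described above can be sketched as a simple ratio of per-group selection rates. The sketch below is illustrative, not part of COMPEL itself: the function names, the sample data, and the choice of the 0.8 cutoff (the "four-fifths rule" from US employment-discrimination guidance, one common threshold) are assumptions for the example.

```python
def selection_rates(outcomes):
    """Favorable-outcome rate for each group.

    outcomes: dict mapping group name -> list of 0/1 decisions
              (1 = favorable outcome, e.g. loan approved).
    """
    return {group: sum(decisions) / len(decisions)
            for group, decisions in outcomes.items()}

def disparate_impact_ratios(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    A ratio below 0.8 is a common red flag (the "four-fifths rule");
    the appropriate threshold depends on the legal and ethical context.
    """
    rates = selection_rates(outcomes)
    ref_rate = rates[reference_group]
    return {group: rate / ref_rate for group, rate in rates.items()}

# Hypothetical decisions for two demographic groups
outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 8/10 favorable
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0, 1, 0],  # 4/10 favorable
}
ratios = disparate_impact_ratios(outcomes, reference_group="group_a")
# group_b's ratio is 0.4 / 0.8 = 0.5, well below the 0.8 threshold,
# so this hypothetical system would be flagged for review
```

Note that none of the input features here are protected attributes; the group labels come from a separate audit dataset, which is why measuring disparate impact requires demographic information even when the model itself never sees it.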
Why it matters
Disparate impact represents both a legal liability under anti-discrimination laws and an ethical concern that can damage stakeholder trust. AI systems can produce discriminatory outcomes even without explicitly using protected characteristics, because proxy variables can silently correlate with protected attributes. Organizations that fail to test for disparate impact risk regulatory enforcement, lawsuits, and reputational harm that can undermine entire AI programs.
How COMPEL uses it
Disparate impact analysis is part of the fairness evaluation conducted during the Evaluate stage, where outcome rates are compared across demographic groups against established thresholds. During the Model stage, disparate impact testing is designed into the AI system evaluation plan. The Governance pillar requires documented disparate impact assessments as part of the algorithmic impact assessment framework in Module 3.4.