
Explainability

Explainability is the degree to which an AI system's decision-making process can be understood and communicated to humans.

What this means in practice

Traditional software operates on explicit logic that can be traced line by line, but ML models -- particularly deep learning models -- produce outputs through layered mathematical transformations that resist straightforward interpretation (the "black box" problem). Explainability techniques include feature importance scores (which inputs most influenced the output), attention visualization (which parts of the input the model focused on), and counterfactual explanations (what would need to change for a different outcome).
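The first of these techniques can be sketched concretely. Permutation-based feature importance scores a feature by how much the model's error grows when that feature's values are shuffled across examples, breaking its relationship with the output. The linear "model", weights, feature names, and data below are hypothetical stand-ins for a real trained model, not part of COMPEL itself:

```python
import random

# Hypothetical stand-in for a trained model: a fixed linear scorer.
# In practice you would call your real model's predict function.
WEIGHTS = [0.8, 0.1, 0.5]  # income, age, debt_ratio (illustrative weights)

def predict(row):
    return sum(w * x for w, x in zip(WEIGHTS, row))

def permutation_importance(rows, targets, feature_idx, seed=0):
    """Increase in mean squared error when one feature's column is shuffled."""
    def mse(preds):
        return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

    baseline = mse([predict(r) for r in rows])
    shuffled = [r[feature_idx] for r in rows]
    random.Random(seed).shuffle(shuffled)
    perturbed = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                 for r, v in zip(rows, shuffled)]
    return mse([predict(r) for r in perturbed]) - baseline

# Tiny synthetic dataset; targets equal the model's own outputs so the
# baseline error is zero and any increase is due to the shuffle alone.
rows = [[1.0, 0.2, 0.9], [0.4, 0.8, 0.1], [0.7, 0.5, 0.5], [0.2, 0.9, 0.3]]
targets = [predict(r) for r in rows]

scores = {name: permutation_importance(rows, targets, i)
          for i, name in enumerate(["income", "age", "debt_ratio"])}
```

Because the toy model weights income most heavily, shuffling that column degrades predictions the most, which is the kind of audience-facing evidence ("this loan decision was driven mainly by income") that explainability requirements call for.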

Why it matters

When an AI system denies a loan, recommends a medical treatment, or flags a transaction, the organization must be able to explain why. Regulatory frameworks increasingly require explainability, and individuals affected by AI decisions have growing legal rights to meaningful explanations. Organizations that deploy opaque AI systems face regulatory penalties, legal challenges, and erosion of stakeholder trust.

How COMPEL uses it

Explainability is assessed in COMPEL's Governance pillar and is a key consideration in the ethical review process during the Model stage. During Calibrate, explainability requirements are mapped based on use case risk levels and regulatory obligations. The Produce stage implements explainability techniques appropriate to each model type. The Evaluate stage tests whether explanations are meaningful to their intended audience.
