
Trustworthy AI

Trustworthy AI describes AI systems that are lawful (complying with all applicable regulations), ethical (adhering to moral principles and values), and robust (technically reliable, safe, and secure).

What this means in practice

The concept, articulated in the EU's Ethics Guidelines for Trustworthy AI, encompasses seven key requirements: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability. For organizations, building trustworthy AI is both an ethical obligation and a business imperative as stakeholder trust becomes a competitive differentiator. In COMPEL, trustworthy AI principles are integrated across all four pillars and inform the governance architecture designed during the Model stage.
