This article introduces the foundations of EIA methodology, explains why EIA is essential for responsible AI governance, and provides practitioners with the conceptual framework they need before undertaking a full assessment at more advanced certification levels.
Why Ethical Impact Assessment Matters
The history of technology deployment is littered with examples of systems that worked as designed but caused unanticipated harm. The Dutch childcare benefits scandal, in which an automated fraud detection system wrongly accused thousands of families, destroyed livelihoods not because the algorithm failed technically but because no one systematically asked: Who could this system harm, and how?
Ethical Impact Assessment exists to ask that question rigorously, systematically, and early enough to change the answer.
EIA is distinct from other assessment types in critical ways. A Data Protection Impact Assessment (DPIA) focuses on data privacy rights. A cybersecurity risk assessment evaluates threats to confidentiality, integrity, and availability. A bias audit examines statistical fairness metrics. An EIA encompasses all of these and more — it considers the full spectrum of ethical dimensions: human rights, fairness, transparency, accountability, safety, autonomy, environmental sustainability, and societal impact.
The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) — adopted by 193 Member States — calls for ethical impact assessment as a core governance mechanism. The OECD AI Principles (2019) emphasise the need for proactive assessment of AI risks. The EU AI Act (2024) mandates fundamental rights impact assessments for high-risk AI systems deployed by public authorities. The trend is clear: EIA is evolving from a voluntary best practice to a regulatory requirement.
The Foundations of EIA Thinking
Beyond Technical Risk
Technical risk assessments ask: What could go wrong with the system? Ethical impact assessments ask a fundamentally different question: Who could be harmed by this system, and is that harm justified?
This shift in perspective — from system-centric to human-centric — is the defining characteristic of EIA. It requires practitioners to step outside the engineering mindset and consider the AI system from the perspective of the people it affects, particularly those who have the least power to influence its design and deployment.
Consider a predictive policing system. A technical risk assessment might evaluate model accuracy, data quality, and system availability. An EIA would ask: Which communities will experience increased police presence as a result of this system? Are those communities disproportionately from minority backgrounds? What is the historical relationship between those communities and law enforcement? Will the system amplify existing patterns of over-policing? Do the affected communities have any voice in whether or how the system is deployed?
These are not technical questions. They are ethical, social, and political questions. But they are questions that must be answered before the system is deployed, not after harm has occurred.
Proportionality: Scaling Assessment to Risk
Not every AI system requires the same depth of ethical scrutiny. A spelling correction algorithm and an autonomous weapons system clearly warrant different levels of assessment. The principle of proportionality ensures that the assessment effort matches the ethical risk profile of the system.
Proportionality operates on three levels:
Minimal assessment applies to AI systems with low consequence, narrow scope, and no processing of sensitive data. A brief scoping exercise and documentation of key ethical considerations are sufficient.
Standard assessment applies to systems that process personal data, influence decisions about individuals, or operate in regulated sectors. The full EIA process — scoping, community identification, impact mapping, stakeholder consultation, mitigation design, documentation, and monitoring — is required.
Comprehensive assessment applies to high-risk systems: those classified as high-risk under the EU AI Act, those making automated decisions with significant legal effects, those deployed at scale in critical domains, or those affecting vulnerable populations. Comprehensive assessment adds independent external review, public consultation, and ongoing monitoring programmes.
The proportionality determination is itself a governance decision that should be documented and reviewable. Getting it wrong — under-assessing a high-risk system or over-assessing a minimal-risk one — undermines the credibility and efficiency of the entire governance programme.
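For illustration, the three tiers can be expressed as a simple triage rule. The sketch below is a minimal Python example; the attribute names (`processes_personal_data`, `eu_ai_act_high_risk`, and so on) are invented for this sketch, and in practice the tier determination is a documented governance judgment, not an automated computation.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical attributes a triage questionnaire might capture."""
    processes_personal_data: bool
    influences_individual_decisions: bool
    regulated_sector: bool
    eu_ai_act_high_risk: bool
    significant_legal_effects: bool
    critical_domain_at_scale: bool
    affects_vulnerable_populations: bool

def assessment_tier(p: SystemProfile) -> str:
    """Map a system profile to a tier, mirroring the three levels above."""
    # Comprehensive: any high-risk indicator triggers the deepest tier.
    if (p.eu_ai_act_high_risk or p.significant_legal_effects
            or p.critical_domain_at_scale or p.affects_vulnerable_populations):
        return "comprehensive"
    # Standard: personal data, decisions about individuals, regulated sectors.
    if (p.processes_personal_data or p.influences_individual_decisions
            or p.regulated_sector):
        return "standard"
    # Minimal: low consequence, narrow scope, no sensitive data.
    return "minimal"

# Example: a spell-checker profile resolves to the minimal tier.
spell_checker = SystemProfile(False, False, False, False, False, False, False)
assert assessment_tier(spell_checker) == "minimal"
```

Whatever form the rule takes, the answers that produced the tier should be recorded alongside the decision so that the determination remains reviewable.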
The Eight-Step Process
The EIA methodology, aligned with the UNESCO Recommendation and the IEEE 7010 standard for Wellbeing Impact Assessment, follows an eight-step process:
1. Define Scope and Context — Establish what the AI system does, for whom, and under what regulatory constraints.
2. Identify Affected Communities — Map every group that may experience positive or negative effects, with particular attention to vulnerable and marginalised populations.
3. Map Ethical Impacts — Systematically identify potential impacts across ethical dimensions: human rights, fairness, transparency, safety, accountability, privacy, and environmental sustainability.
4. Assess Proportionality and Necessity — Evaluate whether the AI system is a proportionate response to the problem, whether less intrusive alternatives exist, and whether the benefits justify the risks.
5. Conduct Stakeholder Consultation — Engage affected communities in genuine dialogue about the identified impacts and proposed mitigations. This is not a notification exercise — it must have the power to change system design.
6. Evaluate Alternatives and Mitigations — Design and evaluate specific mitigation measures for each negative impact. Where mitigations are insufficient, evaluate system redesign or non-deployment.
7. Document and Publish Findings — Compile the assessment into a transparent, accessible report linked to the decision record.
8. Monitor, Review, and Iterate — Establish ongoing monitoring and define triggers for re-assessment.
At the foundations level, practitioners need to understand the purpose and flow of this process. Detailed guidance on executing each step is provided at the practitioner level (Module 2.3).
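To show how the flow can be tracked in practice, the sketch below represents the eight steps as a simple checklist structure linked to evidence. This is an illustrative data model, not a prescribed schema; the field and function names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EIAStep:
    number: int
    name: str
    status: str = "not_started"        # not_started | in_progress | complete
    evidence: list[str] = field(default_factory=list)  # links to documents

EIA_PROCESS = [
    EIAStep(1, "Define Scope and Context"),
    EIAStep(2, "Identify Affected Communities"),
    EIAStep(3, "Map Ethical Impacts"),
    EIAStep(4, "Assess Proportionality and Necessity"),
    EIAStep(5, "Conduct Stakeholder Consultation"),
    EIAStep(6, "Evaluate Alternatives and Mitigations"),
    EIAStep(7, "Document and Publish Findings"),
    EIAStep(8, "Monitor, Review, and Iterate"),
]

def incomplete_steps(process: list[EIAStep]) -> list[str]:
    """Return the names of steps not yet marked complete."""
    return [s.name for s in process if s.status != "complete"]
```

The point of such a structure is the evidence link: each step's outputs should be traceable to the documents that support them and to the decision record.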
Ethical Dimensions for Assessment
The EIA process evaluates impacts across multiple ethical dimensions. At the foundations level, practitioners should understand what each dimension covers:
Human Rights. Does the system affect fundamental rights such as privacy, freedom of expression, non-discrimination, due process, or the right to an effective remedy? The Universal Declaration of Human Rights and regional human rights instruments provide the reference framework.
Fairness. Does the system produce outcomes that are systematically different for groups defined by protected characteristics? Fairness is not a single metric — multiple definitions exist (demographic parity, equalized odds, calibration, individual fairness), and choosing among them is a value judgment, not a technical one.
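To make the metric point concrete, the following sketch computes two of the named definitions, demographic parity difference and an equalized odds gap, on toy NumPy arrays. The data and variable names are invented for illustration.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def equalized_odds_gap(y_true, y_pred, group):
    """Max gap in true-positive and false-positive rates across groups."""
    gaps = []
    for label in (1, 0):  # TPR gap for label 1, FPR gap for label 0
        mask = y_true == label
        r_a = y_pred[mask & (group == 0)].mean()
        r_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(r_a - r_b))
    return max(gaps)

# Toy data: predictions, ground truth, and a binary group attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))    # 0.0
print(equalized_odds_gap(y_true, y_pred, group)) # ~0.33
```

Note that on this toy data the two definitions disagree: demographic parity holds exactly while the equalized odds gap is one third. That divergence is precisely why choosing a fairness definition is a value judgment rather than a technical default.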
Transparency. Can affected individuals understand that AI is involved in decisions about them, what the system does, and how to challenge its outputs? Transparency operates at multiple levels: existence (knowing AI is used), logic (understanding how it works), and recourse (knowing how to contest decisions).
Safety. Could the system cause physical or psychological harm? Safety assessment considers both normal operation and failure modes, including adversarial attacks, distribution shift, and edge cases not represented in training data.
Accountability. Is there a clear chain of responsibility for the system’s ethical performance? Can an individual, team, or governance body be held accountable when things go wrong?
Privacy. Does the system process personal data appropriately, with adequate legal basis, purpose limitation, and data minimisation? Does it infer sensitive information that individuals did not knowingly disclose?
Environmental Sustainability. What is the environmental cost of developing and operating the system — energy consumption, carbon emissions, water usage, electronic waste? Is that cost proportionate to the value delivered?
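The environmental dimension often reduces to simple arithmetic. A rough estimate, assuming the widely used formula of hardware power times hours times datacentre PUE, multiplied by grid carbon intensity, might look like the sketch below; every figure is a placeholder, not a measurement.

```python
def training_carbon_kg(gpu_count: int, gpu_power_kw: float,
                       hours: float, pue: float,
                       grid_kgco2_per_kwh: float) -> float:
    """Rough CO2e estimate for a training run:
    energy (kWh) = GPUs x power x hours x datacentre PUE,
    emissions (kg) = energy x grid carbon intensity."""
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kgco2_per_kwh

# Illustrative placeholder figures, not measurements:
# 64 GPUs at 0.4 kW each, 200 hours, PUE 1.2, 0.3 kg CO2e/kWh grid.
print(training_carbon_kg(64, 0.4, 200, 1.2, 0.3))  # ~1843 kg CO2e
```

Even a rough figure of this kind gives the proportionality question above something concrete to weigh against the value the system delivers.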
Common Pitfalls in EIA Practice
Even organisations that commit to EIA can undermine its effectiveness through common pitfalls:
Assessment as Rubber-Stamping. If the EIA is conducted after all design decisions are made and deployment is already scheduled, it becomes a compliance exercise rather than a genuine evaluation. EIA must begin early enough to change the system.
Consultation as Notification. Sending a survey to stakeholders is not consultation. Genuine consultation involves two-way dialogue, accessible information, adequate time, and — critically — the real possibility that stakeholder input will change the outcome.
Scope Too Narrow. Focusing only on the direct users of the AI system and ignoring indirect effects, cascading impacts, and systemically affected communities produces an incomplete assessment.
Ethics Washing. Publishing an impressive-looking EIA report while ignoring its findings is worse than not conducting one at all. If the organisation is not prepared to act on the assessment’s conclusions — including the possibility that the system should not be deployed — the EIA is performative.
One-Time Assessment. An EIA conducted at deployment and never revisited becomes stale as the system, its usage patterns, and its environment change. Monitoring and periodic re-assessment are essential.
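One way to avoid the one-time-assessment pitfall is to encode explicit re-assessment triggers and check them during routine monitoring. In the sketch below, the signal names and thresholds are hypothetical; real triggers would be defined in the EIA itself (step 8 of the process above).

```python
# Hypothetical monitoring signals and re-assessment triggers.
TRIGGERS = {
    "fairness_gap": 0.10,        # demographic parity diff exceeds 10 points
    "input_drift_score": 0.25,   # distribution-shift statistic
    "complaint_rate": 0.02,      # complaints per decision
}

def reassessment_due(signals: dict[str, float],
                     months_since_review: int,
                     review_interval_months: int = 12) -> list[str]:
    """Return the reasons, if any, that a re-assessment is due."""
    reasons = [name for name, limit in TRIGGERS.items()
               if signals.get(name, 0.0) > limit]
    if months_since_review >= review_interval_months:
        reasons.append("scheduled periodic review")
    return reasons

print(reassessment_due({"fairness_gap": 0.14}, months_since_review=6))
# ['fairness_gap']
```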
The Relationship Between EIA and Other Assessments
EIA does not replace other assessment types — it integrates with them:
- DPIA/PIA (Data Protection Impact Assessment) focuses specifically on data privacy risks and is mandated by GDPR Article 35. The EIA’s privacy dimension draws on DPIA findings but considers broader privacy implications.
- Algorithmic Impact Assessment (AIA) focuses on the impact of automated decision-making systems. Canada’s Directive on Automated Decision-Making mandates AIAs for federal government AI. The EIA encompasses AIA scope but extends to non-decision-making systems and non-algorithmic ethical dimensions.
- Fundamental Rights Impact Assessment (FRIA) is required by the EU AI Act for high-risk systems deployed by public authorities. The FRIA is essentially the human rights dimension of a full EIA.
- Bias Audit is a narrower, technically focused assessment of statistical fairness metrics. It informs the fairness dimension of the EIA but does not address qualitative fairness concerns, structural discrimination, or fairness definition choices.
A mature governance programme integrates these assessment types so that evidence gathered for one informs others, reducing duplication while ensuring comprehensive coverage.
From Principles to Practice
The journey from ethical principles to operational ethical governance runs through Ethical Impact Assessment. Principles tell us what we value. EIA tells us whether our AI systems are consistent with those values — and what to change when they are not.
At the foundations level, the key takeaways are:
- EIA is a structured, evidence-based process, not a subjective opinion exercise
- It must be proportionate to the risk profile of the AI system
- It must start early enough to influence design decisions
- It must centre the perspectives of affected communities, not just the deploying organisation
- It must be documented, transparent, and linked to decision-making authority
- It must be living — monitored, reviewed, and updated throughout the system’s lifecycle
Subsequent articles in this series will deepen each of these themes. Module 2.3 provides detailed practitioner guidance on executing each step of the EIA process. Module 3.5 introduces advanced fairness metrics, ethics pre-mortem analysis, and ethics incident learning systems. Module 4.4 addresses the strategic governance of ethics at the enterprise level.
This article is part of the COMPEL Body of Knowledge v2.5 and supports the AI Transformation Foundations (AITF) certification.