Preparation: Before You Begin
An EIA is only as good as its organisational mandate. Before initiating the assessment, practitioners must secure three things:
Executive sponsorship. The EIA must be authorised by a senior leader with the authority to act on its findings — including the authority to delay or block deployment if the assessment identifies unacceptable risks. An EIA without organisational power behind it is an exercise in documentation, not governance.
Cross-functional team. No single discipline can conduct an EIA alone. The assessment team should include: a governance or ethics lead (facilitator), a technical representative who understands the system’s architecture and data pipeline, a legal or compliance representative, a representative of the business function deploying the system, and — critically — an external perspective (community representative, domain expert, or independent ethics advisor).
Timing alignment. The EIA must be initiated early enough in the development lifecycle that its findings can influence design decisions. An EIA started the week before a planned launch is performative. Best practice is to begin the scoping step at the same time as the system’s design phase.
Step 1: Define Scope and Context (Weeks 1–2)
The scoping step establishes the boundaries of the assessment. It answers: What are we assessing, why, and under what constraints?
Practical activities:
Gather the system’s design documentation, business case, data inventory, and any prior assessments (DPIA, security risk assessment, model evaluation reports). If these documents do not exist, their absence is itself a finding.
Conduct a scoping workshop (2–4 hours) with the assessment team. Walk through the system’s intended operation from input to output, identifying: what data the system consumes, what decisions or actions it influences, who uses it, who is affected by it, and what the consequences of errors are.
Document the regulatory landscape. Which AI-specific regulations apply? Which data protection laws? Which sector-specific rules? Use the COMPEL jurisdiction governance profiles to identify applicable requirements.
Produce a preliminary risk tier classification using the proportionality criteria: is this a minimal, standard, or comprehensive assessment? Document the rationale.
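Where the organisation's tooling supports it, the tier decision can be captured as a small, auditable rule. The sketch below is illustrative only: the criteria names and thresholds are assumptions, not the COMPEL proportionality criteria themselves.

```python
from dataclasses import dataclass

# Illustrative criteria only; substitute the organisation's actual
# proportionality criteria and thresholds.
@dataclass
class ScopingProfile:
    affects_fundamental_rights: bool   # e.g. hiring, credit, benefits
    fully_automated_decisions: bool    # no human in the loop
    vulnerable_groups_affected: bool
    processes_personal_data: bool

def classify_risk_tier(p: ScopingProfile) -> str:
    """Map a scoping profile to an assessment tier."""
    if p.affects_fundamental_rights or (
        p.fully_automated_decisions and p.vulnerable_groups_affected
    ):
        return "comprehensive"
    if p.processes_personal_data or p.fully_automated_decisions:
        return "standard"
    return "minimal"

tier = classify_risk_tier(ScopingProfile(True, False, True, True))
print(tier)  # comprehensive
```

Whatever rule is used, the rationale for the tier belongs in the scoping statement, not only in code.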
Common pitfalls at this step: Scope creep (trying to assess the entire AI programme rather than a specific system), under-scoping (limiting the assessment to the technical model without considering the socio-technical system), and failure to identify the regulatory landscape before proceeding.
Output: A scoping statement (2–5 pages) with system boundaries, context register, regulatory applicability matrix, and preliminary risk tier classification.
Step 2: Identify Affected Communities (Weeks 2–5)
This is the step most often done poorly — because it requires looking beyond the obvious users to find the people who bear the system’s risks without choosing to interact with it.
Practical activities:
Start with the direct users: who interacts with the system through its interface? Then trace outward: who is the subject of the system’s decisions or recommendations, even if they never see the interface? Then further: who experiences secondary effects — labour market impacts, environmental effects, shifts in social norms?
Apply the affected community categories framework to ensure comprehensive coverage: direct users, decision subjects, indirectly affected populations, vulnerable and marginalised groups, domain experts, civil society, and regulatory bodies.
For each identified community, assess: what is their relationship to the system (benefit, risk, or both)? What power do they have to influence how the system affects them? What barriers exist to their participation in the assessment process?
Conduct a vulnerability assessment. Are any affected groups characterised by historical marginalisation, reduced digital literacy, economic precarity, age-related vulnerability, disability, or language barriers? These groups require proactive outreach and adapted engagement methods.
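A register entry needs a consistent shape so that entries can be compared and audited across assessments. One possible shape, with illustrative field names rather than a prescribed COMPEL schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Category(Enum):
    DIRECT = "direct"
    INDIRECT = "indirect"
    MARGINALISED = "marginalised"

@dataclass
class CommunityEntry:
    name: str
    category: Category
    relationship: str            # "benefit", "risk", or "both"
    influence: str               # power to shape how the system affects them
    participation_barriers: list[str] = field(default_factory=list)
    vulnerability_factors: list[str] = field(default_factory=list)

    @property
    def needs_proactive_outreach(self) -> bool:
        # Any vulnerability factor triggers adapted engagement methods.
        return bool(self.vulnerability_factors)

entry = CommunityEntry(
    name="Applicants screened by the system",
    category=Category.INDIRECT,
    relationship="risk",
    influence="low: cannot opt out or appeal directly",
    participation_barriers=["never see the interface"],
    vulnerability_factors=["economic precarity"],
)
print(entry.needs_proactive_outreach)  # True
```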
Output: An affected community register categorising each group as direct, indirect, or marginalised, with power-asymmetry analysis and vulnerability assessment.
Step 3: Map Ethical Impacts (Weeks 4–7)
With the scope defined and communities identified, systematically map the potential ethical impacts across the UNESCO ethical dimensions.
Practical activities:
For each affected community identified in Step 2, and for each ethical dimension (human rights, fairness, transparency, safety, accountability, privacy, environmental sustainability), ask: What could go right? What could go wrong? Who benefits? Who bears the risk?
Use impact identification techniques: structured brainstorming with the cross-functional team, domain expert interviews, analogous system analysis (what happened when similar systems were deployed elsewhere?), and red-teaming (deliberately attempting to identify harmful use cases or failure modes).
Map cascading impacts. A hiring AI that disadvantages women does not merely cause a fairness harm — it cascades into economic harm (lower earnings), representational harm (reinforcing workplace gender imbalance), and autonomy harm (narrowing women’s career options). Map these chains.
Produce an impact heat map visualising severity versus likelihood for each identified impact. This helps prioritise assessment effort and communicates risk to stakeholders.
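The heat map reduces each impact to a position on a severity-by-likelihood grid. A minimal scoring sketch, assuming 1-5 scales and band cut-offs that the assessment team would calibrate for itself:

```python
def risk_band(severity: int, likelihood: int) -> str:
    """Bucket an impact on a 5x5 severity/likelihood grid."""
    score = severity * likelihood          # both on a 1-5 scale
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Hypothetical impacts from a hiring-system assessment.
impacts = {
    "gender disparity in shortlisting": (5, 3),
    "opaque rejection reasons": (3, 4),
    "energy use of retraining": (2, 2),
}
for name, (sev, lik) in sorted(
    impacts.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    print(f"{risk_band(sev, lik):8} {name}")
```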
Output: Impact inventory (structured list of potential impacts per ethical dimension and community), impact heat map, and cascading impact chains.
Step 4: Assess Proportionality and Necessity (Weeks 6–8)
This step asks the most fundamental question: Should this AI system exist in its current form?
Practical activities:
Apply the proportionality principle from human rights law: Is the AI system a proportionate response to the problem it addresses? Could the same outcome be achieved with less intrusive means (simpler technology, human decision-making, rule-based system)?
Conduct an alternatives analysis. Document at least three alternatives to the proposed AI system, including the status quo (no AI), a simpler technological approach, and a human-centred approach. Compare their effectiveness, cost, risk profile, and ethical impact; a scoring sketch follows this list.
Assess data minimisation. Is the system collecting and processing the minimum data necessary? Could the same functionality be achieved with less personal data, aggregated data, or synthetic data?
Evaluate the benefit distribution. Do the people who benefit from the system overlap with the people who bear its risks? If the organisation captures the value while a community bears the risk, the proportionality analysis must account for this asymmetry.
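The comparison called for in the alternatives analysis can be structured as a scored matrix. The criteria weights and scores below are purely illustrative and would be populated by the cross-functional team:

```python
# Purely illustrative weights and scores on a 1-5 scale (higher is better).
criteria_weights = {"effectiveness": 0.4, "cost": 0.2,
                    "risk": 0.2, "ethical_impact": 0.2}

alternatives = {
    "status quo (no AI)": {"effectiveness": 2, "cost": 4,
                           "risk": 5, "ethical_impact": 4},
    "rule-based system":  {"effectiveness": 3, "cost": 3,
                           "risk": 4, "ethical_impact": 4},
    "proposed ML system": {"effectiveness": 5, "cost": 2,
                           "risk": 2, "ethical_impact": 2},
}

for name, scores in alternatives.items():
    total = sum(criteria_weights[c] * s for c, s in scores.items())
    print(f"{name:20} weighted score: {total:.1f}")
```

A weighted total is a discussion aid, not a decision rule: an alternative that crosses a rights-based red line should be excluded whatever its score.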
Output: Proportionality assessment report, alternatives analysis matrix, necessity justification, and data minimisation audit.
Step 5: Conduct Stakeholder Consultation (Weeks 7–13)
Stakeholder consultation is the ethical heart of the EIA. It is where the organisation subjects its analysis to the scrutiny of those who are actually affected. This is not a survey — it is genuine engagement with the capacity to change the system.
Practical activities:
Select consultation methods appropriate to each stakeholder group. Use the COMPEL consultation methods catalogue: deliberative dialogue for complex trade-offs, participatory design workshops for co-design opportunities, structured interviews for sensitive contexts, public comment for broad reach, and community advisory panels for ongoing engagement.
Prepare accessible information materials. Technical documentation is not accessible to most stakeholders. Produce plain-language summaries of the system, the identified impacts, and the proposed mitigations — in the languages spoken by affected communities.
Conduct the consultation. Ensure facilitators are culturally competent and independent of the project team. Compensate participants for their time, particularly marginalised groups for whom participation has an opportunity cost. Document not just what stakeholders said, but whether they felt the process was fair and whether they believe their input will be acted upon.
Track how stakeholder input changes the system. Maintain a design change log that traces every modification to stakeholder feedback. Where stakeholder recommendations are not adopted, document the reasoning and communicate it back to stakeholders.
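Because the change log must trace every modification back to its source, it benefits from a fixed record shape. One illustrative possibility (field names are assumptions, not a COMPEL template):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DesignChange:
    change_id: str
    description: str
    source_feedback: str        # reference to the consultation record
    adopted: bool
    rationale: str              # required especially when adopted is False
    communicated_back: bool     # was the outcome reported to stakeholders?
    decided_on: date

log = [
    DesignChange(
        change_id="DC-014",
        description="Add human review before any automated rejection",
        source_feedback="Advisory panel session 3, concern C-07",
        adopted=True,
        rationale="Directly mitigates autonomy and fairness impacts",
        communicated_back=True,
        decided_on=date(2025, 3, 12),
    ),
]
# Rejected recommendations that were never communicated back are a process gap.
gaps = [c for c in log if not c.adopted and not c.communicated_back]
print(len(gaps))  # 0
```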
Output: Consultation report with themes, concerns, and recommendations; design change log; dissent register; participant feedback on process quality.
Step 6: Design Mitigations and Make a Go/No-Go Recommendation (Weeks 12–16)
For each negative impact identified and confirmed through consultation, design specific mitigations.
Practical activities:
For each impact, document at least one proposed mitigation. For each mitigation, assess: What evidence exists that it will be effective? What is the cost? Who is responsible for implementing it? How will its effectiveness be measured?
Produce a residual risk register. Some impacts cannot be fully mitigated. The residual risk register documents these remaining risks and the rationale for accepting them. Critical and high-severity residual risks should trigger escalation to senior governance.
Make a go/no-go recommendation. Based on the full assessment — impacts, consultation feedback, mitigations, and residual risks — the assessment team recommends: proceed as designed, proceed with conditions (specific mitigations, monitoring requirements, or scope limitations), redesign required, or do not deploy.
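The escalation rule for residual risks is mechanical enough to encode. In this sketch the severity labels and the escalation threshold are assumptions to be set by governance policy:

```python
from dataclasses import dataclass

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

@dataclass
class ResidualRisk:
    impact: str
    severity: str                # one of SEVERITY_ORDER
    acceptance_rationale: str

def requires_escalation(risk: ResidualRisk) -> bool:
    # High and critical residual risks go to senior governance.
    return SEVERITY_ORDER.index(risk.severity) >= SEVERITY_ORDER.index("high")

register = [
    ResidualRisk("subgroup error-rate gap after reweighting",
                 "high", "cannot be fully closed with available data"),
    ResidualRisk("explanation latency for appeals",
                 "low", "acceptable given manual fallback"),
]
escalate = [r.impact for r in register if requires_escalation(r)]
print(escalate)  # ['subgroup error-rate gap after reweighting']
```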
Output: Mitigation plan with owners and timelines, residual risk register, go/no-go recommendation.
Step 7: Document and Publish (Weeks 15–17)
Compile the assessment into formats that serve its multiple audiences.
Practical activities:
Produce the full EIA report (internal, detailed). This is the primary reference document, containing all evidence, analysis, and decisions.
Produce a public summary (for high-risk systems or where transparency is expected). This should be accessible to non-technical audiences and honest about residual risks.
Produce an executive summary for the governance committee or board.
Link the EIA to the decision record — the formal organisational decision to deploy, conditionally deploy, redesign, or not deploy the system.
Output: Full EIA report, public summary, executive summary, linked decision record.
Step 8: Monitor, Review, and Iterate (Ongoing)
An EIA is not complete when the report is published. Ongoing monitoring ensures that the assessment remains valid as the system operates in the real world.
Practical activities:
Define monitoring metrics that correspond to the ethical impacts identified in the assessment. Ensure these metrics are tracked in production dashboards.
Define re-assessment triggers: what events should prompt a fresh EIA or an update to the existing one? Examples include: model retraining on new data, expansion to new user populations or jurisdictions, significant change in error rates or fairness metrics, regulatory change, or stakeholder complaint. A sketch of an automated trigger check follows this list.
Schedule periodic reviews — quarterly for high-risk systems, annually for standard-risk systems.
Maintain feedback channels so that affected communities can report concerns post-deployment. Ensure these channels are accessible and that reported concerns are triaged and acted upon.
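Re-assessment triggers are most reliable when they are checked automatically rather than remembered. A minimal sketch of such a check, with illustrative event names and a threshold the team would calibrate:

```python
from dataclasses import dataclass

@dataclass
class SystemState:
    retrained_since_last_eia: bool
    new_jurisdictions: list[str]
    fairness_metric_drift: float    # absolute change since the assessment
    open_stakeholder_complaints: int

def reassessment_triggers(state: SystemState,
                          drift_threshold: float = 0.05) -> list[str]:
    """Return the triggers that currently warrant a fresh or updated EIA."""
    fired = []
    if state.retrained_since_last_eia:
        fired.append("model retrained on new data")
    if state.new_jurisdictions:
        fired.append(f"expansion to: {', '.join(state.new_jurisdictions)}")
    if state.fairness_metric_drift > drift_threshold:
        fired.append("fairness metric drift beyond threshold")
    if state.open_stakeholder_complaints > 0:
        fired.append("unresolved stakeholder complaints")
    return fired

print(reassessment_triggers(SystemState(False, ["EU"], 0.02, 1)))
# ['expansion to: EU', 'unresolved stakeholder complaints']
```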
Output: Monitoring dashboard, re-assessment trigger log, periodic review reports, feedback channel records.
Integrating EIA with the AI Development Lifecycle
The EIA should not be a standalone process that runs in parallel to development. Integration points include:
- Requirements phase: Scoping (Step 1) and community identification (Step 2) should inform requirements, including non-functional requirements for fairness, transparency, and safety.
- Design phase: Proportionality (Step 4) and alternatives analysis should influence architecture decisions.
- Development phase: Mitigation implementation (Step 6) should be tracked alongside technical implementation.
- Testing phase: Impact heat map priorities should inform test case design, including adversarial testing and subgroup performance evaluation.
- Deployment phase: Go/no-go recommendation (Step 6) should be a formal gate in the deployment pipeline (see the gate sketch after this list).
- Operations phase: Monitoring (Step 8) should integrate with existing operational monitoring and incident management.
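One way to make the go/no-go recommendation a hard gate rather than a convention is to have the deployment pipeline refuse to proceed unless an approved decision record exists. The sketch below assumes a hypothetical JSON decision-record format; the field names are illustrative:

```python
import json
import sys

ALLOWED = {"proceed", "proceed_with_conditions"}

def deployment_gate(decision_record_path: str) -> None:
    """Abort deployment unless the linked EIA decision permits it."""
    with open(decision_record_path) as f:
        record = json.load(f)
    decision = record.get("decision")
    if decision not in ALLOWED:
        sys.exit(f"Deployment blocked: EIA decision is {decision!r}")
    if decision == "proceed_with_conditions" and not record.get("conditions_met"):
        sys.exit("Deployment blocked: EIA conditions not yet satisfied")
    print("EIA gate passed")

# Example record (hypothetical format):
# {"decision": "proceed_with_conditions", "conditions_met": true}
```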
This article is part of the COMPEL Body of Knowledge v2.5 and supports the AI Transformation Practitioner (AITP) certification.