AITP M2.3-Art12 · v1.0 · Reviewed 2026-04-06 · Open Access

Affected Community Engagement

Transformation Design & Program Architecture (M2.3) · Applied depth · COMPEL Body of Knowledge · AITP · Practitioner

9 min read · Article 12 of 13

This article provides practitioners with actionable guidance on who to engage, how to engage them, and how to ensure that engagement translates into genuine influence over AI governance decisions.

The Engagement Imperative

The case for community engagement is both ethical and practical. Ethically, deploying a system that affects people’s lives without consulting them is a form of governance failure regardless of how technically sound the system may be. Practically, affected communities possess knowledge that the deploying organisation lacks: they understand how decisions play out in their lives, what secondary effects matter, and what trust deficits exist from prior experiences with similar systems.

Consider a welfare benefits automation system. The technical team understands the algorithm. The policy team understands the eligibility rules. But only benefits recipients understand: the terror of receiving an automated denial letter with no explanation, the impossibility of navigating an online appeal process without digital literacy, the cascading impact of a benefits delay on rent, food, and childcare. Without these perspectives, the governance assessment is incomplete — and the system will cause harms that its designers never anticipated.

Who to Engage: The Seven Community Categories

Affected communities extend far beyond the system’s direct users. The COMPEL framework identifies seven categories, summarised in a register sketch at the end of this section:

1. Direct Users

The individuals who interact with the AI system through its primary interface. They are the most visible stakeholders but not necessarily the most affected. A hiring manager who uses an AI screening tool is a direct user; the job applicants screened by the tool bear greater risk.

Identification: User research data, product requirements, customer journey maps.

Engagement priority: High for usability and transparency. Medium for ethical impact — direct users may benefit from the system and underestimate its impact on others.

2. Decision Subjects

The individuals about whom the AI system makes or influences decisions. They may never see the system’s interface. A person denied a loan by an AI credit scoring model is a decision subject. A patient whose treatment is influenced by a clinical decision support system is a decision subject.

Identification: Trace the decision chain from AI output to action. Who is on the receiving end of decisions informed by this system?

Engagement priority: Critical. Decision subjects bear the consequences of the system’s outputs and often have the least power to influence its design.

3. Indirectly Affected Populations

Groups who experience secondary effects. Workers in industries disrupted by AI automation. Communities near data centres. Content moderators exposed to harmful material in AI training pipelines.

Identification: Socio-economic impact modelling, supply chain analysis, analogous technology research.

Engagement priority: Medium to high, depending on the severity and scale of indirect effects.

4. Vulnerable and Marginalised Groups

Populations at elevated risk: people with disabilities, elderly individuals, children, refugees, indigenous communities, people experiencing homelessness, people with low digital literacy, ethnic and racial minorities, LGBTQ+ individuals. These groups face compounding disadvantage when AI systems are designed without their input.

Identification: Protected characteristic analysis, intersectional analysis, engagement with advocacy organisations. The key question is: Who is most likely to be harmed and least likely to be heard?

Engagement priority: Critical. The “Nothing About Us Without Us” principle applies: representatives must be members of the affected groups, not proxies.

5. Domain Experts

Professionals with deep expertise in the domain where the AI operates: clinicians, teachers, social workers, judges, financial advisors. Their domain knowledge is essential for evaluating whether the AI system’s logic is sound and its outputs are appropriate.

Identification: Professional associations, academic institutions, frontline practitioners.

Engagement priority: High for system design and evaluation. Their technical credibility complements community experiential knowledge.

6. Civil Society and Advocacy Organisations

Non-governmental organisations, community groups, trade unions, and advocacy bodies that represent affected populations. They serve as amplifiers for voices that might be excluded and as accountability mechanisms.

Identification: AI ethics organisations, digital rights groups, community-based organisations, consumer protection bodies.

Engagement priority: High for systemic analysis and sustained accountability.

7. Regulatory and Oversight Bodies

Government agencies and standards bodies with jurisdiction over the system. Early engagement enables regulatory alignment.

Identification: Regulatory mapping by jurisdiction and sector.

Engagement priority: Medium to high, depending on the regulatory environment.
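
For practitioners who track this mapping in tooling, the seven categories can be captured in a lightweight stakeholder register. The sketch below is illustrative only: the category names and priority labels come from this article, while the field names, the priority ordering, and the example entries are assumptions rather than part of the COMPEL specification.

```python
from dataclasses import dataclass
from enum import IntEnum

class Priority(IntEnum):
    # Ordering is an assumption: higher value = engage earlier.
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

@dataclass
class StakeholderGroup:
    category: str               # one of the seven COMPEL categories
    identification_method: str  # how the group was identified
    engagement_priority: Priority
    notes: str = ""

# Illustrative entries for the welfare benefits example above.
register = [
    StakeholderGroup("Decision Subjects", "Decision-chain tracing",
                     Priority.CRITICAL, "Recipients of automated determinations"),
    StakeholderGroup("Vulnerable and Marginalised Groups",
                     "Advocacy organisation outreach", Priority.CRITICAL,
                     "Recipients with low digital literacy"),
    StakeholderGroup("Direct Users", "User research data",
                     Priority.HIGH, "Caseworkers operating the system"),
]

# Plan engagement starting with the highest-priority groups.
for group in sorted(register, key=lambda g: g.engagement_priority, reverse=True):
    print(f"{group.engagement_priority.name:>8}  {group.category}: {group.notes}")
```

A real register would also record contact routes and consent status, but the four fields above are enough to drive prioritisation.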

How to Engage: Choosing the Right Methods

Different stakeholder groups require different engagement methods. The choice depends on the complexity of the issues, the accessibility needs of participants, the stage of system development, and the resources available; a selection sketch follows the five methods below.

For Complex Ethical Trade-offs: Deliberative Dialogue

When multiple reasonable positions exist and the ethical implications are genuinely uncertain, deliberative dialogue brings diverse stakeholders together for structured discussion. Participants receive balanced information, engage in moderated dialogue, and produce collective recommendations.

Practical guidance: Recruit 15–30 participants representing diverse perspectives. Provide pre-reading materials in accessible language. Use professional facilitators independent of the project team. Allocate 1–3 days. Compensate all participants.

For System Design Influence: Participatory Design Workshops

Use participatory design workshops when you want affected communities to actively shape the system’s features, safeguards, or interaction patterns. Participants work alongside designers as co-creators.

Practical guidance: Use low-fidelity prototypes (paper sketches, click-through mockups) to make the system tangible without requiring technical literacy. Run 2–4 half-day sessions. Follow up to show participants how their input was incorporated.

For Sensitive Contexts: Structured Interviews

When participants may be reluctant to share in group settings — due to stigma, trauma, power imbalance, or privacy concerns — individual interviews provide a safe space for deeper exploration.

Practical guidance: Develop a semi-structured protocol reviewed by ethics oversight. Use culturally competent interviewers. Ensure confidentiality. 15–30 interviews are typically enough to reach thematic saturation. Compensate participants.

For Broad Reach: Public Comment Periods

When the affected population is large and dispersed, an open comment period captures views from stakeholders not included in targeted engagements.

Practical guidance: Publish a plain-language summary of the AI system and its impacts. Provide an accessible online submission platform. Allow 30–90 days. Actively promote the comment period through channels that reach affected communities — not just the organisation’s website.

For Ongoing Accountability: Community Advisory Panels

When the system will be deployed for an extended period, a standing panel provides ongoing input rather than one-off pre-deployment sign-off.

Practical guidance: Recruit 10–15 panel members through an open application process with attention to representation. Meet quarterly. Provide compensation. Define the panel’s authority in a governance charter. Refresh membership annually to prevent capture.
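
Taken together, the five methods map onto the selection criteria listed at the start of this section. The helper below is a rough decision aid under that mapping; the flag names and the pairing of criteria to methods are one possible reading of the guidance above, not a COMPEL-defined algorithm.

```python
def suggest_engagement_methods(
    *,
    complex_tradeoffs: bool,           # genuinely contested ethical questions
    design_influence_sought: bool,     # communities should shape features and safeguards
    sensitive_context: bool,           # stigma, trauma, power imbalance, privacy
    large_dispersed_population: bool,  # broad reach needed
    extended_deployment: bool,         # ongoing input needed after launch
) -> list[str]:
    """Map engagement criteria to candidate methods (illustrative sketch)."""
    methods = []
    if complex_tradeoffs:
        methods.append("Deliberative dialogue: 15-30 participants, 1-3 days")
    if design_influence_sought:
        methods.append("Participatory design workshops: 2-4 half-day sessions")
    if sensitive_context:
        methods.append("Structured interviews: 15-30, culturally competent interviewers")
    if large_dispersed_population:
        methods.append("Public comment period: 30-90 days, plain-language summary")
    if extended_deployment:
        methods.append("Community advisory panel: 10-15 members, quarterly meetings")
    return methods

# Example: a welfare benefits automation system ticks every box.
for method in suggest_engagement_methods(
    complex_tradeoffs=True,
    design_influence_sought=True,
    sensitive_context=True,
    large_dispersed_population=True,
    extended_deployment=True,
):
    print("-", method)
```

Most deployments will trigger more than one method; the methods complement rather than substitute for each other.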

Overcoming Common Engagement Challenges

“We cannot find affected communities”

This usually means the search has been too narrow. Partner with community organisations that have existing trust relationships. Engage advocacy groups as intermediaries. Look beyond digital channels — some affected populations are not online.

“Community members lack technical understanding to provide useful input”

The responsibility to make the system understandable falls on the deploying organisation, not on the community. Invest in accessible communication. Use analogies, visual aids, and real-world scenarios. The most valuable community input is about values, experiences, and consequences — not technical architecture.

“We do not have the budget for engagement”

If the organisation has the budget to develop and deploy an AI system but not to consult the people it affects, that is a governance failure, not a resource constraint. At minimum, partner with existing community organisations rather than building engagement infrastructure from scratch. Use digital tools to reduce logistical costs while maintaining engagement quality.

“Stakeholder feedback conflicts with our technical approach”

This is not a bug — it is the point of engagement. When affected communities raise concerns that conflict with the current design, the organisation must evaluate whether the technical approach should change. Documenting and dismissing stakeholder concerns without genuine consideration undermines the entire process.

“We engaged stakeholders but nothing changed”

Track every piece of stakeholder input through to its resolution. Maintain a design change log showing: what was raised, by whom, what action was taken, and the rationale for that action. Share this log with stakeholders. If nothing changed, explain why — honestly and specifically.
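
One lightweight way to keep such a log is a structured record per item of input. The four fields below come directly from the paragraph above; everything else (the field names, the example entries, the date handling) is an assumption for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeLogEntry:
    raised_on: date
    what_was_raised: str  # the concern, in the stakeholder's own terms
    raised_by: str        # group or individual, anonymised where required
    action_taken: str     # design change, mitigation, or "no change"
    rationale: str        # honest, specific reasoning for the action

log = [
    ChangeLogEntry(
        raised_on=date(2026, 2, 14),
        what_was_raised="Automated denial letters give no explanation",
        raised_by="Benefits recipients (structured interviews)",
        action_taken="Added plain-language reason codes to denial letters",
        rationale="Directly addresses a critical trust deficit at low cost",
    ),
    ChangeLogEntry(
        raised_on=date(2026, 3, 2),
        what_was_raised="Online-only appeals exclude people without internet access",
        raised_by="Digital rights advocacy group",
        action_taken="No change yet; phone and paper channels being costed",
        rationale="Accepted in principle; decision deferred to budget review",
    ),
]

# Publishing the log lets stakeholders verify what happened and why.
for entry in log:
    print(f"[{entry.raised_on}] {entry.what_was_raised} -> {entry.action_taken}")
```

Note that "no change" is a legitimate entry, provided the rationale is honest and specific.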

Measuring Engagement Quality

Effective engagement is measured not by the number of participants but by the quality of the process and its influence on outcomes; a scorecard sketch follows this list:

  • Representativeness: Did participants reflect the diversity of affected communities, including hard-to-reach and marginalised groups?
  • Accessibility: Were materials, venues, and processes accessible to all participants, including those with disabilities, language barriers, and low digital literacy?
  • Influence: Did stakeholder input demonstrably change system design, deployment conditions, or governance arrangements? Can the change log prove it?
  • Satisfaction: Did participants feel the process was fair, that their input was valued, and that they understood how their contributions would be used?
  • Sustained access: Do ongoing feedback channels exist for affected individuals to raise concerns after deployment?
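
These five dimensions lend themselves to a simple post-engagement scorecard. The sketch below assumes a three-point scale per dimension; neither the scale nor the field names are defined by COMPEL, and a real programme would want documented evidence behind each score rather than a bare number.

```python
from dataclasses import dataclass, asdict

@dataclass
class EngagementScorecard:
    # Scale is an assumption: 0 = not met, 1 = partially met, 2 = fully met.
    representativeness: int  # diversity, incl. hard-to-reach groups
    accessibility: int       # materials, venues, processes usable by all
    influence: int           # demonstrable change, backed by the change log
    satisfaction: int        # participants felt heard and fairly treated
    sustained_access: int    # post-deployment feedback channels exist

    def weakest_dimensions(self) -> list[str]:
        scores = asdict(self)
        lowest = min(scores.values())
        return [name for name, value in scores.items() if value == lowest]

card = EngagementScorecard(
    representativeness=2, accessibility=1, influence=2,
    satisfaction=1, sustained_access=0,
)
print("Needs attention:", ", ".join(card.weakest_dimensions()))
# -> Needs attention: sustained_access
```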

The Ethical Obligation

Community engagement is not a box to check. It is the mechanism through which organisations demonstrate respect for the agency and dignity of the people their AI systems affect. An AI system deployed without meaningful community engagement — regardless of its technical sophistication — carries an ethical deficit that no amount of post-hoc monitoring can remedy.


This article is part of the COMPEL Body of Knowledge v2.5 and supports the AI Transformation Practitioner (AITP) certification.