AITGP M9.1-Art02 v1.0 Reviewed 2026-04-06 Open Access

IEEE 7000 Ethical Design Implementation: A 10-Step Value-Based System Design Process

Transformation Design & Program Architecture — Advanced depth — COMPEL Body of Knowledge.

Article 2 of 4

COMPEL Body of Knowledge — Regulatory Bridge Series Cluster A Companion Article — Ethical Design Implementation


Why IEEE 7000 matters {#why}

Most AI governance programs succeed at documenting what a system does and how well it performs. Far fewer can produce auditable evidence of why the system was built the way it was — what values shaped its scope, which trade-offs were weighed against whose interests, and how those decisions were translated into concrete engineering requirements. That gap is the space IEEE 7000-2021 — Standard Model Process for Addressing Ethical Concerns During System Design is designed to close.

IEEE 7000 was ratified in September 2021 as the first standard to define a repeatable process for integrating ethical considerations into the systems engineering lifecycle. It does not dictate which values an organization must prioritize; instead it prescribes a process for surfacing the values that matter to stakeholders, translating them into engineering artifacts, and retaining the evidence trail so that later reviewers — auditors, regulators, affected communities — can reconstruct the reasoning.

Running IEEE 7000 solves three problems that purely outcome-based frameworks leave open:

  1. The values-to-requirements gap. Frameworks like NIST AI RMF and ISO 42001 name trustworthy characteristics (fairness, transparency, accountability) but stop short of specifying how those abstractions become testable requirements. IEEE 7000 closes the gap through its Ethical Value Requirements (EVRs) construct.
  2. The affected-community blind spot. NIST AI RMF GOVERN 5 and ISO 42001 clause 4.2 both require interested-party engagement but leave the how to the implementer. IEEE 7000 provides elicitation techniques — workshops, surveys, ethnography, adversarial analysis — that generate defensible stakeholder coverage.
  3. The design-rationale audit trail. Regulators increasingly ask “show me the record of the decision” rather than “show me the outcome.” IEEE 7000’s transparency and accountability management steps generate that record by design.

Read the standard alongside the NIST AI RMF to ISO 42001 crosswalk: the crosswalk tells you what evidence to collect, IEEE 7000 tells you how to generate that evidence from stakeholder values rather than from abstract principles.

The 10-step process {#process}

IEEE 7000 organizes ethical design into ten interacting steps. In practice they overlap rather than run strictly sequentially — early steps iterate as later steps surface new information.

| # | Step | Primary output |
|---|------|----------------|
| 1 | Concept Exploration | Problem statement with ethical framing |
| 2 | System-of-Interest Analysis | Stakeholder register and system boundary |
| 3 | Ethical Values Elicitation | Prioritized value list per stakeholder group |
| 4 | Ethical Requirements Definition | Ethical Value Requirements (EVRs) |
| 5 | Risk-Based Design | EVR-driven design decisions and trade-off log |
| 6 | Transparency Management | Disclosure plan and rationale record |
| 7 | Accountability Management | Role assignment and escalation protocol |
| 8 | Ethical Operational Integration | Runtime monitors and control hooks |
| 9 | Risk Review | Periodic reassessment of EVRs against reality |
| 10 | Continuous Improvement | Lessons learned and standard update proposals |

1. Concept Exploration

The system is described in ethical terms before it is described in technical terms. Teams state the problem the system intends to solve, the population it will affect, the ethical tensions the problem inherently carries (for example, accuracy versus privacy, accessibility versus security), and any red-line exclusions. A two-page ethical concept brief is the typical artifact. Good briefs answer: who benefits, who is exposed to harm, who decides, and who is silent?

2. System-of-Interest Analysis

The team maps the system boundary, adjacent systems, data flows, and — crucially — all stakeholders, not only the buyers. IEEE 7000 distinguishes direct users, indirect users, affected non-users, decision-makers, and operators. Each category matters because values differ sharply across them. A fraud-detection system’s direct users (bank analysts) value precision; its affected non-users (customers denied credit) value recourse and explainability. The output is a stakeholder register with demographic, power, and exposure attributes.

3. Ethical Values Elicitation

For each stakeholder group the team elicits the values relevant to this system. Values are not invented — they are surfaced through structured techniques described in the value elicitation techniques section. The output is a prioritized value list per group, with disagreement explicitly preserved rather than averaged away. For a loan-decisioning system, customers typically prioritize fairness and recourse; regulators prioritize non-discrimination; analysts prioritize explainability; shareholders prioritize loss ratios. All four lists are kept distinct.

4. Ethical Requirements Definition

Values are translated into Ethical Value Requirements (EVRs) — testable, assignable statements of what the system must do (or must not do) to honor each value. Each EVR records its source value, source stakeholder group, acceptance criteria, and verification method. This step is where IEEE 7000 becomes concrete; it is also where abstract ethical debate ends and engineering design begins. Examples appear in the EVRs section.
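A minimal sketch of how a step-4 EVR register entry could be represented in code. The field names mirror the attributes listed above (source value, stakeholder group, acceptance criteria, verification method); the `EVR` class, its example values, and the `is_well_formed` check are illustrative conveniences, not part of the standard:

```python
from dataclasses import dataclass

@dataclass
class EVR:
    """One Ethical Value Requirement register entry (step 4 output)."""
    evr_id: str
    source_value: str         # the stakeholder value this EVR honors, e.g. "privacy"
    stakeholder_group: str    # whose value it is, e.g. "customers"
    statement: str            # testable "shall" statement
    acceptance_criteria: str  # numeric or boolean threshold
    verification_method: str  # how the criterion is checked
    owner: str = ""           # responsible role, assigned later in step 7

    def is_well_formed(self) -> bool:
        # Well-formed means: verifiable, owned, thresholded, and traceable
        # to a value and a stakeholder group.
        return all([self.source_value, self.stakeholder_group,
                    self.acceptance_criteria, self.verification_method,
                    self.owner])

# Hypothetical entry based on the 7-day retention example used in this article
evr = EVR("EVR-P-01", "privacy", "customers",
          "The system shall not retain input payloads beyond 7 days without consent",
          "retention <= 7 days", "automated retention audit",
          owner="data-protection lead")
```

An EVR with no assigned owner fails `is_well_formed()`, which is exactly the kind of gap a register review should surface before step 5 begins.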

5. Risk-Based Design

EVRs are weighed against each other and against technical, financial, and schedule constraints. Trade-offs are explicitly recorded rather than silently resolved. If a privacy EVR and an accuracy EVR conflict, the team documents the options considered, the reasoning, and the stakeholder voices consulted. The output is a design decision log — the single most valuable artifact for later audits under the EU AI Act and ISO 42001 clause 8.2.

6. Transparency Management

The team defines what will be disclosed, to whom, in what form, and when. Transparency is decomposed by audience: end users get plain-language explanations; operators get operational documentation; auditors get the full design decision log; affected non-users get channels to request information. The artifact is a transparency plan that maps each disclosure obligation (regulatory, contractual, ethical) to a specific document, channel, and owner.

7. Accountability Management

For each EVR, a responsible role is named. For each foreseeable failure mode, an escalation path is defined. For each stakeholder grievance channel, a response protocol is established. This step answers the auditor’s perennial question: “when this fails, who owns the fix, and how long do they have?” The output combines a RACI matrix with an incident-response playbook keyed to EVRs.
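The step-7 output can be sketched as a small register keyed by EVR identifier, combining the RACI "responsible" role with the escalation path and fix SLA. All roles, paths, and SLA values below are hypothetical:

```python
# Hypothetical accountability register: each EVR gets an owner,
# an escalation path, and a fix SLA (all entries illustrative)
accountability = {
    "EVR-P-01": {"responsible": "data-protection lead",
                 "escalation": ["privacy officer", "CISO"],
                 "fix_sla_days": 5},
    "EVR-S-01": {"responsible": "ML safety engineer",
                 "escalation": ["head of engineering"],
                 "fix_sla_days": 1},
}

def who_owns_the_fix(evr_id: str) -> tuple:
    """Answer the auditor's question: who owns the fix, and how long do they have?"""
    entry = accountability[evr_id]
    return entry["responsible"], entry["fix_sla_days"]
```

Keyed this way, the incident-response playbook and the EVR register share one index, so a breached EVR resolves to an owner in a single lookup.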

8. Ethical Operational Integration

EVRs are turned into runtime hooks: monitors, alerts, logs, and controls that enforce the EVRs during operation. A “no inference on users under 13” EVR becomes an input validator and an audit-log rule; a “fair error-rate parity” EVR becomes a scheduled fairness test with threshold alerting. The output is a control specification linking each EVR to its runtime enforcement mechanism.
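The two example EVRs above might compile into runtime guards roughly like this sketch. The thresholds come from the text; the function, constants, and log structure are invented for illustration:

```python
MIN_AGE = 13           # from the "no inference on users under 13" EVR
MIN_CONFIDENCE = 0.85  # from the safety confidence-gate EVR

audit_log = []  # stand-in for the append-only inference log

def guarded_inference(user_age: int, model_output: str, confidence: float):
    """Enforce two EVRs at inference time: an input validator and a confidence gate."""
    if user_age < MIN_AGE:
        audit_log.append(("refused", "input validator: user under 13"))
        return None
    if confidence < MIN_CONFIDENCE:
        audit_log.append(("escalated", "confidence gate: routed to human reviewer"))
        return "ROUTED_TO_HUMAN_REVIEW"
    audit_log.append(("served", round(confidence, 2)))
    return model_output
```

Note that every branch writes to the audit log, including the happy path: the control specification ties each EVR to its enforcement point and to the evidence that enforcement actually ran.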

9. Risk Review

Periodically (typically quarterly, and after any material system change) the team reassesses whether the EVRs still reflect stakeholder values and whether the system still honors the EVRs. New stakeholder groups may have emerged; new risks may have materialized; earlier trade-offs may have become obsolete. The output is a risk-review memo that either confirms the current design or triggers targeted re-work.

10. Continuous Improvement

Lessons from risk reviews, incidents, and stakeholder feedback feed into two loops: improvements to this specific system, and improvements to the organization’s IEEE 7000 playbook. Common improvements include expanding the value library, tightening elicitation techniques, and adding new EVR templates. Over time, the organization’s IEEE 7000 capability becomes a reusable asset rather than a per-project cost.

Value elicitation techniques {#elicitation}

IEEE 7000 does not prescribe a single elicitation technique. It expects the team to select techniques based on stakeholder accessibility, time budget, and the sensitivity of the values being surfaced. The five techniques below cover the majority of practical situations.

| Technique | When to use | Strengths | Limitations |
|---|---|---|---|
| Structured workshops | Stakeholders are reachable, literate in the domain, and willing to participate openly | Rapid convergence, direct dialogue, visible disagreement | Selection bias toward willing participants; strong voices can dominate |
| Surveys | Large stakeholder populations, need for quantitative priority ranking | Scale, anonymity, statistical defensibility | Shallow; loses nuance; framing effects |
| Ethnography and contextual inquiry | System will affect daily work or lived experience; values are tacit rather than articulated | Surfaces values that stakeholders cannot self-report | Time-intensive; limited sample size |
| Document review | Regulated domains (healthcare, finance, education) with rich prior art | Leverages codified values from policy, case law, professional codes | Can calcify outdated assumptions if not balanced with fresh input |
| Adversarial analysis | High-risk systems, affected-community exposure, potential for misuse | Surfaces values of stakeholders who will not participate (bad actors, silent harmed parties) | Speculative; requires domain expertise |

In practice a single project runs three to five of these techniques in parallel and triangulates the results. The triangulation itself is evidence: showing that the same value appeared across workshops, surveys, and adversarial review defends the EVR against later “we weren’t consulted” claims.

A useful sequencing heuristic: begin with document review (cheap, establishes baseline), follow with workshops (depth with accessible stakeholders), run surveys for scale, commission ethnography where values are tacit, and close with adversarial analysis to stress-test the picture.
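Triangulation itself can be mechanized: collect which techniques surfaced each value and keep only the values confirmed by multiple independent sources. The elicitation results below are hypothetical:

```python
from collections import defaultdict

# Hypothetical elicitation results: technique -> set of values it surfaced
elicitation = {
    "document_review":      {"privacy", "non-discrimination"},
    "workshops":            {"privacy", "explainability", "recourse"},
    "surveys":              {"privacy", "recourse"},
    "adversarial_analysis": {"recourse", "safety"},
}

def triangulate(results: dict, min_sources: int = 2) -> dict:
    """Map each value to the techniques that surfaced it, keeping only
    values confirmed by at least min_sources independent techniques."""
    sources = defaultdict(set)
    for technique, values in results.items():
        for value in values:
            sources[value].add(technique)
    return {v: sorted(t) for v, t in sources.items() if len(t) >= min_sources}
```

Values surfaced by only one technique (here, "safety" from adversarial analysis alone) are not discarded; they are flagged for targeted follow-up elicitation before an EVR is drafted from them.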

Ethical Value Requirements examples {#evrs}

Each EVR must be testable. Vague statements like “the system will be fair” are not EVRs; they are values. An EVR translates a value into a verifiable engineering requirement. The table below shows representative EVRs across the five most commonly surfaced value categories.

| Value | Stakeholder source | Ethical Value Requirement (EVR) | Verification method |
|---|---|---|---|
| Privacy | Customers, data-protection authority | The system shall not retain input payloads beyond 7 days unless the user consents to extended retention in the data lifecycle policy (DLP-03) | Automated retention audit; quarterly DLP compliance scan |
| Privacy | Customers | The system shall offer a one-click data deletion request that completes within 30 days across all derived datasets, model caches, and backups | End-to-end deletion test with traced record; ISO 27001 evidence |
| Fairness | Affected loan applicants, regulator | The system shall maintain false-negative rate parity within 2 percentage points across protected demographic groups, measured monthly on holdout data | Monthly fairness dashboard; external audit sample |
| Fairness | Affected applicants | When the system denies a request, the user shall receive a human-readable explanation citing the top three contributing features within 48 hours | Explanation-coverage metric; user-complaint rate |
| Transparency | End users, regulator | The system shall disclose its AI nature and decision role at every user interaction in language at or below a US grade-8 reading level | Readability test (Flesch-Kincaid ≤ 8); UX audit |
| Transparency | Operators, auditors | The system shall log the model version, input hash, output, and decision rationale for every production inference for 7 years | Log-integrity check; WORM storage verification |
| Autonomy | End users | The system shall provide an always-available human override channel and shall not retaliate (via scoring, rate limiting, or deprioritization) against users who invoke it | Override-availability SLO; retaliation-detection monitor |
| Autonomy | Operators | Any automated decision affecting employment, credit, or benefits shall be reversible by a named human within 5 business days | Reversal SLO; governance audit |
| Safety | End users, regulator | The system shall refuse outputs where confidence falls below 0.85 and shall route such cases to a human reviewer | Confidence-gate test; reviewer queue audit |
| Safety | Operators | The system shall auto-disable inference if the monitored data-drift score exceeds 3 standard deviations from baseline, pending human re-validation | Drift-monitor test; disable-log audit |

Well-formed EVRs share four properties: (1) they are verifiable with a specific method; (2) they are owned by a named role; (3) they carry an acceptance threshold that is either numeric or boolean; and (4) they are traceable to a value and a stakeholder group.
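As an illustration of property (1), the false-negative-rate parity EVR from the table reduces to a short, runnable check. The holdout labels and predictions below are invented:

```python
def false_negative_rate(y_true, y_pred):
    """FNR = false negatives / actual positives (positive class = 1)."""
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    positives = sum(y_true)
    return fn / positives if positives else 0.0

def fnr_parity_ok(groups, tolerance=0.02):
    """True if the largest FNR gap across demographic groups is within
    the tolerance (2 percentage points, per the EVR's acceptance criterion)."""
    rates = [false_negative_rate(y, p) for y, p in groups.values()]
    return max(rates) - min(rates) <= tolerance

# Hypothetical monthly holdout data: group -> (labels, predictions)
groups = {
    "group_a": ([1, 1, 1, 1, 0], [1, 1, 1, 0, 0]),  # FNR = 0.25
    "group_b": ([1, 1, 1, 1, 0], [1, 1, 1, 1, 0]),  # FNR = 0.00
}
```

Here `fnr_parity_ok(groups)` returns `False` (a 25-point gap), which in the step-8 control specification would fire the threshold-breach alert named in the traceability matrix.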

Traceability: value → requirement → control → evidence {#traceability}

IEEE 7000 is ultimately an evidence discipline. The entire process collapses into a traceability matrix that an auditor can walk end-to-end.

| Value | Stakeholder | EVR | Design decision (step 5) | Runtime control (step 8) | Evidence artifact |
|---|---|---|---|---|---|
| Privacy | Customers | EVR-P-01 (7-day retention) | Switched from batch warehouse to TTL-backed cache for raw inputs | Scheduled retention-sweep job; automated deletion logs | Retention audit report; deletion-job logs |
| Fairness | Applicants | EVR-F-01 (FNR parity ±2pp) | Added group-aware threshold tuning; rejected single-threshold approach | Monthly fairness test in CI; alert on threshold breach | Fairness test results; incident register |
| Transparency | Users | EVR-T-01 (grade-8 disclosures) | Centralized disclosure copy library; removed inline legal language | Pre-deployment readability check gate | Readability report; UX audit log |
| Autonomy | Users | EVR-A-01 (human override) | Front-end “talk to a person” always present; SLA contract with support | Override-invocation metric; retaliation monitor | Override logs; SLA reports |
| Safety | Users | EVR-S-01 (confidence ≥ 0.85) | Hard gate in inference pipeline; fallback to human review | Confidence histogram monitor; reviewer SLA dashboard | Gate-triggered logs; reviewer audits |

This matrix is the single artifact that satisfies the largest number of overlapping obligations: NIST AI RMF MAP 1.6, MAP 2.3, and MANAGE 2; ISO 42001 clauses 6.1.4 and 8.2; EU AI Act Article 9 risk-management documentation; and internal audit trails. Maintaining the matrix as living documentation — not a one-time deliverable — is what separates organizations that merely ran IEEE 7000 from those that operate it.
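Because the matrix is tabular, its completeness can be checked mechanically. The sketch below flags any EVR row with a missing column; the row contents are condensed, hypothetical versions of the table above, with one evidence artifact deliberately absent:

```python
REQUIRED = ("value", "stakeholder", "design_decision", "runtime_control", "evidence")

def audit_gaps(matrix: dict) -> dict:
    """Per EVR, list the traceability columns that are missing or empty."""
    return {evr_id: [col for col in REQUIRED if not row.get(col)]
            for evr_id, row in matrix.items()
            if any(not row.get(col) for col in REQUIRED)}

# Hypothetical traceability rows; EVR-F-01 is missing its evidence artifact
matrix = {
    "EVR-P-01": {"value": "privacy", "stakeholder": "customers",
                 "design_decision": "TTL-backed cache for raw inputs",
                 "runtime_control": "scheduled retention-sweep job",
                 "evidence": "retention audit report"},
    "EVR-F-01": {"value": "fairness", "stakeholder": "applicants",
                 "design_decision": "group-aware threshold tuning",
                 "runtime_control": "monthly fairness test in CI",
                 "evidence": None},
}
```

Running a check like this on every review cycle is one inexpensive way to keep the matrix living documentation rather than a one-time deliverable.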

Mapping to COMPEL stages {#compel-mapping}

COMPEL’s six stages provide a natural scaffold for running IEEE 7000. Each IEEE 7000 step slots into a COMPEL stage without restructuring either framework.

| COMPEL stage | IEEE 7000 steps | Shared output |
|---|---|---|
| Calibrate | 1 Concept Exploration · 2 System-of-Interest Analysis | Ethical concept brief · Stakeholder register |
| Organize | 3 Ethical Values Elicitation | Prioritized value list per stakeholder group |
| Model | 4 Ethical Requirements Definition · 5 Risk-Based Design | EVR register · Design decision log |
| Produce | 6 Transparency Management · 7 Accountability Management · 8 Ethical Operational Integration | Transparency plan · RACI · Control specification |
| Evaluate | 9 Risk Review | Risk-review memo |
| Learn | 10 Continuous Improvement | Lessons-learned log · Playbook updates |

The mapping makes IEEE 7000 operable inside the COMPEL rhythm teams already run. Calibrate and Organize-stage gate reviews validate that elicitation is complete before any EVR is drafted; Model-stage reviews validate EVRs before design freeze; Produce-stage reviews validate runtime controls before launch; Evaluate and Learn close the loop with lived operational data.

Evidence artifacts {#evidence}

A complete IEEE 7000 implementation produces the following artifacts. Each maps directly to one or more steps and is retained for the life of the system plus the longer of three years or the applicable regulatory retention period (EU AI Act Article 18 requires ten years for high-risk systems).

  • Ethical concept brief (step 1)
  • Stakeholder register with power, exposure, and demographic attributes (step 2)
  • System-of-interest diagram with boundaries, data flows, and actor map (step 2)
  • Elicitation plan and records — workshop minutes, survey results, ethnography notes, adversarial analysis memos (step 3)
  • Prioritized value list per stakeholder group, with preserved disagreement (step 3)
  • EVR register with source, acceptance criteria, owner, and verification method (step 4)
  • Design decision log with trade-off rationale and stakeholder voices consulted (step 5)
  • Transparency plan with disclosure-to-audience mapping (step 6)
  • Accountability RACI and escalation playbook (step 7)
  • Control specification linking EVRs to runtime monitors, alerts, and logs (step 8)
  • Risk-review memos per review cycle (step 9)
  • Continuous-improvement log and playbook updates (step 10)
  • Traceability matrix (spans all steps)

When IEEE 7000 runs inside an ISO 42001 management system, these artifacts satisfy clauses 6.1.4 (AI system impact assessment) and 8.2 (system design) without duplication. When it runs alongside NIST AI RMF, the same artifacts satisfy MAP 1.6, MAP 2.3, MAP 3.1, and MAP 5.1.

Metrics {#metrics}

Teams that operate IEEE 7000 — as distinct from teams that merely documented it once — report on the following metrics:

  • Stakeholder coverage ratio — percentage of identified stakeholder groups with recorded elicitation artifacts (target: 100%; weighted toward affected non-users)
  • EVR verification rate — percentage of EVRs with a completed verification test in the last review cycle (target: 95%+)
  • EVR breach count — number of times an EVR acceptance threshold was missed, by severity (trending down)
  • Design-decision traceability — percentage of material design decisions with a logged trade-off rationale (target: 100%)
  • Disclosure freshness — percentage of user-facing disclosures last validated in the current quarter (target: 100%)
  • Override utilization — rate of human-override invocation per 1,000 decisions, and resolution time (baseline per system)
  • Risk-review cadence compliance — percentage of systems reviewed on the defined cadence (target: 100%)
  • Mean time from stakeholder feedback to EVR update — calendar days from feedback intake to accepted EVR change (trending down)

These metrics are produced by the same controls, logs, and registers created during the process itself. No parallel measurement program is required.
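Two of these metrics, computed directly from hypothetical register data, illustrate the point that no parallel measurement program is needed:

```python
def stakeholder_coverage(register, elicitation_records):
    """Stakeholder coverage ratio: share of registered groups with at
    least one recorded elicitation artifact."""
    covered = {record["group"] for record in elicitation_records}
    return sum(1 for group in register if group in covered) / len(register)

def evr_verification_rate(evrs):
    """Share of EVRs with a completed verification test in the last review cycle."""
    return sum(1 for e in evrs if e["verified_last_cycle"]) / len(evrs)

# Hypothetical inputs: the stakeholder register (step 2), elicitation
# records (step 3), and the EVR register (step 4)
register = ["customers", "analysts", "regulator", "affected non-users"]
records = [{"group": "customers"}, {"group": "analysts"}, {"group": "regulator"}]
evrs = [{"id": "EVR-P-01", "verified_last_cycle": True},
        {"id": "EVR-F-01", "verified_last_cycle": True},
        {"id": "EVR-S-01", "verified_last_cycle": False}]
```

In this sketch coverage is 75% because "affected non-users" lacks an elicitation artifact, which is precisely the group the target weighting says to close first.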

Risks if skipped {#risks}

Organizations that publish ethical principles but never run a process like IEEE 7000 expose themselves to five predictable failure modes:

  • Principle-to-practice gap. Principles exist in the website’s ethics page but nowhere in the requirements backlog. Engineers cannot implement what is not specified.
  • Affected-community blindside. Non-user stakeholders — those most exposed to AI-induced harm — are never consulted, producing systems that optimize for buyers at the expense of those who live with the outputs.
  • Un-auditable design rationale. When regulators or litigators ask “how did you weigh privacy against accuracy?” there is no record. The burden shifts from “show the process” to “defend the outcome” — a far weaker posture.
  • Reactive ethics. Issues surface through incidents, press, or complaints rather than through design. Remediation is expensive and reputational damage is already done.
  • Regulatory surprise. The EU AI Act Article 9 (risk management system) and Article 27 (fundamental-rights impact assessment) expect systematic stakeholder and value analysis. An organization without it scrambles to reconstruct the record retroactively — a reconstruction regulators treat with appropriate skepticism.

Each of these risks carries measurable cost: one avoided incident under GDPR or the EU AI Act typically exceeds the full lifecycle cost of operating IEEE 7000 on a system.

References

  • IEEE 7000-2021 — IEEE Standard Model Process for Addressing Ethical Concerns During System Design. standards.ieee.org/ieee/7000/6781/. The source standard; defines the 10 steps and the EVR construct.
  • IEEE 7000-series — 7001 (transparency), 7002 (privacy), 7003 (algorithmic bias), 7010 (well-being metrics). Companion standards that plug into IEEE 7000 EVRs.
  • ISO/IEC 42001:2023 — Information technology — Artificial intelligence — Management system. iso.org/standard/81230.html. Clause 8.2 (system design) is the natural host for IEEE 7000 execution.
  • NIST AI Risk Management Framework 1.0. nist.gov/itl/ai-risk-management-framework. MAP 1.6 (impact characterization) and MAP 5.1 (likelihood and magnitude of impact) are directly fed by IEEE 7000 outputs.
  • EU AI Act (Regulation 2024/1689). eur-lex.europa.eu. Article 9 (risk management), Article 27 (fundamental-rights impact assessment).
  • IEEE Ethically Aligned Design, First Edition. standards.ieee.org/industry-connections/ec/ead-v2/. The value-elicitation philosophy that underlies IEEE 7000.
  • Friedman, B., & Hendry, D. G. (2019). Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press. The academic lineage of IEEE 7000’s elicitation techniques.

How to cite

COMPEL FlowRidge Team. (2026). “IEEE 7000 Ethical Design Implementation: A 10-Step Value-Based System Design Process.” COMPEL Framework by FlowRidge. https://www.compelframework.org/articles/seo-a2-ieee-7000-ethical-design-implementation/

Frequently Asked Questions

Does IEEE 7000 replace or complement ISO 42001?
Complement. ISO 42001 is a management-system standard; IEEE 7000 is a product-level design process. Use IEEE 7000 inside ISO 42001 clause 8.2 (system design) to generate ethical value requirements.
What are Ethical Value Requirements (EVRs)?
EVRs are functional and non-functional requirements derived from stakeholder values. Each EVR ties to a value, a stakeholder group, and a verification method.
How does IEEE 7000 relate to NIST AI RMF MAP 1.6?
NIST AI RMF MAP 1.6 requires characterizing the system's potential impacts on individuals, groups, and society. IEEE 7000's Ethical Values Elicitation (step 3) and System-of-Interest Analysis (step 2) produce exactly the stakeholder and impact evidence that MAP 1.6 asks for, making IEEE 7000 a natural implementation path for that category.
Can a small team adopt IEEE 7000 without a dedicated ethicist?
Yes. IEEE 7000 is designed to be run by the product team with access to ethical subject-matter support rather than a permanent in-house ethicist. Teams typically pair a business analyst, a designer, and a governance lead; external ethics advisors are engaged for specific elicitation workshops and risk reviews.
How much does IEEE 7000 add to a project timeline?
In practice, the first run adds two to four weeks of elicitation and requirements work before design freeze, spread across sprints. On subsequent systems the process compresses to days because value libraries, templates, and stakeholder registers can be reused.