Why a Process, Not Just a Board
An ethics board (Article 7) is necessary but not sufficient. Without a process, the board sees only what people choose to bring to it, at points in the lifecycle they choose to expose. The process is what ensures that every consequential AI use case enters the board’s view at the right moment, with the right artifacts, and with enough lead time for the board’s input to shape the outcome.
A well-designed process satisfies five criteria. It is complete — every consequential AI use case enters it. It is triaged — the depth of review is proportional to the stakes. It is timed — review occurs at points where decisions are still open. It is artifact-driven — review depends on standardized documentation, not ad-hoc presentations. And it is auditable — decisions, conditions, and dissents are captured for later inspection.
The OECD AI Principles, the EU HLEG Trustworthy AI requirements, and the NIST AI Risk Management Framework all assume the existence of an operational review process. See https://oecd.ai/en/ai-principles, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, and https://www.nist.gov/itl/ai-risk-management-framework. The Singapore IMDA Model AI Governance Framework provides an example process structure; see https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework.
The Five Stages
A workable end-to-end process has five stages. The stages are sequential but iterative — earlier stages may be revisited as later stages reveal information.
Stage 1: Intake
Intake is the front door. Every AI use case proposed within the organization enters the process here. The intake form is brief — typically two to four pages — and captures enough to enable triage. The minimum content includes the use case name, the proposing team, the affected populations, the data sources contemplated, the decision the system would make or inform, the operating volume, and the proposed oversight model (Article 5).
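The minimum intake content above can be sketched as a simple record. This is an illustrative data structure, not a prescribed schema — field names and the example values are assumptions for the sketch.

```python
from dataclasses import dataclass

# Hypothetical intake record; fields mirror the minimum intake content,
# but the names and example values are illustrative, not prescribed.
@dataclass
class IntakeRecord:
    use_case_name: str
    proposing_team: str
    affected_populations: list[str]
    data_sources: list[str]
    decision_supported: str   # the decision the system would make or inform
    operating_volume: int     # e.g. decisions per month
    oversight_model: str      # per Article 5, e.g. "human-in-the-loop"

record = IntakeRecord(
    use_case_name="Invoice fraud screening",
    proposing_team="Finance Engineering",
    affected_populations=["vendors"],
    data_sources=["invoice history", "vendor master data"],
    decision_supported="flag invoices for manual fraud review",
    operating_volume=40_000,
    oversight_model="human-in-the-loop",
)
```

A record this small is the point: if intake takes longer than this to complete, it will be bypassed.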
The intake form should be the path of least resistance. If it is faster to bypass the form than to complete it, the form will be bypassed. Best practice integrates the intake into the existing project initiation process so that completing it is part of getting a project approved at all.
The output of intake is a triage decision: light review, standard review, or enhanced review. The triage criteria should be published and applied consistently, not invented case by case.
Stage 2: Triage and Risk Classification
Triage assigns the use case to one of three (or more) review tracks based on its risk profile. A workable three-track design:
Light review for use cases that are low-stakes, well-characterized, and similar to previously approved cases. Light review can be conducted by an ethics function staffer using a checklist; board involvement is by exception only.
Standard review for use cases that are moderate-stakes, novel in some respect, or involving sensitive data or affected populations. Standard review involves the full ethics board at the design gate (Stage 3) and the pre-deployment gate (Stage 5) but typically completes within a few weeks.
Enhanced review for use cases that are high-stakes (the domains of Article 9), involve significant workforce impact (Article 11), or are novel enough to require external input. Enhanced review may extend over months, may include external stakeholder engagement (Article 8), and may require additional artifacts beyond the standard set.
The triage criteria typically include: the consequences of an erroneous decision for an affected individual; the size of the affected population; the presence of protected classes among the affected population; the regulatory environment of the use case; the maturity of the proposed technical approach; and the reversibility of the decisions involved.
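The criteria above can be expressed as a published rubric rather than a case-by-case judgment. A minimal sketch follows; the point weights and cut-offs are illustrative assumptions, not values the methodology prescribes.

```python
# Illustrative triage rubric: each criterion contributes points toward a risk
# score, and the score maps to a review track. Weights and cut-offs are
# assumptions for this sketch, not prescribed thresholds.
def triage(consequence: str, population_size: int, protected_classes: bool,
           regulated: bool, mature_approach: bool, reversible: bool) -> str:
    score = {"low": 0, "moderate": 2, "severe": 4}[consequence]
    if population_size > 10_000:   # size of the affected population
        score += 2
    if protected_classes:          # protected classes among those affected
        score += 2
    if regulated:                  # regulated domain
        score += 1
    if not mature_approach:        # immature technical approach
        score += 1
    if not reversible:             # decisions hard to reverse
        score += 2
    if score >= 7:
        return "enhanced"
    if score >= 3:
        return "standard"
    return "light"
```

Publishing the rubric, whatever its exact weights, is what makes triage consistent and contestable.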
Stage 3: Design Review
Design review occurs after the use case has been triaged and the team has begun substantive design work, but before significant resources have been committed. The output of design review is a go/conditional/no-go decision on continuing the project, with documented conditions where conditional approval is granted.
The design review packet typically includes:
- A draft model card (Article 6) covering intended use, in-scope and out-of-scope populations, expected performance metrics, and known risks.
- A draft datasheet for any novel datasets to be used.
- A fairness analysis plan specifying which fairness definitions will be measured and what thresholds will trigger action.
- An explainability plan specifying what explanations will be produced for which audiences (Article 4).
- An oversight design specifying the chosen model and the rationale (Article 5).
- A stakeholder engagement plan for affected communities (Article 8).
- A workforce impact assessment if applicable (Article 11).
- A privacy plan if personal data is involved (Article 10).
The board’s design review converts this packet into a decision. If approved with conditions, the conditions become contractual: the project cannot proceed past defined milestones until each condition is verified.
Stage 4: Build and Verification
Stage 4 is the development period, which proceeds against the conditions set at Stage 3. The ethics function is not generally a participant in day-to-day development but is engaged as the conditions are completed. A working dashboard tracks each open condition, its evidence requirements, and its verifier.
A common failure pattern is for conditions to slip during development under schedule pressure, with the team intending to address them “before launch.” This pattern usually ends with conditions being abandoned at the launch gate. Mitigation: verify conditions as they are completed, not at the end. Each condition should have a defined verifier (often someone outside the build team) and a defined evidence artifact.
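The verify-as-you-go discipline can be made concrete in the condition dashboard itself: a condition is closed only when its named verifier records an evidence artifact. The structure below is a sketch with hypothetical names, not a prescribed tool.

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of a condition-tracking record: each design-review condition carries
# a verifier outside the build team and an evidence artifact. Names are
# illustrative assumptions.
@dataclass
class Condition:
    description: str
    verifier: str                   # role outside the proposing team
    evidence: Optional[str] = None  # artifact reference once produced
    verified: bool = False

    def verify(self, evidence: str) -> None:
        # Conditions close as they are completed, not at the launch gate,
        # and never without an evidence artifact.
        if not evidence:
            raise ValueError("verification requires an evidence artifact")
        self.evidence = evidence
        self.verified = True

def open_conditions(conditions: list[Condition]) -> list[str]:
    """Dashboard view: conditions still blocking the next milestone."""
    return [c.description for c in conditions if not c.verified]

conds = [
    Condition("Complete disaggregated fairness evaluation",
              verifier="Fairness specialist"),
    Condition("Document data retention schedule", verifier="Privacy office"),
]
conds[0].verify(evidence="fairness_eval_report_v1")
```

The dashboard view is what keeps slipped conditions visible during development rather than surfacing them all at once at the launch gate.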
Stage 5: Pre-Deployment Sign-Off
Pre-deployment sign-off occurs after the model has been built and tested but before it is exposed to the affected population. The sign-off packet builds on the design review packet:
- The completed model card with measured (not predicted) performance metrics, including disaggregated performance by group.
- The completed bias audit (Article 3) with metric values and threshold comparisons.
- Documentation of all design review conditions and their verification status.
- The completed system card if the deployment is part of a customer-facing product.
- The deployment runbook including monitoring, alerting, escalation paths, and incident response procedures.
The board’s sign-off decision is the final go/no-go for production deployment. A decision to proceed is also a commitment to ongoing oversight: monitoring, periodic re-review, and incident response.
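The go/no-go logic at this gate is mechanical in one respect: no open conditions and no bias-audit metric outside threshold. A minimal sketch, with metric names and threshold values assumed for illustration:

```python
# Illustrative pre-deployment gate: sign-off requires every design-review
# condition verified and every audited bias metric within its threshold.
# Metric names (e.g. "dp_gap") and threshold values are assumptions.
def ready_for_signoff(conditions_verified: list[bool],
                      bias_metrics: dict[str, float],
                      thresholds: dict[str, float]) -> bool:
    if not all(conditions_verified):
        return False
    # Each audited metric (e.g. a demographic-parity gap) must not
    # exceed its threshold from the fairness analysis plan.
    return all(bias_metrics[m] <= thresholds[m] for m in thresholds)
```

The board's judgment still governs the overall decision; a check like this only guarantees that the judgment is exercised on a complete packet.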
Post-Deployment Stages
While the formal process ends at Stage 5, two additional stages structure the system’s lifecycle.
Stage 6: Continuous Monitoring. Production systems are monitored against the metrics defined in Stage 3 and verified at Stage 5. Threshold breaches trigger re-review. The monitoring infrastructure is typically the same as the model performance monitoring infrastructure (Article 3 covers fairness monitoring specifically).
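A threshold-breach check of this kind reduces to comparing observed production metrics against the bounds fixed at design review. The sketch below assumes illustrative metric names and bounds:

```python
# Sketch of a breach check for continuous monitoring: each production metric
# is compared against the bounds fixed at Stage 3 and verified at Stage 5.
# Metric names and bound values here are illustrative assumptions.
def breaches(observed: dict[str, float],
             bounds: dict[str, tuple[float, float]]) -> list[str]:
    """Return metrics outside their [lo, hi] bounds; any breach triggers re-review."""
    out = []
    for metric, (lo, hi) in bounds.items():
        value = observed.get(metric)
        # A missing metric is itself a breach: monitoring has gone dark.
        if value is None or not (lo <= value <= hi):
            out.append(metric)
    return out
```

Treating a missing metric as a breach is a deliberate design choice: silent monitoring failures should escalate the same way threshold violations do.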
Stage 7: Periodic Re-Review. High-stakes systems are re-reviewed on a defined cadence — typically annually — even in the absence of triggering incidents. The re-review confirms that the system’s intended use, its operating context, and the affected population are still consistent with the conditions of original approval, and updates documentation accordingly.
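The cadence check itself is trivial, which is the point: re-review should trigger on the calendar, not on memory. A sketch, with the annual default as an assumption:

```python
from datetime import date, timedelta

# Illustrative cadence check: a high-stakes system is due for re-review once
# the cadence (assumed annual here) has elapsed since its last review.
def rereview_due(last_review: date, today: date,
                 cadence_days: int = 365) -> bool:
    return today - last_review >= timedelta(days=cadence_days)
```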
Roles and Accountabilities
Five roles must be filled for the process to function.
The proposing team is accountable for completing the intake form, producing the design review packet, executing against approved conditions, and maintaining documentation throughout the lifecycle.
The ethics function owns the process — administering intake, conducting triage, scheduling reviews, drafting decisions for board approval, and tracking conditions to closure. The ethics function is a small specialist team typically reporting to the Chief Ethics Officer or equivalent.
The ethics board (Article 7) makes the substantive go/no-go decisions at design review and sign-off, and adjudicates difficult triage cases.
Independent verifiers verify completion of conditions. The verifier role is typically distributed — security verifies security conditions, privacy verifies privacy conditions, the fairness specialist verifies fairness conditions. The verifier should not be a member of the proposing team.
The accountable executive has decision authority for proceeding when the board is split, for adjudicating disputes between proposing teams and the ethics function, and for ultimate accountability when the system is deployed. The accountable executive is named per system, not per organization.
Resistance to Bypass
A process that can be bypassed will be bypassed. Five mitigations reduce bypass risk.
Integration with project initiation. The intake form is not a separate ethics process; it is part of how projects get approved at all. Bypassing intake means bypassing project approval, which executive sponsors will not condone.
Procurement integration. Vendor-supplied AI systems are subject to the same review as internally developed systems, with the procurement process providing the trigger. The Algorithmic Accountability Act would extend this principle to federal procurement; see https://www.congress.gov/bill/118th-congress/house-bill/5628.
Audit visibility. Periodic audits compare the AI estate (an inventory of all production AI systems) against the ethics function’s approval records. Systems in production without an approval record become an accountability conversation, not just a compliance gap.
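At its core, the audit comparison is a set difference between the estate inventory and the approval records. A minimal sketch with hypothetical system names:

```python
# Audit sketch: systems in production without an approval record surface
# for an accountability conversation. System names are hypothetical.
def unapproved_systems(ai_estate: set[str], approval_records: set[str]) -> set[str]:
    return ai_estate - approval_records

estate = {"credit-scoring-v2", "resume-screener", "chat-summarizer"}
approved = {"credit-scoring-v2", "chat-summarizer"}
```

The check is only as good as the estate inventory, which is why estate discovery is itself an audit activity.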
Executive sponsorship. The accountable executive enforces the process within their function. Where executives undermine the process, the ethics function escalates to the board (the corporate board, not the ethics board). Repeated escalations against a single executive surface a leadership issue.
Cultural reinforcement. The organization’s leaders refer to the process publicly, recognize teams that engage it well, and treat ethics review as a sign of mature engineering rather than an obstacle. Process discipline depends on cultural support.
The IEEE 7000-2021 standard provides procedural guidance applicable to several elements of this process; see https://standards.ieee.org/ieee/7000/6781/. The Partnership on AI publishes case studies of operational ethics processes; see https://partnershiponai.org/. The UNESCO Recommendation on the Ethics of AI provides an international reference; see https://www.unesco.org/en/artificial-intelligence/recommendation-ethics. The World Economic Forum’s responsible AI working groups provide additional procedural references; see https://www.weforum.org/topics/artificial-intelligence-and-machine-learning.
Maturity Indicators
- Level 1: No defined process; ethics review is ad-hoc and inconsistent.
- Level 2: Process exists but is inconsistently applied; coverage of the AI estate is partial.
- Level 3: Process applied to all high-stakes use cases; intake-to-sign-off documented; conditions tracked to closure.
- Level 4: Process integrated with procurement, project initiation, and MLOps; coverage approaches 100% of consequential systems; periodic audits verify completeness.
- Level 5: Process is publicly described; cycle times are measured and improved; the organization shares process artifacts with peers.
Practical Application
Three first actions. First, define the process on a single page — the five stages, the gates, the artifacts, the roles. The single-page version is what people will actually read. Second, integrate intake with the existing project approval process, making completion of the intake form a prerequisite for project funding. Third, run the process on the next three new AI use cases, however immature the process still is, and learn by doing. Iterate the process based on what those three cases reveal.
Looking Ahead
Article 15 — the closing article of Module 1.11 — addresses how an ethics program measures its own effectiveness through indicators, audits, and reporting. A program that cannot demonstrate its effectiveness is a program that cannot be defended.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.