AITB M1.2-Art05 v1.0 Reviewed 2026-04-06 Open Access

Gap Analysis and Remediation Design

Gap Analysis and Remediation Design — Transformation Design & Program Architecture — Foundation depth — COMPEL Body of Knowledge.

Gap Prioritisation — Impact × Feasibility

  • Act now — high impact + high feasibility: sprint-level action
  • Escalate to sponsor — high impact + low feasibility: programme-level ask
  • Queue — low impact + high feasibility: backlog for capacity
  • Defer — low impact + low feasibility: watch list

Figure 293. Gap priority emerges from the intersection of impact and feasibility. Act-now gaps become remediation ask line items; defer gaps become watch-list entries.

COMPEL Specialization — AITB-TRA: AI Transformation Readiness Specialist Article 5 of 6


Every readiness assessment ends with a gap list. A good assessment turns that gap list into a sequenced, resourced remediation plan with honest effort estimates and a clear sponsor ask. The worst readiness engagements stop at the gap list. They produce a catalog of twenty deficiencies, hand it to the sponsor, and walk away. The sponsor, faced with twenty items and no prioritization, either does nothing or picks the three easiest — neither of which matches what the organization needs. This article teaches the disciplined translation from gap list to remediation roadmap: prioritization by impact and feasibility, sequencing against change capacity, dependency mapping, and the sponsor ask that makes the plan executable.

From gap list to prioritized gaps

The gap list is the artifact at the end of Article 3 and Article 4’s work. For each of the twenty readiness dimensions, the specialist has recorded the current score, the target score the sponsor wants, the gap between them, and the evidence supporting the score. For a typical engagement this produces ten to fifteen live gaps — dimensions where current and target differ by a level or more. Prioritization converts the ten to fifteen gaps into two lists: the gaps that will be addressed in the current planning horizon, and the gaps that will be deferred with explicit rationale.

The gap analysis step uses an impact-by-feasibility model. Impact is scored against the organization’s strategic objectives for the AI portfolio, not against generic “best practice”. A gap in D11 (data foundation readiness) is high-impact if the prioritized use cases depend on the missing data capability; it is lower-impact if the prioritized use cases are well served by existing data. Feasibility is scored against the organization’s current change capacity, available resources, and political context. A gap in D16 (governance readiness) may be conceptually easy to close — write a policy, convene a board — but politically difficult to close in an organization where board-level attention is already consumed.

Impact and feasibility are each scored from one to five, producing a 5×5 grid. High impact / high feasibility gaps move first. High impact / low feasibility gaps are named as strategic priorities that require sponsor intervention to become feasible. Low impact / high feasibility gaps are deferred unless trivial — there is always a temptation to close easy gaps to demonstrate progress, and a disciplined specialist resists the temptation when the closure does not advance the target state. Low impact / low feasibility gaps are deferred with explicit rationale and scheduled for re-scoring in the next cycle.

The failure mode of impact-feasibility scoring is false precision. A 3.7 on impact is not meaningfully different from a 3.4. The specialist uses the grid to sort gaps into bands — “act now”, “escalate”, “queue”, “defer” — rather than to produce a false ranking. Sponsors respond to band labels; they do not respond to ordinal fractions that imply a rigor the model does not deliver.
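As a minimal sketch of the banding step, the snippet below sorts gaps into the four Figure 293 bands from 1-5 impact and feasibility scores. The Gap fields mirror the gap-list columns described above; the threshold of four-or-above counting as "high", the example dimensions, and their scores are illustrative assumptions, not values prescribed by the method.

```python
from dataclasses import dataclass

HIGH = 4  # assumed cut-off for "high"; the article does not fix a threshold


@dataclass
class Gap:
    dimension: str    # e.g. "D11 data foundation readiness"
    current: str      # current maturity level
    target: str       # target level the sponsor wants
    impact: int       # 1-5, scored against the AI portfolio's objectives
    feasibility: int  # 1-5, scored against change capacity and politics


def band(gap: Gap) -> str:
    """Sort a gap into one of the four bands from Figure 293."""
    hi_impact = gap.impact >= HIGH
    hi_feasibility = gap.feasibility >= HIGH
    if hi_impact and hi_feasibility:
        return "act now"    # sprint-level action
    if hi_impact:
        return "escalate"   # programme-level ask requiring sponsor intervention
    if hi_feasibility:
        return "queue"      # backlog for spare capacity
    return "defer"          # watch list, re-scored in the next cycle


gaps = [
    Gap("D11 data foundation", "emerging", "scaling", impact=5, feasibility=4),
    Gap("D16 governance", "emerging", "defined", impact=4, feasibility=2),
]
for g in gaps:
    print(f"{g.dimension}: {band(g)}")   # act now / escalate
```

Returning band labels rather than sorting on a fractional composite score reflects the warning against false precision: the output is coarse on purpose.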

Horizons and sequencing

Remediation is then sequenced across three horizons: 0-90 days, 3-12 months, and 12-24 months. The horizon assignment is informed by three factors: the gap’s urgency (does the pending AI portfolio need this capability now?), the gap’s prerequisites (does closing this gap require other gaps to close first?), and the organization’s change capacity (how much can be absorbed in the 0-90 window?).

The 0-90 day horizon is for gaps that block immediate AI portfolio decisions or that are small enough to close inside a single sprint cadence. Common 0-90 items include sponsor-strengthening interventions, governance-board establishment, risk-classification policy writing, and initial literacy baselining. The horizon is short enough that momentum is visible to the sponsor, and the gaps assigned to it are chosen deliberately to produce early sponsor confidence.

The 3-12 month horizon covers gaps whose closure requires coordinated work across functions, substantial process redesign, or meaningful hiring. Data foundation readiness improvements, CoE establishment or refresh, and portfolio-wide governance operationalization typically live here. The horizon matches the COMPEL twelve-week cycle cadence — a 3-12 month program is three to four COMPEL cycles, each delivering a recognizable advance.

The 12-24 month horizon covers gaps whose closure requires organizational or cultural work that cannot be accelerated without inducing failure. Cultural-disposition shifts, platform-migration programs, and regulatory-posture work typically live here. Assigning a 12-24 month horizon is not deferment — it is honesty about pace. The sponsor who hears “governance culture will reach target readiness in eighteen months with these interventions” can plan. The sponsor who hears “we will close the governance gap” without a horizon cannot.
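The horizon assignment can be read as a small decision rule over the three factors named above. The sketch below is one possible encoding; the ordering of the checks and the boolean inputs are assumptions made for illustration, and in practice the call is weighed with the sponsor rather than applied mechanically.

```python
def assign_horizon(blocks_portfolio_now: bool,
                   open_prerequisites: int,
                   fits_sprint_cadence: bool,
                   needs_cultural_or_platform_work: bool) -> str:
    """Assign a remediation horizon from urgency, prerequisites, and capacity."""
    if needs_cultural_or_platform_work:
        # Pace-limited organisational or cultural work: honesty, not deferment.
        return "12-24 months"
    if blocks_portfolio_now and open_prerequisites == 0 and fits_sprint_cadence:
        # Blocks an immediate portfolio decision and fits a single sprint cadence.
        return "0-90 days"
    # Coordinated cross-functional work spanning several COMPEL cycles.
    return "3-12 months"


print(assign_horizon(True, 0, True, False))    # 0-90 days
print(assign_horizon(True, 2, False, False))   # 3-12 months
print(assign_horizon(False, 0, False, True))   # 12-24 months
```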

Dependency mapping

Dependencies are the quiet killer of remediation roadmaps. A roadmap that closes D11 (data foundation) after D14 (observability) fails because observability requires data; a roadmap that closes D16 (governance) before D18 (control framework) fails because controls need governance to govern them. The specialist maps each gap’s upstream and downstream dependencies before sequencing.

Three dependency types to record. Hard dependencies, where gap B cannot close until gap A closes — for example, an audit-readiness gap (D20) depends on a control-framework gap (D18) closing first. Soft dependencies, where gap B can close without gap A but closes much more effectively with it — for example, a change-capacity gap (D08) softly depends on the sponsor-strength gap (D02). Shared-resource dependencies, where gaps A and B compete for the same scarce resource (typically the same leader’s attention or the same small team’s delivery capacity).

A dependency graph produced at this step surfaces two common patterns the sponsor needs to see. The “deep chain” pattern — five or six gaps with hard dependencies in a line — means the organization cannot parallelize its remediation the way a less careful analysis might suggest. The “resource contention” pattern — three otherwise-feasible gaps all dependent on the same chief architect — means the roadmap is over-committed and the sponsor has to choose.
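Both patterns can be checked mechanically once dependencies are recorded. The sketch below builds a hard-dependency graph and reports the deepest chain and any shared-resource contention; the gap identifiers follow the dimensions named above, while the owner assignments and edge set are hypothetical, and the chain calculation assumes the recorded dependencies form a cycle-free graph.

```python
from collections import defaultdict

# Hard dependencies as (upstream, downstream) pairs: upstream must close first.
hard = [("D16", "D18"), ("D18", "D20"), ("D11", "D14")]
# Scarce-resource ownership per gap; role names are illustrative only.
owners = {"D11": "chief architect", "D14": "chief architect",
          "D18": "governance lead", "D20": "audit lead"}


def deepest_chain(edges):
    """Length of the longest hard-dependency chain (the "deep chain" pattern)."""
    children = defaultdict(list)
    for up, down in edges:
        children[up].append(down)

    def depth(node):
        return 1 + max((depth(c) for c in children[node]), default=0)

    roots = {up for up, _ in edges} - {down for _, down in edges}
    return max(depth(r) for r in roots)


def contention(owners):
    """Resources claimed by more than one gap (the "resource contention" pattern)."""
    load = defaultdict(list)
    for gap, owner in owners.items():
        load[owner].append(gap)
    return {o: g for o, g in load.items() if len(g) > 1}


print(deepest_chain(hard))   # 3: D16 -> D18 -> D20 cannot be parallelised
print(contention(owners))    # {'chief architect': ['D11', 'D14']}
```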

The UK NHS AI Lab Ethics Initiative offers a useful public-sector example of dependency-aware remediation sequencing.1 The NHS AI Lab operated across a multi-stakeholder landscape — NHSX, NHS England, the Medicines and Healthcare products Regulatory Agency, academic research partners, and clinical delivery organizations — where hard dependencies between governance, data, and clinical-deployment workstreams were substantial. The published roadmap shows the sequencing logic in public, and the independent evaluation work provides corroboration of which dependencies held and which slipped. A readiness specialist can read the Ethics Initiative’s roadmap critically and identify the dependency logic the team used. The exercise is a useful calibration for the specialist’s own roadmap-writing practice.

The reference instruments

Two public reference instruments deserve the specialist’s attention as calibration material.

The Singapore Model AI Governance Framework 2nd edition, together with the Implementation and Self-Assessment Guide (2020), published by the Info-communications Media Development Authority and the Personal Data Protection Commission, contains a practitioner-grade gap-assessment instrument.2 The Guide’s structure — ethics objectives, operational practices, self-assessment questions, implementation recommendations — is a template the specialist can read against COMPEL’s own rubric-to-roadmap flow. The Singapore instrument is technology-neutral, sector-agnostic, and designed for organizations of varied size; its translation from assessment to implementation is one of the cleaner public examples of the pattern.

The UK Government AI Playbook (February 2025) offers a parallel public-sector reference with stronger prescriptive guidance, reflecting the UK Cabinet Office’s regulatory posture.3 A specialist who has read both Singapore’s and the UK’s instruments will be in a better position to design a bespoke roadmap for a private-sector organization that sits between the two public frameworks’ stances.

Effort estimation with honesty

Remediation plans succeed or fail on the honesty of their effort estimates. Three estimation failures to avoid.

First, the “document-it-and-you’re-done” estimate. Writing a policy takes a week; operationalizing it across the organization takes a year. The specialist separates the writing from the operationalizing and estimates each honestly. A remediation item that says “write governance policy” should be scoped at the writing only; a separate item scopes the operationalization.

Second, the “one-person hero” estimate. Closing a gap sometimes requires the sustained attention of a key leader who is already at capacity. The estimate has to reflect the leader’s real availability, not the calendar fiction.

Third, the “happy path” estimate. Remediation encounters resistance, setbacks, and corrections. The specialist adds a realistic contingency — twenty to thirty percent is typical for readiness-remediation work — and makes it visible rather than hiding it.

Effort estimates are presented in three components: elapsed time, effort hours, and critical-resource requirement. A gap closure that takes six elapsed weeks, two hundred effort hours, and one senior engineer’s sustained attention is a different ask than one that takes six elapsed weeks, two hundred effort hours, and distributable contributor time. The sponsor needs to see both dimensions.
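A simple record can keep the three components visible and the contingency explicit rather than buried in the base figure. The sketch below is illustrative: the hour and week values are hypothetical, the 25% contingency is an assumed point within the 20-30% range mentioned above, and the split between writing and operationalising follows the first estimation failure.

```python
from dataclasses import dataclass


@dataclass
class EffortEstimate:
    item: str
    elapsed_weeks: float       # calendar duration
    effort_hours: float        # person-hours before contingency
    critical_resource: str     # a named scarce role, or "distributable"
    contingency: float = 0.25  # 20-30% is typical for readiness remediation

    def hours_with_contingency(self) -> float:
        # Contingency is surfaced explicitly, not hidden in the base estimate.
        return self.effort_hours * (1 + self.contingency)


# Writing and operationalising are scoped as separate items (hypothetical figures).
write_policy = EffortEstimate("write governance policy", 1, 30, "governance lead")
operationalise = EffortEstimate("operationalise governance policy", 26, 400,
                                "distributable contributor time")
for est in (write_policy, operationalise):
    print(est.item, est.hours_with_contingency())   # 37.5 and 500.0 hours
```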

The sponsor ask

A remediation plan without a clear sponsor ask is a suggestion. The readiness specialist writes the sponsor ask explicitly: the decisions the sponsor must make, the resources the sponsor must authorize, the constituencies the sponsor must engage, and the cadence at which the sponsor must review progress.

A well-written sponsor ask contains five elements: the decisions required in the next thirty days, the budget or resource authorization required in the next sixty days, the political engagements the sponsor commits to across the first quarter, the review cadence with measurable progress indicators, and the explicit escalation path when remediation runs into obstacles the sponsor must clear. The ask is short — one page at most. It is the back cover of the readiness report and the most-read page of the entire document.

Designing a 90-day sprint

The final skill Article 5 asks a learner to practice is designing a 90-day remediation sprint for a specific dimension. Consider a gap in D11 (data foundation readiness) scored at “emerging” with a target of “scaling”. The ninety-day sprint is structured to produce a specific, verifiable advance — not to solve the whole dimension, but to move the score by one level with evidence.

A well-designed sprint identifies the two or three sub-capabilities within the dimension that most constrain the pending AI portfolio, scopes the remediation against those sub-capabilities specifically, assigns ownership to a named leader with confirmed capacity, defines the evidence that will support the re-scoring (a data-quality dashboard, a lineage map for the three highest-priority use cases, an access-governance procedure tested on real data), and schedules the re-scoring for day 90 with the specialist returning for verification. The sprint delivers a measurable advance; the next sprint handles the next sub-capabilities; the dimension closes over three or four sprints rather than one.
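The sprint design described above can be captured as a small plan record that forces a named owner, a scoped set of sub-capabilities, the evidence for re-scoring, and a fixed day-90 verification date. The sketch below uses hypothetical owner and start-date values; the evidence items follow the D11 example in the text.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta


@dataclass
class RemediationSprint:
    dimension: str
    sub_capabilities: list[str]        # the 2-3 constraints on the pending portfolio
    owner: str                         # named leader with confirmed capacity
    evidence_for_rescoring: list[str]  # what the specialist verifies on day 90
    start: date
    rescore_on: date = field(init=False)

    def __post_init__(self):
        self.rescore_on = self.start + timedelta(days=90)


sprint = RemediationSprint(
    dimension="D11 data foundation readiness (emerging, target scaling)",
    sub_capabilities=["data quality monitoring",
                      "lineage for the top three use cases",
                      "access governance"],
    owner="head of data platform",     # hypothetical role
    evidence_for_rescoring=["data-quality dashboard",
                            "lineage map for the three highest-priority use cases",
                            "access-governance procedure tested on real data"],
    start=date(2026, 5, 1),            # hypothetical start date
)
print(sprint.rescore_on)               # 2026-07-30, the day-90 re-scoring
```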

Summary

Gap analysis converts the twenty-dimension score set into an impact-feasibility banded gap list. Sequencing assigns gaps to 0-90 / 3-12 / 12-24 month horizons. Dependency mapping surfaces hard, soft, and resource-contention dependencies that otherwise sink the roadmap. Effort estimation treats writing and operationalizing separately, reflects real availability, and includes honest contingency. The sponsor ask is explicit and one page. Public reference instruments from Singapore and the UK calibrate the specialist’s practice. Article 6 closes the method with the readiness report itself and the go/wait/redesign recommendation that the roadmap supports.


Cross-references to the COMPEL Core Stream:

  • EATP-Level-2/M2.3-Art02-Gap-Analysis-and-Initiative-Identification.md — foundational gap-analysis method extended here with readiness-specific sequencing
  • EATP-Level-2/M2.3-Art03-Initiative-Sequencing-and-Dependencies.md — initiative sequencing and dependency method applied to readiness remediation
  • EATP-Level-2/M2.3-Art07-Risk-Adjusted-Roadmap-Design.md — risk-adjusted roadmap design informing the horizon and sequencing discipline

Q-RUBRIC self-score: 91/100

© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. NHS AI Lab, “Transformation Directorate AI Lab programme”, https://transform.england.nhs.uk/ai-lab/ (accessed 2026-04-19).

  2. Personal Data Protection Commission Singapore, “Model AI Governance Framework (Second Edition) and Implementation and Self-Assessment Guide”, https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework (accessed 2026-04-19).

  3. UK Government, “AI Playbook for the UK Government” (February 2025), https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government (accessed 2026-04-19).