COMPEL Specialization — AITB-TRA: AI Transformation Readiness Specialist Article 6 of 6
The final deliverable of a readiness engagement is not the assessment. It is the report the sponsor acts on inside one meeting, and the thirty days that follow. A readiness report that the sponsor cannot read in fifteen minutes, cannot defend to peers in thirty, and cannot convert to a decision in forty-five is a failed report — regardless of the quality of the analysis behind it. This article teaches the specialist to structure that report, to make the three-way call it carries, to defend that call under challenge, and to plan the thirty days that follow the decision.
The report’s one-meeting shape
A readiness report has to land in a single executive meeting. That constraint drives its structure. The report opens with the recommendation, holds the evidence in the middle, and closes with the sponsor ask. It does not bury the recommendation behind sixty pages of analysis — the sponsor reads the recommendation first, reads enough of the evidence to be confident, and moves to the ask. Analytical completeness lives in appendices; the main body is for action.
Five sections compose the main body: the recommendation, stated in one sentence with three supporting bullets; the readiness snapshot, presenting the twenty-dimension scores on a single visual — typically the four-pillar grid introduced in Article 2; the three-to-five critical gaps, each with its current score, target score, evidence, and horizon; the remediation plan, presented as the sequenced horizons from Article 5; and the sponsor ask, presented as the one-page close.
Four appendices support the main body: detailed dimension scoring with evidence for every level; the stakeholder landscape map and change-capacity assessment; the interview and evidence register; and the methodology and limitations statement. An appendix is not a dumping ground — each one is referenced specifically from the main body, so a reader who wants more depth on dimension D11 knows to turn to Appendix A.
The report runs twelve to twenty pages total — roughly four to six pages of main body plus the appendices. Longer than twenty pages signals that the specialist was unable to make the hard choices the report exists to make. Shorter than twelve usually signals that the evidence chain is not documented thoroughly enough to defend against challenge.
The three recommendations
The readiness recommendation is three-way, not binary. The three-way structure is the article’s most important pedagogical point: sponsors who hear only “go” or “no-go” tend to choose “go” regardless of evidence. A three-way recommendation forces a more honest conversation and opens remediation paths that a binary frame does not.
Go. The organization’s readiness supports the proposed initiative at the proposed scale and pace. Gaps exist but are small enough to close in parallel with execution, without blocking progress. The recommendation commits the sponsor to authorization, funding, and the sustained engagement the readiness dimensions require. “Go” is the hardest recommendation to defend without sycophancy because it is what the sponsor usually wants to hear. A specialist who recommends “go” writes the evidence chain especially carefully, because a failed initiative under a “go” recommendation is a failure of the readiness practice.
Wait. The organization’s readiness does not support the proposed initiative at the proposed scale and pace, but is close enough that a focused remediation period will close the critical gaps. The recommendation commits the sponsor to a defined remediation window — typically ninety days to six months — with specific success criteria, and a return visit from the specialist to re-score the critical gaps and make the go/no-go call at the end of the window. “Wait” is the recommendation sponsors most resist, because it feels like delay; the specialist writes it with an explicit case for why delay produces a better outcome than haste.
Redesign. The organization’s readiness does not support the proposed initiative as designed, and no realistic remediation window will close the critical gaps at the current scope. The recommendation asks the sponsor to reshape the initiative — reduce scope, change scale, shift sequence, change the sponsor constellation — so that the readiness conditions can be met. “Redesign” is the most creative recommendation and the hardest to deliver without alienating the sponsor. The specialist presents options rather than a single redesigned initiative — three or four reshaped versions with their readiness implications — and lets the sponsor choose.
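The three-way call can be sketched as a simple decision rule over the critical gaps. The sketch below is a minimal illustration, not COMPEL methodology: the gap record fields, the one-month parallel-closure threshold, and the six-month remediation window are all hypothetical placeholders standing in for the specialist's judgment.

```python
from dataclasses import dataclass

@dataclass
class Gap:
    dimension: str         # e.g. "D11"
    score: int             # current rubric level
    target: int            # required rubric level
    horizon_months: float  # estimated time to close the gap
    structural: bool       # True if no remediation window can close it

def recommend(gaps, max_wait_months=6.0):
    """Return 'go', 'wait', or 'redesign' from the critical gaps.

    Illustrative thresholds: gaps closable in parallel with execution
    (short horizon) permit 'go'; gaps closable inside the remediation
    window permit 'wait'; structural gaps force 'redesign'.
    """
    critical = [g for g in gaps if g.score < g.target]
    if any(g.structural for g in critical):
        return "redesign"
    if all(g.horizon_months <= 1.0 for g in critical):
        return "go"        # small gaps close in parallel with execution
    if all(g.horizon_months <= max_wait_months for g in critical):
        return "wait"      # a defined remediation window closes the gaps
    return "redesign"      # no realistic window at the current scope

gaps = [Gap("D07", 2, 3, 4.0, False), Gap("D11", 1, 3, 5.0, False)]
print(recommend(gaps))  # -> wait
```

The point of the sketch is the shape of the reasoning, not the thresholds: "go" is a claim about parallel closure, "wait" is a claim about a bounded window, and "redesign" is what remains when neither claim holds.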
The McDonald’s IBM drive-thru AI pilot rollback of June 2024 is a teaching case in what “wait” or “redesign” might have prevented.1 McDonald’s ended its IBM voice-ordering pilot after accuracy failures visible enough to generate public commentary. A readiness assessment performed before scaling might have surfaced gaps — in training-data coverage for the voice-ordering domain, in operational integration with the franchise network, in escalation design for the accuracy failures that were always going to occur at scale. The exercise of running a retrospective readiness-recommendation call on McDonald’s-IBM is useful for the learner: which recommendation would have fit the case, and what evidence would have defended it?
The Gap Inc. AI restart of 2024 is the instructive counterpart.2 Gap’s CEO publicly announced in the Q2 2024 earnings call that the company was structuring a renewed AI effort after a prior stalled program. Public reporting since then suggests the restart was designed with explicit attention to the conditions that had produced the prior stall — sponsorship architecture, operating-model clarity, and use-case focus. The Gap case looks, in public, like the output of a “redesign” recommendation: not a “no”, but a reshaped “yes” designed to meet the readiness conditions the prior attempt did not. Whether the redesigned program will ultimately succeed is a question for the next readiness cycle; the restart’s shape is the lesson.
Classifying past engagements
Article 6 asks the learner to classify four past engagements into the three recommendation types given short descriptions. The exercise is harder than it sounds. Each description contains ambiguity — an organization with strong governance and weak change capacity sits between “wait” and “redesign” depending on whether the change-capacity gap is a remediation candidate or a structural feature. A strong specialist names the ambiguity, states the recommendation, and is explicit about the conditions under which a different recommendation would become correct. The classification is not an answer key exercise; it is a reasoning exercise.
Four archetypes the practitioner encounters repeatedly. First, the “sponsor-strong, foundations-weak” organization — clear executive will, weak data and process foundations. Typically a “wait” recommendation with a foundations-focused remediation window. Second, the “foundations-strong, sponsor-weak” organization — good technical and process posture, absent or visibility-only sponsorship. Typically a “redesign” recommendation that reshapes the initiative’s sponsor architecture before proceeding. Third, the “capacity-exhausted” organization — every other readiness dimension scores reasonably but the organization has no bandwidth to absorb the program. Typically a “redesign” recommendation that reduces scope or sequences the program into the organization’s existing capacity. Fourth, the “everything-is-ready” organization — rare, but real; all twenty dimensions score at the target or above. A clean “go” recommendation with close attention to the maintenance conditions that will keep the readiness from eroding during execution.
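The archetype-to-recommendation pattern above can be captured as a small lookup table — a hypothetical aide-mémoire, with illustrative shorthand labels rather than COMPEL terminology, and with the caveat that real engagements sit between archetypes more often than on them:

```python
# Hypothetical lookup: the four recurring archetypes and their typical calls.
# Labels and rationales are illustrative shorthand, not COMPEL terminology.
ARCHETYPE_CALLS = {
    "sponsor-strong, foundations-weak":
        ("wait", "foundations-focused remediation window"),
    "foundations-strong, sponsor-weak":
        ("redesign", "reshape the sponsor architecture before proceeding"),
    "capacity-exhausted":
        ("redesign", "reduce scope or sequence into existing capacity"),
    "everything-is-ready":
        ("go", "attend to the conditions that keep readiness from eroding"),
}

def typical_call(archetype: str) -> str:
    """Render the typical recommendation and rationale for an archetype."""
    rec, rationale = ARCHETYPE_CALLS[archetype]
    return f"{rec} — {rationale}"

print(typical_call("capacity-exhausted"))
# -> redesign — reduce scope or sequence into existing capacity
```

The table is a starting point, not a verdict: the article's classification exercise exists precisely because the ambiguous cases require the specialist to name the conditions under which a different call becomes correct.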
Defending the call
A recommendation without a defense is a preference. The report defends the recommendation on three legs: the rubric (which dimensions scored at what levels), the evidence (which inputs supported each score), and the remediation plan (which gaps can close in what horizon). A sponsor who challenges “why wait?” should be able to walk any of the three legs with the specialist and find consistency. A sponsor who challenges “why go?” should be able to walk any of the three and find the evidence that the critical gaps are small enough to close in parallel with execution.
Three defensive habits the specialist builds. Name the weakest point in the recommendation yourself, before the sponsor does — a recommendation that hides its weakness is a recommendation that invites the sponsor to find it for you. Quantify the uncertainty — a score of “scaling” with “moderate” confidence is more defensible than a score of “scaling” presented as fact. Hold the line on the critical gaps — if the remediation plan says a gap takes six months, resist the sponsor’s pressure to claim three months, because a shortened remediation that fails is worse than a realistic one that succeeds.
The first thirty days
The readiness report does not end at the recommendation. It names the first thirty days after the decision, whichever direction the decision goes.
For a “go” decision, the first thirty days focus on converting the recommendation’s premises into operating discipline. Standing up the governance cadence the report requires. Confirming the sponsor’s calendar commitments. Initiating the parallel-to-execution remediation for the gaps the report accepted as small. Setting the measurement baseline that the next readiness cycle will use.
For a “wait” decision, the first thirty days are the opening of the remediation window. Assigning owners for each critical gap, establishing the weekly or bi-weekly review cadence, producing the first visible deliverables (typically the sponsor-strengthening intervention or the governance-policy writing that can be completed inside a month), and confirming the return-visit schedule for the specialist.
For a “redesign” decision, the first thirty days are the reshaping exercise. The specialist may or may not remain engaged during the redesign; most commonly the specialist provides the reshaped options and the sponsor’s team drives the choice. The thirty-day plan names the decision date for the redesign choice, the readiness implications the sponsor accepts with each option, and the follow-up readiness validation once the redesigned initiative is scoped.
Avoiding the recommendation anti-patterns
Four anti-patterns to name explicitly, each of which produces failed recommendations even when the underlying analysis was sound.
The vague recommendation (“the organization should consider whether to proceed”) — no recommendation at all, dressed as analytical humility.

The committee recommendation (“the authors offer three options, all of which have merit”) — a refusal to make a call, dressed as respect for sponsor autonomy.

The optimistic recommendation (“go, with these twelve caveats and remediation parallel to execution that requires no sponsor intervention”) — “go” in language, “wait” in reality, guaranteed to fail.

The pessimistic recommendation (“wait indefinitely until all dimensions score at mature”) — a counsel of perfection that no real organization can meet, and a shield against the harder work of naming the specific remediation that would enable “go”.
A specialist who has avoided all four anti-patterns has done the hardest work of the credential.
Summary
The readiness report is a short, one-meeting document that opens with the three-way recommendation (go, wait, redesign), holds the rubric, gaps, and remediation in the middle, and closes with the sponsor ask and the thirty-day plan. The recommendation defends itself on three legs: the rubric, the evidence, and the remediation plan. Four archetypes — sponsor-strong-foundations-weak, foundations-strong-sponsor-weak, capacity-exhausted, everything-ready — map to typical recommendations. The McDonald’s and Gap cases illustrate retrospective and forward-looking recommendation calls. Four anti-patterns — vague, committee, optimistic, pessimistic — warn the specialist against refusals to make the call disguised as humility or rigor. The readiness specialist who has finished this module is ready to deliver a report the sponsor can act on and the organization can execute.
Cross-references to the COMPEL Core Stream:
- EATP-Level-2/M2.2-Art09-The-Assessment-Report-Communicating-Findings-with-Impact.md — core assessment-report methodology extended here with the readiness recommendation
- EATF-Level-1/M1.2-Art18-Readiness-Assessment-Report.md — Calibrate-stage readiness report artifact the specialist produces
- EATP-Level-2/M2.3-Art08-Stakeholder-Specific-Roadmap-Communication.md — stakeholder-specific communication patterns applied to the report’s one-meeting shape
Q-RUBRIC self-score: 92/100
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. Restaurant Business Online, “McDonald’s ends its drive-thru AI test with IBM” (June 2024), https://www.restaurantbusinessonline.com/technology/mcdonalds-ends-its-drive-thru-ai-test-ibm (accessed 2026-04-19).
2. CNBC, “Gap Q2 2024 earnings call coverage” (August 29, 2024), https://www.cnbc.com/2024/08/29/gap-earnings-q2-2024.html (accessed 2026-04-19).