Most enterprise AI programs have at least one feature that should have been sunset six to twelve months earlier than it was. The delay has predictable sources: sponsor attachment, sunk-cost bias, organizational embarrassment, fear of setting precedent, and absence of a structured sunset pathway. The sunset-case discipline exists to overcome those sources by making the sunset conversation a routine governance decision rather than an exceptional surrender.
This article teaches the sunset-case structure, the three dignity-preserving narratives that help the organization absorb the decision, and the documented public cases (Zillow Offers, McDonald’s–IBM drive-thru) that the AI value practitioner should study for pattern recognition.
The sunset-case structure
A sunset case has five sections, parallel to but distinct from the business case (Article 6).
Section 1 — Current state assessment
What the feature is currently delivering and consuming. Realized value over the last four quarters (with counterfactual method). Total cost of ownership over the same window. Cost per successful outcome. Direction of travel (value declining? cost rising? both?).
Current-state data should come from the same sources as the VRR (Article 16) — not a separate retrieval. Using VRR data maintains consistency with the public narrative.
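The Section 1 metrics can be sketched mechanically. This is a hypothetical illustration, assuming the last four quarters of realized value, TCO, and successful outcomes are available as parallel lists; the field names are illustrative, not from any real VRR schema.

```python
def current_state(realized_value, tco, successes):
    """Summarize four quarters of feature data for a sunset case."""
    # Cost per successful outcome in the most recent quarter.
    cost_per_success = tco[-1] / successes[-1] if successes[-1] else float("inf")
    value_trend = realized_value[-1] - realized_value[0]  # negative => declining
    cost_trend = tco[-1] - tco[0]                         # positive => rising
    return {
        "cost_per_success": cost_per_success,
        "value_declining": value_trend < 0,
        "cost_rising": cost_trend > 0,
    }

# Example: value falling while cost climbs -- both sunset signals present.
snapshot = current_state(
    realized_value=[120_000, 110_000, 95_000, 80_000],
    tco=[60_000, 62_000, 70_000, 75_000],
    successes=[4_000, 3_800, 3_200, 2_500],
)
print(snapshot)  # {'cost_per_success': 30.0, 'value_declining': True, 'cost_rising': True}
```

A feature showing both a declining value trend and a rising cost trend is the clearest candidate for a sunset case.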
Section 2 — Gap to business-case expectations
The original or most recently updated business case (per Article 31’s updated-case discipline) projected certain realized value at this point in the lifecycle. Actual value is compared; variance is decomposed into adoption shortfall, counterfactual revision, cost overrun, or external context change. Explicit decomposition prevents the “something didn’t work” explanation that obscures learning.
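The decomposition discipline can be checked arithmetically: the four attributed causes should fully explain the gap, leaving no unexplained remainder. A minimal sketch, assuming the attribution amounts are produced by prior analysis (the figures here are invented for illustration):

```python
def decompose_gap(projected, actual, attributions):
    """attributions: dict mapping cause -> signed contribution to the gap."""
    gap = actual - projected
    explained = sum(attributions.values())
    residual = gap - explained  # non-zero residual = "something didn't work" remainder
    return gap, residual

gap, residual = decompose_gap(
    projected=1_000_000,
    actual=550_000,
    attributions={
        "adoption_shortfall": -250_000,
        "counterfactual_revision": -120_000,
        "cost_overrun": -60_000,
        "external_context": -20_000,
    },
)
print(gap, residual)  # -450000 0 -- variance fully decomposed
```

A residual near zero means the gap is fully attributed; a large residual signals that the analysis has not yet found the real causes.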
Section 3 — Alternatives analysis
Four alternatives are considered, each with its realized-value and TCO implications.
Alternative A — Continue unchanged. What realized-value and TCO trajectories are expected if nothing changes?
Alternative B — Modify. What scope or design changes could restore value? What are their estimated cost and effect? What is the evidence base?
Alternative C — Retire and replace with alternative solution. A cheaper non-AI workflow, a different AI feature, or a different vendor approach. What is the substitution’s TCO and expected value?
Alternative D — Retire without replacement. The workflow the AI feature supports returns to the pre-AI state, or the workflow itself is eliminated.
The sunset case recommends one alternative; the other three are documented with their reasoning for rejection. This discipline prevents the common failure where the sunset case reads as “we give up” rather than “we compared options and this is best.”
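The four-alternative comparison reduces to a go-forward net-value ranking. A minimal sketch, assuming each alternative carries an expected-value and go-forward TCO estimate over the same horizon (all figures and names are illustrative, not drawn from any real case):

```python
# Each alternative: expected go-forward value vs. go-forward cost over one horizon.
alternatives = {
    "A_continue":           {"expected_value": 200_000, "go_forward_tco": 260_000},
    "B_modify":             {"expected_value": 340_000, "go_forward_tco": 290_000},
    "C_retire_and_replace": {"expected_value": 300_000, "go_forward_tco": 150_000},
    "D_retire":             {"expected_value": 0,       "go_forward_tco": 30_000},  # wind-down cost only
}

# Rank by go-forward net value; the top entry is the recommendation, the
# rest are documented with their reasoning for rejection.
ranked = sorted(
    alternatives.items(),
    key=lambda kv: kv[1]["expected_value"] - kv[1]["go_forward_tco"],
    reverse=True,
)
recommended, rejected = ranked[0], ranked[1:]
print(recommended[0])  # C_retire_and_replace (net +150k beats B +50k, D -30k, A -60k)
```

Note that even Alternative D can outrank Alternative A: retiring at a small wind-down cost beats continuing a feature whose go-forward costs exceed its go-forward value.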
Section 4 — Retirement plan
If Alternative C or D is recommended, the retirement plan covers: timeline; communication plan to users; data-handling plan (what happens to collected user interaction data, outputs, logs); knowledge-capture plan (what learnings are recorded and where); regulatory-obligation wind-down (if the feature was under EU AI Act high-risk classification, what documentation must be preserved for how long); and cost-elimination schedule (how quickly run-rate costs fall to zero).
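The retirement-plan elements above can be captured as a simple checklist structure so that no element is skipped before the plan is approved. The field names are illustrative, not a standard schema:

```python
from dataclasses import dataclass, fields

@dataclass
class RetirementPlan:
    timeline: str
    user_communication: str
    data_handling: str            # interaction data, outputs, logs
    knowledge_capture: str
    regulatory_wind_down: str     # e.g. EU AI Act documentation retention
    cost_elimination_schedule: str

def missing_sections(plan: RetirementPlan) -> list[str]:
    """Return checklist fields left empty; the plan is incomplete until none are."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name)]

draft = RetirementPlan(
    timeline="Q3 wind-down",
    user_communication="email + in-app banner",
    data_handling="",
    knowledge_capture="learnings wiki page",
    regulatory_wind_down="",
    cost_elimination_schedule="run-rate to zero by Q4",
)
print(missing_sections(draft))  # ['data_handling', 'regulatory_wind_down']
```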
Section 5 — Learning documentation
What the feature taught the organization. This section feeds Gate 6 (Learn) from Article 31. The learning section is the most important section for the organization’s broader AI program — the sunset case’s greatest gift is often not the cost reduction but the pattern recognition that prevents the same error in the next feature.
The three dignity-preserving narratives
Sunset cases must be politically viable. A sunset case that reads as “this was a mistake” is a sunset case that stalls in committee. Three narratives preserve organizational dignity while still reaching the right decision.
Narrative 1 — Learning complete
The feature was always planned as an experiment. The experiment has produced its learning. The organization now understands the opportunity better; the feature served its purpose; the next investment will be better-informed.
This narrative works for features that were never strategically critical and that produced useful learning even if they did not produce sustained value. It requires that the feature was positioned as learning-seeking at its inception — a retrofit of this narrative to a feature that was pitched as strategically essential is less credible.
Narrative 2 — Cheaper alternative available
The feature worked; it is being retired because a better alternative has emerged. The alternative may be a new AI model at lower cost, a non-AI workflow that captures 80% of the value at 10% of the cost, or a different approach that sidesteps the original design constraints.
This narrative works when the alternative genuinely exists and the substitution is the real reason for retirement. The McDonald’s–IBM drive-thru AI pause, discussed below, fits this pattern — the retired feature was replaced with a different approach rather than simply discontinued.1

Narrative 3 — Strategic pivot
The organization’s strategy has shifted; the feature no longer supports the priorities the strategy now emphasizes. The feature may still work; the resources are needed elsewhere.
This narrative works when a genuine strategic pivot has occurred (new CEO, new product direction, major market shift) and the feature’s discontinuation is a natural consequence. It does not work as a cover for feature failure — sophisticated readers will not buy “strategic pivot” when the underlying reality is “the feature did not deliver.”
The practitioner’s job is to match narrative to reality. A feature genuinely retired for pivot reasons uses Narrative 3; a feature that failed its measurement thresholds uses Narrative 1 or 2, depending on circumstance. Using the wrong narrative — choosing the one that flatters intent rather than the one that matches fact — makes the learning superficial and invites a repeat in the next feature.
Two documented cases
Zillow Offers (2021)
Zillow Offers used machine-learning algorithms to price and purchase homes for resale in the iBuying business. In November 2021, Zillow announced it was shutting down the Offers program, disclosed an approximately $540 million impairment in its Q3 filing, and laid off about 25% of its workforce.2 The Wall Street Journal, Bloomberg, and SEC filings documented the shutdown sequence in detail.
The Zillow Offers case is the canonical shipped-but-not-realized failure (Article 2) and a sunset case at significant corporate scale. Publicly available post-mortems attribute the failure to several compounding factors: algorithmic mispricing, supply-chain disruption in home remodeling, and difficulty forecasting in the post-pandemic housing market. The sunset decision was substantially delayed: the program’s struggles were visible well before November, but the decision to shut down came only once the impairment and workforce reduction had become unavoidable.
Lessons for practitioners: the sunset decision is easier when the feature’s measurement plan explicitly specified a shutdown trigger (which Zillow did not publicly disclose), and the sunset cost grows with delay — later shutdowns cost more in impairment, workforce, and reputational damage.
McDonald’s–IBM drive-thru (2021–2024)
McDonald’s partnered with IBM on automated drive-thru voice AI; the partnership launched in 2021 with a three-year pilot. In June 2024, McDonald’s publicly announced it was ending the partnership, stating that while the AI showed value, the approach would be reconsidered. More than a hundred US restaurants had been running the system.3
The McDonald’s case illustrates the cheaper-alternative narrative. The partnership was not framed as failure; it was framed as a pilot whose learning would inform the company’s subsequent choice of approach. Press coverage treated the decision as a pragmatic one rather than a story of AI failure. The sunset-case discipline — a structured decision with a defensible narrative — protected the organization’s AI-program credibility.
Lessons for practitioners: a sunset case delivered on time, with the right narrative, is not a career-ending event; it is a competent execution of governance.
Avoiding the two anti-patterns
Anti-pattern 1 — Silent zombie
A feature continues to run with declining value and rising cost; no sunset case is ever proposed because no one owns proposing it. The feature becomes a zombie — alive on the portfolio scorecard, not actually serving its purpose. Silent zombies accumulate; they consume platform attention and compute budget; they erode the program’s aggregate realized-value figures.
The fix is a quarterly sunset review: every feature is examined for sunset signals; features with two or more signals over two consecutive quarters receive a sunset-case preparation assignment. The assignment does not mandate retirement; it mandates structured consideration of the alternatives.
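The quarterly review rule above can be sketched directly. This is an illustrative implementation under the stated rule (two or more signals in each of two consecutive quarters triggers sunset-case preparation); the signal names are invented:

```python
def needs_sunset_case(signal_history):
    """signal_history: list of per-quarter signal sets, oldest first.

    Triggers when the two most recent quarters each show >= 2 sunset signals.
    """
    recent = signal_history[-2:]
    return len(recent) == 2 and all(len(signals) >= 2 for signals in recent)

history = [
    {"value_declining"},
    {"value_declining", "cost_rising"},
    {"value_declining", "cost_rising"},
]
print(needs_sunset_case(history))  # True -- assign sunset-case preparation, not retirement
```

The trigger assigns only the preparation of a sunset case; the four-alternative analysis still decides the outcome.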
Anti-pattern 2 — Sunk-cost defense
A sunset case is proposed; the sponsor argues that the investment to date should be preserved by continuing. This is economically irrational — sunk costs are sunk — but culturally powerful. Organizations that cannot retire features routinely have sponsors who defend investments with sunk-cost reasoning.
The fix is to make sunk-cost reasoning explicitly out-of-bounds in sunset reviews. The decision rule is: “given current TCO and current realized-value trajectory, is the go-forward cost-benefit positive?” Historical investment is not part of the decision.
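The decision rule can be made explicit in a few lines. In this sketch, sunk cost is accepted as an input only so the function can demonstrably ignore it; the figures are illustrative:

```python
def go_forward_decision(expected_value, go_forward_tco, sunk_cost):
    """Apply the sunset-review decision rule: only go-forward economics count."""
    del sunk_cost  # deliberately unused: sunk costs are sunk, and out of bounds
    return "continue" if expected_value - go_forward_tco > 0 else "sunset"

# A $3M historical investment does not rescue a negative go-forward case.
print(go_forward_decision(expected_value=180_000, go_forward_tco=240_000,
                          sunk_cost=3_000_000))  # sunset
```

Encoding the rule this way makes the out-of-bounds status of sunk-cost arguments visible: the historical investment is present in the conversation but provably absent from the decision.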
Cross-reference to Core Stream
- EATF-Level-1/M1.2-Art07-Stage-Gate-Decision-Framework.md — stage-gate framework within which sunset decisions live.
- EATP-Level-2/M2.5-Art14-Building-the-AI-Business-Case-Beyond-Simple-ROI.md — business-case companion.
Self-check
- A feature’s realized value has been below business-case projection for four quarters; the sponsor argues that “we’ve invested too much to stop.” Which anti-pattern is operating, and how do you redirect the conversation?
- A feature is being retired because the CEO has set a new strategic direction; the feature itself still works. Which narrative is appropriate, and what are the reporting implications?
- A sunset-case alternatives analysis shows four alternatives, with Alternative C (retire-and-replace) recommended. What must the retirement plan in Section 4 cover?
- A feature has been showing two sunset signals for three consecutive quarters; no sunset case has been proposed. What governance control is missing?
Further reading
- Zillow Group Q3 2021 SEC Form 10-Q filing.
- Published business-school case studies on enterprise AI retirement.
- BCG, AI at Scale — portfolio-management patterns.
Footnotes
1. Multiple reputable press accounts of the end of the McDonald’s–IBM drive-thru AI partnership (June 2024), including CNBC and Restaurant Business reporting.
2. Zillow Group, Form 10-Q filing with the US Securities and Exchange Commission (Q3 2021). https://investors.zillowgroup.com/
3. CNBC and Restaurant Business coverage of the McDonald’s–IBM drive-thru AI partnership (June 2024). https://www.cnbc.com/ and https://www.restaurantbusinessonline.com/