AITM M1.5-Art52 v1.0 Reviewed 2026-04-06 Open Access
M1.5 Governance, Risk, and Compliance for AI
AITF · Foundations

Lab: Building a Resistance-Handling Playbook for a Contested AI Rollout



COMPEL Specialization — AITM-CMD: AI Change Management Associate Lab 2 of 2


Lab brief

Cedar Shield Insurance is a fictional mid-sized property-and-casualty insurer based on composite characteristics drawn from publicly reported insurance-industry AI programmes. The firm is six months into the rollout of an AI-assisted claims-handling tool that supports claims adjusters in initial triage, document review, and payout recommendation. Adoption has stalled at roughly thirty-eight per cent of target; the pattern of stalling is uneven across offices, and the programme sponsor — the chief operations officer — has asked for a resistance-handling playbook to guide the next ninety days. You are the assigned AITM-CMD practitioner. This lab walks you through the diagnostic work and the response design.

Lab inputs

  • Usage data by office showing adoption ranging from twelve per cent (Portland office) to seventy-eight per cent (Austin office), with most offices clustering around thirty per cent.
  • A summary of open-text feedback from a sentiment survey run last month, with the following recurring themes:
      • concerns about the tool’s accuracy on non-standard claims;
      • concerns about the consequences of accepting a wrong tool recommendation;
      • unease about what the programme signals for claims-adjuster headcount over the next three years;
      • a small but vocal group citing ethical concerns about AI use in consequential insurance decisions.
  • A transcript of a town hall held at the Portland office, where three adjusters spoke publicly in opposition to the tool, citing specific cases where the tool’s recommendation was materially different from the adjuster’s judgment.
  • A copy of the CEO’s public communication from three months ago stating that the firm is “committed to augmenting our adjusters’ expertise, not replacing it” and that “no adjuster roles will be eliminated as a direct consequence of this tool’s deployment in the next twelve months” — a commitment whose twelve-month horizon expires in nine months.
  • A reliability report on the tool itself showing that accuracy has been above ninety-three per cent across tested scenarios, with lower accuracy on non-standard claims specifically (the tool is not trained on all edge cases, a known limitation).
  • Anecdotal report from an HR business partner that voluntary attrition among claims adjusters has ticked up in the last two quarters, particularly among high-performers.
  • A request from the firm’s risk management department to review any claims-handling process that materially changes its decision architecture before further scale.
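
The office-level adoption spread in the first input is the kind of data worth triaging before diagnosis. A minimal sketch of that triage follows; only the Portland (twelve per cent) and Austin (seventy-eight per cent) figures come from the lab inputs, and the remaining offices and the fifteen-point threshold are illustrative placeholders, not part of the lab brief.

```python
# Hypothetical triage sketch: flag offices whose adoption deviates
# sharply from the cluster, so diagnostic effort goes where the
# signal is. Only Portland and Austin are from the lab inputs;
# the other offices are illustrative placeholders.
adoption = {
    "Portland": 0.12,   # from lab inputs
    "Austin":   0.78,   # from lab inputs
    "Denver":   0.31,   # illustrative
    "Tampa":    0.29,   # illustrative
    "Columbus": 0.33,   # illustrative
}

# Median of the adoption rates (most offices cluster near 30%).
rates = sorted(adoption.values())
median = rates[len(rates) // 2]

# Flag any office more than 15 points from the median for
# office-level diagnosis before designing interventions.
outliers = {
    office: rate
    for office, rate in adoption.items()
    if abs(rate - median) > 0.15
}

print(outliers)
```

Both tails matter here: a low outlier such as Portland needs cause diagnosis, while a high outlier such as Austin may hold transferable practices worth studying in the same ninety-day window.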

Exercise 1 — Surface and classify the resistance signals (20 minutes)

From the lab inputs, identify at least six distinct resistance signals. For each signal, record:

  • The signal in specific operational language (not “resistance” or “pushback”).
  • The signal type on the visibility-by-scope matrix from Article 4 (visible/hidden; individual/systemic).
  • The data source from which you are drawing the signal.
  • The initial hypothesis about what the signal indicates.

A completed record for signal one might read: “Signal — the Portland office’s adoption rate is twelve per cent, materially below the firm’s average, with three adjusters speaking publicly against the tool at the town hall. Signal type — visible and approaching systemic (at least at office level). Source — usage data and town-hall transcript. Initial hypothesis — office-level cultural resistance, possibly anchored in a local leadership dynamic; may also reflect office-specific claim mix with higher non-standard rate.”

Complete the six signals. The exercise tests whether you can move from “there is resistance” to specific, operationally named signals that the programme can act on.
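
If you keep the six records in a structured form, the later exercises become easier to cross-reference. A minimal sketch of one record shape follows; the field names are shorthand for the four items the exercise asks for, not terms prescribed by the COMPEL framework.

```python
# A minimal record sketch for Exercise 1. Field names are my own
# shorthand for the four items the exercise lists; they are not
# prescribed by the COMPEL framework or Article 4.
from dataclasses import dataclass

@dataclass
class ResistanceSignal:
    description: str   # the signal in specific operational language
    visibility: str    # "visible" or "hidden" (Article 4 matrix)
    scope: str         # "individual" or "systemic"
    source: str        # the lab input the signal is drawn from
    hypothesis: str    # initial hypothesis about what it indicates

# Signal one from the worked example in the text.
signal_one = ResistanceSignal(
    description=("Portland adoption at 12%, three adjusters "
                 "publicly opposed at the town hall"),
    visibility="visible",
    scope="systemic",  # at least at office level
    source="usage data; town-hall transcript",
    hypothesis=("office-level cultural resistance, possibly "
                "leadership-anchored; may reflect claim mix"),
)
```

A list of six such records can then be sorted or filtered by visibility and scope when you package the playbook in Exercise 5.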

Exercise 2 — Diagnose the cause categories (25 minutes)

For each of your six signals, diagnose the most likely cause category from Article 4’s framework (replacement fear, opacity distrust, scar tissue, ethical objection, status-quo bias, or generic-resistance residual). Then distinguish, for each, whether the signal represents:

  • Legitimate objection — a claim about the rollout that, if accepted, would improve the programme.
  • Status-quo bias — a preference for continuity without a specific defensible claim.
  • Genuine mix — some of both, with the practitioner’s response needing to address the legitimate element while not being held hostage to the bias element.

For each, apply the three diagnostic questions from Article 4: Can the resistor name the specific concern? Can the resistor name the condition under which they would support the change? Does the concern hold up against comparable cases?

An example diagnosis for signal one (Portland office stall) might read: “Cause diagnosis — mix of replacement fear (headcount commitment expires in nine months) and legitimate concern about tool accuracy on non-standard claims (the office’s claim mix is heavier on non-standard cases than the firm average). Legitimate objection component — the tool’s lower accuracy on non-standard claims is a real issue the programme can address through tool improvement, escalation protocols, or adjusted decision-support boundaries. Status-quo component — some of the stall is cultural or leadership-driven in ways that are not anchored in the tool’s specifics.”

Complete the diagnoses for all six signals. The diagnostic work is the hardest part of the lab; the responses that follow will only be as good as the diagnoses.

Exercise 3 — Design the response for each cause category (20 minutes)

For each of the four AI-specific cause categories that appeared in your diagnoses (replacement fear, opacity distrust, scar tissue, ethical objection), design the programme’s response. The response for each category should include:

  • The specific intervention the programme will run (not a general category — the specific action).
  • The owner of the intervention (who does it, who decides on scope, who holds it).
  • The cadence (one-time, recurring, or event-driven).
  • The success signal (how will the programme know the intervention has worked).
  • The escalation if the response does not move the signal.

For replacement fear specifically, the intervention is particularly sensitive. The CEO’s prior commitment expires in nine months; the programme either has to negotiate an extension or has to confront the expiry honestly. A response that quietly hopes employees will not notice the expiry is a response that produces a worse crisis when the expiry arrives.

An example response might be: “Intervention — the sponsor will make an updated commitment three months ahead of the current commitment’s expiry, either extending the horizon or being honest about a changed posture; the programme will recommend the sponsor make the updated commitment publicly at the claims-adjuster all-hands in month six, not at the last possible moment. Owner — programme sponsor, with the practitioner drafting the proposed commitment text and the CEO authorising the public statement. Cadence — one-time major event, with follow-through communication cascades. Success signal — sentiment-survey movement on the ‘my role is valued’ item within sixty days of the statement. Escalation — if the sponsor is not willing to make an updated commitment, the programme escalates to the chief people officer because the commitment’s quiet expiry produces organisational damage the programme cannot recover from.”

Complete responses for the four cause categories you diagnosed.

Exercise 4 — Design the response for the status-quo-bias component (10 minutes)

Status-quo bias does not typically yield to rational argument; it yields to making the new state feel safer than the old. Design three specific interventions that address the status-quo-bias component of the resistance without trying to argue the bias away. Each should:

  • Work with the bias rather than against it (e.g., small initial commitments that build momentum rather than large demands).
  • Honour the employee’s experience of the change even where the content of the bias does not drive programme change.
  • Have measurable effects the programme can track.

An example intervention might be: “A two-week ‘shadow the tool’ programme where adjusters use the tool alongside their existing workflow without any expectation of replacing the old workflow; the tool’s output is compared with the adjuster’s decision purely as a learning exercise; the adjusters retain full decision authority throughout. The intervention works with status-quo bias because it requires no initial behaviour change, honours the adjuster’s professional judgment, and produces the lived evidence that many adjusters find persuasive in ways that advocacy is not.”

Exercise 5 — Package the playbook (10 minutes)

Compile your diagnoses and responses into a two-page playbook document the programme team will operate from for the next ninety days. The playbook should include:

  • An executive summary (three sentences) stating the diagnosis and the response approach.
  • The six signals with their classifications and cause diagnoses.
  • The responses organised by cause category.
  • The ninety-day operating cadence (weekly and monthly rhythms).
  • The escalation triggers to the sponsor and the fallback options if the ninety-day response does not move the adoption signal.

The playbook is the document you would hand to the programme lead for execution. It should be actionable without the practitioner needing to explain each section in person.

Debrief

The lab tests the diagnostic discipline the credential certifies. A well-run debrief compares how different practitioners classified the same signals and how different diagnoses led to different responses. The richest learning comes from the cases where practitioners converged on the diagnosis but designed different responses — surfacing that diagnosis alone does not determine response, and that multiple legitimate response pathways exist for the same diagnosis.

One final point of emphasis. The lab’s structure implies a tidy separation between cause categories. Real resistance is usually mixed. The practitioner who is honest about mixture and designs responses that address mixtures rather than pure cases produces better responses than the practitioner whose playbook has a single entry per cause category. The debrief should surface where the practitioner’s work recognised mixture and where it did not.



© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.