COMPEL Specialization — AITM-CMD: AI Change Management Associate Artifact Template 1 of 1
How to use this template
This template gives the structure of the AI Change Plan taught in Article 11. It is not a fill-in-the-blank form; each section requires substantive practitioner work. Target total length is twelve to fifteen pages. Update the live sections (stakeholder, metrics, portfolio) monthly, the design sections (literacy, communication, training) quarterly, and the scope and rationale sections at milestones; retire the plan cleanly when the programme transitions to business as usual. Every version is logged in the update log at the back with date, section, change, and rationale.
Executive summary (half page)
- Programme title: [name]
- Sponsor: [name, role]
- Change practitioner: [name, role]
- Programme horizon: [start date — target close date]
- Recommendation to sponsor (if the plan is proposing any decision): [go / wait / redesign / continue / descope, with one-sentence rationale]
The executive summary states, in three to four sentences, what the programme is, what the change work covers, and what the sponsor is being asked to approve or note. The summary is written last, after the sections below are drafted, so it can accurately represent them. No acronyms without expansion.
Section 1 — Programme scope and rationale (one page)
1.1 What this programme is
One paragraph naming the AI transformation this change work accompanies, the scope of the change practice within the overall programme, and the distinction between the change work and the project/programme-management work running in parallel.
1.2 The AI-specific dynamics this plan addresses
List the specific AI dynamics the plan is designed to address, drawing from the five named in Article 1 (existential replacement fear, model opacity, literacy variance, ethical concern, hybrid human-AI workflows). Note which of the five apply with particular intensity to this programme and why.
1.3 Explicit boundaries
State what this change plan does not cover — project delivery, technical implementation, governance framework design, or any other adjacent work — so the sponsor knows where the change practitioner’s accountability begins and ends.
1.4 Success definition
State, in two to three sentences, what success looks like for the change work specifically. Not the programme’s business outcome — the change outcome that will support the business outcome. This is the internal compass for the rest of the document.
Section 2 — Stakeholder landscape and sponsor assessment (two pages + influence-attitude map)
2.1 Stakeholder register
Table with columns: stakeholder name, role, category (sponsor / beneficiary / resister / affected community), influence (low / medium / high), current attitude (opposed / neutral / supportive), required attitude, target date for movement, primary engagement practice.
2.2 Influence-by-attitude map
[Visual: 2x2 grid with influence on vertical axis and attitude on horizontal axis; stakeholders placed in quadrants; arrows showing required movements with dated endpoints. Refer to Article 2’s diagram primitive.]
2.3 Sponsor strength scoring
For each sponsor, score the four dimensions on the five-level scale (nascent / emerging / scaling / mature / transformational): visibility, budget authority, political capital, sustained engagement. Provide one sentence of evidence per dimension. Render the composite verdict: adequate, caution, or unsafe to proceed.
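The scoring step above can be sketched as a small routine. This is an illustrative sketch only: the numeric mapping of the five-level scale and the verdict thresholds (any nascent dimension is unsafe; otherwise judge the average) are assumptions for the example, not COMPEL-defined rules — substitute the programme's own cut-offs.

```python
# Hypothetical sketch of Section 2.3 sponsor scoring.
# The scale mapping and verdict thresholds are assumptions, not COMPEL rules.

SCALE = {"nascent": 1, "emerging": 2, "scaling": 3, "mature": 4, "transformational": 5}
DIMENSIONS = ("visibility", "budget_authority", "political_capital", "sustained_engagement")

def composite_verdict(scores: dict[str, str]) -> str:
    """Return 'adequate', 'caution', or 'unsafe' for one sponsor."""
    values = [SCALE[scores[d]] for d in DIMENSIONS]
    # Assumed rule: any nascent dimension makes the sponsorship unsafe.
    if min(values) == 1:
        return "unsafe"
    # Otherwise judge the average of the four dimensions (assumed threshold).
    return "adequate" if sum(values) / len(values) >= 3.5 else "caution"

sponsor = {
    "visibility": "scaling",
    "budget_authority": "mature",
    "political_capital": "emerging",
    "sustained_engagement": "scaling",
}
print(composite_verdict(sponsor))  # average 3.0, no nascent dimension -> "caution"
```

The one-sentence evidence per dimension stays in the plan's prose; the sketch only shows how the four scores roll up into the single verdict the sponsor sees.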
2.4 Sponsor engagement plan
Table of the specific sponsor-engagement practices the programme will run: sponsor communications (cadence, channel, content), sponsor-visible events (milestone announcements, town-hall appearances, one-to-ones), sponsor-coaching practices (how the practitioner supports the sponsor’s performance of the role).
2.5 Affected-community engagement plan
For the affected-community tier: name the communities, name the representative voice the programme will consult for each, name the consultation cadence. If no community representative is identified, the section explicitly says so and flags the gap as a programme risk.
Section 3 — Change-model approach (half page)
State which classical change model is the primary lens for each major segment of the programme. Typical structure:
- For individual role change (tied to adoption of a specific tool): ADKAR; diagnose individuals stuck at Awareness, Desire, Knowledge, Ability, or Reinforcement and intervene accordingly.
- For organisational-scale change (affecting operating model, strategy, culture): Kotter 8 Steps; build the coalition, craft the vision, run the cascade.
- For loss-heavy transitions (roles with meaningful identity or craft displacement): Bridges Transition Model; name the Ending, honour the Neutral Zone, support the New Beginning.
- For systemic reinforcement conversations with the sponsor: Lewin’s unfreeze-change-refreeze; frame the force field of reinforcing pressures and the need to alter them.
Name specific examples of where each model is applied in this programme, so the sponsor can see the choice being made rather than inferring a default.
Section 4 — Resistance diagnosis and response design (one page)
4.1 Anticipated and observed resistance signals
Table with columns: signal (specific operational description), visibility (visible / hidden), scope (individual / systemic), cause category (replacement fear / opacity distrust / scar tissue / ethical objection / status-quo bias / generic residual), legitimate-objection assessment (legitimate / bias / mixed).
4.2 Response design per cause category
For each cause category that appears in the signals, design the programme response using the frame: intervention, owner, cadence, success signal, escalation.
4.3 Diagnostic discipline
State the cadence and mechanism by which the programme refreshes its resistance diagnosis — how often the signals are re-assessed, who holds the diagnostic responsibility, how diagnoses are documented. A programme that does not refresh its resistance diagnosis is working from stale data.
Section 5 — AI literacy strategy (two pages)
5.1 Workforce segmentation
Table of the role tiers relevant to this programme (executive / manager / specialist / general employee / contractor) with headcount in each, systems operated, and the primary literacy question the tier must be able to answer.
5.2 Curriculum per tier
For each tier, the curriculum outline: modules with one-sentence description, delivery mode per module, total time commitment, refresh cadence.
5.3 Proficiency targets and measurement
For each tier: the proficiency target in observable terms, the measurement mechanism, the evidentiary artifact that documents proficiency for the Article 4 sufficiency defence (where applicable), and the intervention triggered when proficiency is not demonstrated.
5.4 Sustainment design
Refresh triggers (scheduled cadence plus event-driven triggers); community-of-practice design (facilitation, cadence, content, visibility); feedback loop from community back to formal curriculum; governance arrangement (ownership, approval of changes, review cadence).
5.5 EU AI Act Article 4 defence (if applicable)
If the organisation is subject to EU AI Act Article 4, include the explicit sufficiency argument: the duty as the organisation interprets it, the segmentation decisions and rationale, the sufficiency argument per tier, the evidence produced on inquiry, the acknowledged gaps and remediation plan.
Section 6 — Communication strategy (one page + channel-sequence timeline)
6.1 Audience segmentation
Table of audience segments (aligned to the literacy segmentation in Section 5) with, for each segment: the primary message, the appropriate level of detail, the preferred channels, and the preferred feedback mechanisms.
6.2 Message architecture
State the three to four themes the programme will hold across the duration. Each theme is one sentence. The themes are the continuity of the communication; specific messages instantiate them.
6.3 Channel strategy
For each channel (email cascade, town halls, Slack/Teams, manager briefings, intranet, training sessions, one-to-one conversations, external stakeholder briefings, community of practice): primary audience, message type, cadence, owner.
6.4 Two-way feedback mechanisms
The four mechanisms named in Article 6: structured listening sessions, anonymous question inboxes, manager-roundup reports, direct measurement. Specify cadence, owner, and how the feedback visibly shapes the programme.
6.5 Misinformation-response protocol
Monitoring cadence, response authorities, response channels, threshold at which rumour elevates to formal communication response, truthfulness-primacy discipline commitment.
6.6 Message-sequence timeline
[Visual: horizontal timeline across the four programme phases (pre-launch, early execution, sustained execution, embedding) with the primary messages, channel mix, and feedback mechanisms for each phase.]
Section 7 — Training and enablement design (one page + delivery-mode matrix)
7.1 70-20-10 design per tier
For each role tier, specify the mix of experiential, social, and formal learning being designed. A training plan with only formal elements is flagged as under-designed.
7.2 Delivery mode selections
Table with columns: tier, module, delivery mode, rationale for mode selection.
7.3 Reinforcement mechanisms
The four mechanisms from Article 7: spaced repetition, application assignments, manager reinforcement, community reinforcement. Specify cadence, owner, and observable effects for each.
7.4 Four-level measurement plan
Measurement at completion, knowledge, behaviour, and business-outcome levels. Name the specific metrics, sources, cadence, and owners at each level. Explicitly note which levels the programme will measure versus assume; the latter category should be near-empty for a programme worth running.
7.5 Diagnosis before prescription
State the practitioner’s commitment to diagnose adoption gaps before prescribing additional training. If a gap turns out to be a motivation, workflow, or product problem, training will not close it, and the programme’s budget will have been misdirected.
Section 8 — Role redesign (one page + task-by-pattern matrix for affected roles)
8.1 Affected roles
List the roles materially affected by the AI deployment, with headcount and primary business function for each.
8.2 Collaboration-pattern assessment
For each affected role, the task-by-pattern matrix: task description, current time allocation, professional judgment level, quality signal, consequence profile, collaboration-pattern assignment (augment / assist / automate / arbitrate), target pattern if different, transition plan.
8.3 Employee engagement approach
State how employees are being engaged in the redesign: consultation on task decomposition, contribution to pattern decisions, shaping of the new role’s design. If the engagement is being skipped for any role, name which roles and why; the reasons are documented even when the engagement is skipped.
8.4 Documentation flow
Table of the downstream systems receiving the redesigned-role documentation: hiring specifications, performance-management framework, development plans, reward structures. Specify the owner and target date for each.
8.5 Fairness review
Document the fairness questions examined: which populations are disproportionately affected by the role-redesign decisions, how augmentation gains are distributed, how the decisions are recorded with their reasoning. If the fairness review is being skipped or deferred, name the reason and the risk.
Section 9 — Adoption metrics and reinforcement (one page + metric dashboard)
9.1 Metric dashboard
[Visual: three-column scoreboard with leading, lagging, and guardrail indicators; each indicator annotated with source, cadence, owner, and threshold. Refer to Article 9’s diagram primitive.]
9.2 Metric specifications
Table with columns: metric name, definition, source, collection method, refresh cadence, decision linkage (what decision the metric drives), owner.
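One row of the table above can be held as a simple record, which also makes the "decision linkage" discipline checkable. The field names mirror the column list; the example metric, its owner, and the validation rule are hypothetical illustrations, not part of the methodology.

```python
# Illustrative record for one row of the Section 9.2 metric table.
from dataclasses import dataclass, fields

@dataclass
class MetricSpec:
    name: str
    definition: str
    source: str
    collection_method: str
    refresh_cadence: str
    decision_linkage: str  # what decision the metric drives
    owner: str

def incomplete_fields(spec: MetricSpec) -> list[str]:
    """List empty fields; a metric with no decision linkage is a vanity metric."""
    return [f.name for f in fields(spec) if not getattr(spec, f.name).strip()]

m = MetricSpec(
    name="Weekly active AI-tool users",
    definition="Distinct users with >=1 logged session in the trailing 7 days",
    source="Tool telemetry",
    collection_method="Automated export",
    refresh_cadence="Weekly",
    decision_linkage="",  # missing: flags the row for rework before it ships
    owner="Change practitioner",
)
print(incomplete_fields(m))  # ['decision_linkage']
```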
9.3 Gaming anticipation
Name the gaming patterns the programme anticipates (performative usage, completion without competence, selection effects, cherry-picking in reporting) and the countermeasures for each.
9.4 Reinforcement mechanism design
The four mechanisms from Article 9: incentive alignment, visible leadership behaviour, community and social reinforcement, feedback loops that produce visible improvement. Specify the practices for each in this programme.
9.5 Reporting commitment
The practitioner’s commitment on honest reporting: metrics reported as defined in the charter, both positive and negative movements reported, guardrail triggers surfaced, external causes attributed where relevant, unexplained movements acknowledged.
Section 10 — Portfolio view and capacity management (one page)
10.1 Concurrent change portfolio
List every concurrent change initiative affecting the same populations this programme targets. For each: initiative, owner, timeline, primary populations affected, overlap with this programme’s populations.
10.2 Change-capacity indicators
The four indicators from Article 10: employee attention and time, manager bandwidth, sponsor attention, organisational recovery from prior change. Current read and trend for each.
10.3 Transformation-fatigue signals
The signal set from Article 10: change-related engagement survey items, cynicism in open-text feedback, voluntary-programme participation, high-performer attrition, manager-complaint volume. Current read and trend for each.
10.4 Decision trigger
State the trigger at which the programme recommends the sponsor consider the four portfolio options (proceed / slow / re-sequence / descope). The trigger is stated in advance so that when it fires, the conversation is a pre-agreed one rather than a surprise.
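Because the trigger is agreed in advance, it can be written down as an unambiguous rule over the Section 10.2 capacity indicators. The indicator names, the red/amber/green reads, and the two-of-four threshold below are illustrative assumptions; the plan states whatever rule the sponsor actually pre-agrees.

```python
# Sketch of a pre-agreed Section 10.4 trigger over the four capacity
# indicators from Section 10.2. The threshold is an assumed example rule.
CAPACITY_INDICATORS = (
    "employee_attention", "manager_bandwidth",
    "sponsor_attention", "organisational_recovery",
)

def portfolio_conversation_due(reads: dict[str, str]) -> bool:
    """Fire when two or more indicators read 'red' (assumed rule)."""
    return sum(reads[i] == "red" for i in CAPACITY_INDICATORS) >= 2

reads = {"employee_attention": "red", "manager_bandwidth": "amber",
         "sponsor_attention": "red", "organisational_recovery": "green"}
print(portfolio_conversation_due(reads))  # True -> raise proceed/slow/re-sequence/descope
```

When the function fires, the output is not a decision but the pre-agreed signal to put the four portfolio options in front of the sponsor.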
10.5 Capacity-management commitment
State the cadence and mechanism by which the portfolio view is refreshed and reported to the sponsor. A portfolio view that is produced once and filed is not a portfolio view.
Update log
Table with columns: version, date, section(s) changed, substantive change, change rationale, author. Every version of the plan is logged. The log is referenced in sponsor decisions made against the plan so the evidence trail is intact.
Retirement (completed when the programme transitions to business as usual)
When the programme ends, the plan is retired formally. The retirement note captures: what was institutionalised into standing practice (hiring, performance management, development, reward, governance), what was not institutionalised and why, the final sponsor-decision log, the artefacts preserved for future reference, the recommended review date for the institutionalised practices.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.