COMPEL Specialization — AITM-CMD: AI Change Management Associate
Article 7 of 11
A training programme can produce high completion rates and no observable behaviour change, and this is the modal outcome of enterprise training when the design optimises for throughput rather than competence. The pattern is recognisable: every employee completes a one-hour e-learning module; the completion dashboard turns green; six months later adoption of the AI tool is below half of the target and the specialist behaviours the training was supposed to produce are visible in only a minority of users. The practitioner-grade response is not to run more training of the same kind. It is to redesign the programme around behaviour change: experiential and social learning, deliberate reinforcement, and measurement of what learners actually do.
The 70-20-10 frame
The durable frame for adult learning at work is 70-20-10, named after research on executive development by Morgan McCall, Michael Lombardo, and Ann Morrison at the Center for Creative Leadership in the 1980s and 1990s, summarised in McCall's High Flyers.1 The frame observes that roughly seventy per cent of professional learning comes from experience — doing the work, making mistakes, recovering, iterating — twenty per cent from social learning — working with peers, coaches, mentors — and ten per cent from formal training — courses, books, structured content.
The frame is not a prescription that every programme must mix the three in exactly those proportions; real-world mixes vary. The frame is a warning: a programme that invests only in formal training is investing in the ten-per-cent channel and under-investing in the ninety per cent that produces most of the behaviour change. A practitioner who reviews a training plan and finds only e-learning modules and instructor-led sessions knows the plan is under-designed, regardless of how many modules there are.
On AI programmes, the under-investment is particularly costly. AI tool use is a practice, not a knowledge domain. An accountant does not learn to use a generative draft tool by hearing about it; the accountant learns by using it repeatedly against real tasks, receiving feedback, iterating, and eventually developing the professional judgment about when to use it, when not to, how to catch its errors, and how to integrate its output into the professional workflow.
Delivery modes and their trade-offs
Each delivery mode has a cost profile, a reach profile, and a depth profile. The practitioner-grade design selects from across the mode spectrum rather than defaulting to one; a short data sketch after the mode descriptions below makes the trade-off explicit.
E-learning — self-paced modules the learner completes on their own. Low per-learner delivery cost, high reach, shallow depth. Appropriate for awareness content, baseline knowledge, and compliance-required content. Insufficient for behaviour change on its own.
Instructor-led training — classroom or live-virtual sessions. Moderate per-learner delivery cost, moderate reach, moderate depth. Appropriate for concepts that benefit from real-time question-and-answer, shared context-setting, and initial practice. Insufficient for behaviour change without subsequent reinforcement.
Community of practice — ongoing peer-learning community with shared channels, recurring discussions, and visible leadership of expert peers. Low ongoing cost, targeted reach (participants), very high depth for participants. Appropriate for emerging practice, rapidly evolving content, and sustained skill-building.
Peer coaching — structured pairs or triads where peers coach each other against specific tasks. Low cost, moderate reach (limited by the capacity of the peer coaches), high depth. Appropriate for practising skills in safe conditions without instructor dependency.
On-the-job learning with expert support — the learner does the work, with an expert available for real-time coaching. Higher cost per learner than e-learning, moderate reach, very high depth. Appropriate for the deep behaviour-change work where the learner integrates the capability into the professional workflow.
Simulations and labs — structured environments where the learner practises against representative tasks without production consequences. Moderate cost, targeted reach, high depth. Appropriate for tasks where production-environment practice is unsafe or expensive.
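Expressed as data, the trade-off becomes checkable. The sketch below is a minimal illustration, not part of the COMPEL methodology; the ordinal scores are assumptions chosen to mirror the mode descriptions above.

```python
# Delivery modes scored on the cost/reach/depth trade-off, with a check
# for the under-designed plan the 70-20-10 frame warns about. Scores are
# illustrative assumptions (1 = low, 3 = high), not measured values.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeliveryMode:
    name: str
    cost: int   # per-learner delivery cost
    reach: int  # breadth of audience reached
    depth: int  # depth of behaviour change achieved

MODES = [
    DeliveryMode("e-learning", cost=1, reach=3, depth=1),
    DeliveryMode("instructor-led training", cost=2, reach=2, depth=2),
    DeliveryMode("community of practice", cost=1, reach=1, depth=3),
    DeliveryMode("peer coaching", cost=1, reach=2, depth=3),
    DeliveryMode("on-the-job with expert support", cost=3, reach=2, depth=3),
    DeliveryMode("simulations and labs", cost=2, reach=1, depth=3),
]

def is_under_designed(plan: list[str], min_depth: int = 3) -> bool:
    """True when every chosen mode is shallow: the 70-20-10 warning."""
    chosen = [m for m in MODES if m.name in plan]
    return all(m.depth < min_depth for m in chosen)

print(is_under_designed(["e-learning", "instructor-led training"]))  # True
print(is_under_designed(["e-learning", "peer coaching"]))            # False
```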
Accenture’s publicly documented AI upskilling programme — part of the firm’s large-scale investment in workforce AI literacy — illustrates the multi-modal approach, combining formal learning paths with guilds, mentor networks, and on-engagement application.2 The point is not that every organisation should copy Accenture’s specific programme; the point is that the programmes that have produced durable capability have typically combined the modes rather than chosen one.
[DIAGRAM: MatrixDiagram — delivery-mode-cost-by-depth — 2×2 with axes “Per-learner delivery cost (low/high)” and “Depth of behaviour change achieved (shallow/deep)”; delivery modes placed in quadrants with annotations; primitive teaches the design trade-off explicitly.]
Reinforcement — the part most training misses
Training without reinforcement decays quickly. The pattern is well-documented across learning research: knowledge recalled immediately after a session drops sharply within a week, drops further within a month, and stabilises at a low baseline without deliberate reinforcement. The practitioner’s response is to design reinforcement into the training plan from the beginning, not to bolt it on after the training has already failed.
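The decay pattern described above is commonly modelled as an Ebbinghaus-style forgetting curve. A minimal sketch, assuming an illustrative retention baseline and time constant rather than measured values:

```python
# Ebbinghaus-style forgetting curve: a common model of post-training decay.
# The baseline (0.2) and time constant (10 days) are illustrative
# assumptions, not values measured in any study.
import math

def retention(days_since_training: float, baseline: float = 0.2,
              time_constant_days: float = 10.0) -> float:
    """Fraction of trained knowledge still recallable after a given gap."""
    return baseline + (1 - baseline) * math.exp(-days_since_training / time_constant_days)

for t in (0, 7, 30, 90):
    print(t, round(retention(t), 2))  # 1.0, 0.6, 0.24, 0.2: sharp drop, low plateau
```

The shape, not the parameters, is the point: without reinforcement the curve falls sharply within a week and settles at the low baseline the paragraph describes.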
Four reinforcement mechanisms are standard.
Spaced repetition. Short refresh sessions at increasing intervals after the initial training — a follow-up at one week, two weeks, one month, three months; a scheduling sketch follows the four mechanisms below. The refreshes are not repetitions of the original content; they are application-focused, asking the learner to use the capability against a new task.
Application assignments. After the training, the learner is given specific tasks that require the capability, with a named reviewer and a timeline. The assignment converts the training from observed content to practised competence.
Manager reinforcement. Managers of the learners are given explicit coaching on what to observe and how to reinforce. A manager who asks, in the weekly one-to-one, “how did you use the assistant on this week’s work and what did you notice?” produces more reinforcement than any programme-office intervention.
Community reinforcement. The community of practice introduced above becomes the long-tail reinforcement channel — a place where learners share ongoing practice, see peer examples, and absorb the sustained context that no discrete training session can provide.
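The spaced-repetition cadence named above converts directly into a schedule. A minimal sketch, assuming the one-week, two-week, one-month, three-month intervals the text gives:

```python
# Spaced-repetition schedule: refreshes at increasing intervals after the
# initial session, using the intervals the text names.
from datetime import date, timedelta

REFRESH_INTERVALS_DAYS = [7, 14, 30, 90]  # one week, two weeks, ~one month, ~three months

def refresh_schedule(initial_session: date) -> list[date]:
    """Dates of the application-focused refresh sessions."""
    return [initial_session + timedelta(days=d) for d in REFRESH_INTERVALS_DAYS]

for refresh in refresh_schedule(date(2026, 1, 12)):
    print(refresh)  # 2026-01-19, 2026-01-26, 2026-02-11, 2026-04-12
```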
Measuring behaviour, not sessions
A training plan that measures completion rates and satisfaction scores is measuring the wrong things. A training plan that measures behaviour change is measuring the thing that matters. Four levels of measurement make up a practitioner-grade assessment, adapted from Kirkpatrick's long-established levels with the behaviour focus the AI context requires.
Level one — completion and satisfaction. Did the learner finish, and did they rate it positively? Useful for detecting gross problems but insufficient on its own.
Level two — knowledge acquisition. Can the learner, immediately after the training, demonstrate knowledge of the concepts and techniques? A short assessment provides the signal. Still insufficient on its own.
Level three — behaviour change. Is the learner, some time after the training, using the capability in their work? Observable through usage telemetry, manager observation, and work-sample review. This is the level most programmes fail to measure, because it is harder to measure than the first two.
Level four — business outcome. Has the behaviour change produced the business outcome the programme was commissioned to produce — higher quality, faster cycle time, better customer experience, lower cost per transaction? Observable through the business metrics the programme targeted.
[DIAGRAM: ScoreboardDiagram — training-outcome-dashboard — four levels (completion, knowledge, behaviour, business outcome) with leading and lagging indicators for each, plus a visible indicator of which levels the programme is actively measuring versus which are assumed; primitive makes the measurement discipline visible to sponsors.]
A programme dashboard that shows green at level one and does not measure levels three and four is a dashboard that cannot answer the question the sponsor actually has. The practitioner insists on the full stack.
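One way to hold that discipline is to record which levels the dashboard measures and which it merely assumes. A minimal sketch; the level names restate the four levels above, and the rest is illustrative:

```python
# The four measurement levels, with a check for the gap the text names:
# a dashboard green at level one that is silent on levels three and four.
LEVELS = {
    1: "completion and satisfaction",
    2: "knowledge acquisition",
    3: "behaviour change",
    4: "business outcome",
}

def unmeasured_levels(measured: set[int]) -> list[str]:
    """Names of the levels the programme assumes rather than measures."""
    return [name for lvl, name in LEVELS.items() if lvl not in measured]

print(unmeasured_levels({1, 2}))  # ['behaviour change', 'business outcome']
```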
A worked example — specialist curriculum design
Consider the practitioner designing the specialist-tier training programme for financial analysts adopting a generative research-assistant tool. The literacy segmentation from Article 5 has already named the target competencies. The training plan converts the competencies into a delivery sequence.
Week one carries a two-hour instructor-led session. The session opens with a worked example — an actual research question, taken through the tool with the instructor narrating the judgment calls. It introduces the tool’s capabilities and, critically, its failure modes in the specific research context. It concludes with a short assessment confirming the learner has absorbed the conceptual content.
Weeks two and three run an application assignment. Each analyst takes three assigned research tasks through the tool, produces output, and has the output reviewed by a designated senior analyst peer-coach. The peer-coach gives written feedback against a rubric the programme has published.
Week four opens the community of practice. The cohort joins a shared channel where patterns, prompts, and noticed failure modes are posted. The channel is facilitated by a senior specialist — not by the programme office — with a visible cadence of weekly summary posts.
Month three runs the first spaced-repetition refresh. A one-hour session focuses on patterns that have emerged across the cohort in the first two months, with an emphasis on the specific failure modes that have surfaced and the responses the community has developed.
Month six runs the behaviour assessment. A work-sample review — observed use of the tool on a representative task — produces a proficiency signal for each analyst. Analysts not yet at proficiency receive a targeted intervention; analysts at proficiency continue to the community-of-practice reinforcement cycle.
This is a training plan that takes behaviour change seriously. It is also more expensive than an e-learning module with a multiple-choice quiz. The cost difference is the cost of actual capability.
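The sequence can also be published as structured data, so the programme office, peer-coaches, and managers work from the same plan and every phase visibly names an owner and an output. A minimal sketch; the schema and field names are illustrative assumptions, with the values restating the worked example:

```python
# The worked example's delivery sequence as data. The dict schema is an
# illustrative assumption; the phase contents restate the text above.
CURRICULUM = [
    {"when": "week 1",    "mode": "instructor-led session (2h)",
     "owner": "instructor",                "output": "short knowledge assessment"},
    {"when": "weeks 2-3", "mode": "application assignment (3 tasks)",
     "owner": "senior-analyst peer-coach", "output": "rubric-based written feedback"},
    {"when": "week 4",    "mode": "community of practice",
     "owner": "senior specialist",         "output": "weekly summary posts"},
    {"when": "month 3",   "mode": "spaced-repetition refresh (1h)",
     "owner": "instructor",                "output": "cohort failure-mode review"},
    {"when": "month 6",   "mode": "behaviour assessment (work sample)",
     "owner": "assessor",                  "output": "per-analyst proficiency signal"},
]

# Every phase must name an owner and an observable output.
assert all(p["owner"] and p["output"] for p in CURRICULUM)
```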
When formal training is the wrong answer
One final discipline is worth naming. Sometimes the right response to an adoption gap is not more training. If the employees know how to use the tool and are choosing not to, the gap is not a knowledge gap; it is a motivation, workflow, or resistance gap, and training will not close it. If the tool is materially worse than the existing workflow for the actual tasks employees do, the gap is a product gap, and training will not close it either. The practitioner’s discipline is to diagnose before prescribing — the diagnostic discipline taught in Article 4 applies here as much as to resistance generally. A training plan prescribed for the wrong gap is wasted investment.
Summary
Training for behaviour change requires the 70-20-10 frame — heavy investment in experiential and social learning, formal training as only one element. The delivery modes each have cost, reach, and depth profiles that the practitioner selects across. Reinforcement — spaced repetition, application assignments, manager coaching, community of practice — is designed in from the beginning. Measurement runs through four levels from completion to business outcome, with the practitioner insisting on the behaviour and outcome levels the sponsor needs. And sometimes the right response to an adoption gap is not training at all. Article 8 turns to role redesign and the human-AI collaboration patterns that determine what the job actually becomes when AI shifts the work.
Cross-references to the COMPEL Core Stream:
- EATF-Level-1/M1.2-Art23-Training-and-Adoption-Plan.md — training and adoption plan artifact the programme produces
- EATF-Level-1/M1.6-Art05-Change-Management-for-AI-Transformation.md — change-management framework within which training is a discipline
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. Morgan W. McCall Jr., Michael M. Lombardo, and Ann M. Morrison, The Lessons of Experience: How Successful Executives Develop on the Job (Lexington Books, 1988); Morgan W. McCall Jr., High Flyers: Developing the Next Generation of Leaders (Harvard Business Review Press, 1998).
2. Accenture, published case studies on AI upskilling programmes (2021-2024), https://www.accenture.com/us-en/case-studies (accessed 2026-04-19).