COMPEL Specialization — AITM-CMD: AI Change Management Associate Lab 1 of 2
Lab brief
Meridian Wealth Partners is a fictional mid-sized wealth-management firm based on composite characteristics drawn from publicly reported financial-services AI programmes (including the FINRA and SEC public discussions of AI use in the investment-advisory industry through 2024). The firm has approximately 3,400 employees across three regions and operates within the EU single market, meaning EU AI Act Article 4 applies to its staff operating AI systems. The programme sponsor — the chief operating officer — has commissioned a sixteen-week design exercise to produce the firm’s AI literacy programme. You are the assigned AITM-CMD practitioner. This lab walks you through the design work for three specific personas.
Lab inputs
- An organisational headcount summary by function: investment advisory (840), portfolio management (220), research (140), operations (720), technology (380), compliance and legal (210), HR and people operations (190), client service (380), enabling functions (320).
- Two AI systems currently being deployed: a generative research-assistant tool available to investment advisors, portfolio managers, and researchers; an automated anti-money-laundering (AML) alert-triage tool used by operations and compliance.
- An excerpt from the firm’s risk register noting that the generative tool is classified internally as “medium-risk deployment” and the AML tool as “high-risk deployment” given its role in regulatory compliance and customer-impact decisions.
- The firm’s published acceptable-use policy for generative AI, which prohibits use of external generative tools for client data and permits only the approved in-house tool.
- A preliminary engagement survey showing that forty-two per cent of employees across the firm have experimented with generative AI tools personally; self-reported literacy is highly variable even within functions.
- The sponsor’s stated goal: “A programme that satisfies EU AI Act Article 4 defensibly and that actually helps our people work well with the tools — we want both, not a compliance-only exercise.”
Exercise 1 — Define three personas (15 minutes)
From the function list above, select three personas who represent distinct literacy needs and for whom the Article 4 duty is non-trivial. For each persona, capture (a) the function and rough headcount within the persona, (b) the AI systems they will operate, (c) the primary professional risk if literacy is insufficient, and (d) the tier on the role-tier framework from Article 5 (executive, manager, specialist, general employee, contractor).
A sample completed record for persona one might read: “Persona one — investment advisors (840 headcount, specialist tier). Systems operated — generative research-assistant tool in the client-meeting preparation workflow. Primary risk if literacy insufficient — advisor makes client-facing recommendations citing fabricated sources produced by the tool, creating both regulatory and fiduciary exposure. Tier — specialist.”
Complete two further personas from different functions. Together, the three personas should cover a range of tiers, risk profiles, and system-interaction patterns, because the lab’s pedagogical purpose is to demonstrate that one curriculum does not serve all three.
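If you want to keep your persona records consistent across the later exercises, one way to structure them is as a small data record. This is an optional sketch, not part of the lab’s required deliverables; all field names and the example values below are illustrative assumptions drawn from the sample record above.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: field names are assumptions, not a
# prescribed COMPEL deliverable format.
@dataclass
class Persona:
    function: str
    headcount: int
    tier: str                       # executive / manager / specialist / general employee / contractor
    systems_operated: list[str] = field(default_factory=list)
    primary_risk: str = ""

# Persona one from the sample record in the lab text.
persona_one = Persona(
    function="investment advisory",
    headcount=840,
    tier="specialist",
    systems_operated=["generative research-assistant tool"],
    primary_risk=(
        "client-facing recommendations citing fabricated sources, "
        "creating regulatory and fiduciary exposure"
    ),
)

print(persona_one.tier)  # specialist
```

Capturing the four required fields (function and headcount, systems operated, primary risk, tier) in one record makes it easy to check in Exercise 2 that each curriculum decision traces back to a persona attribute.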
Exercise 2 — Specify role-appropriate curricula (25 minutes)
For each of your three personas, design the curriculum outline. Each outline should include:
- Five to eight learning modules (title and one-sentence description).
- The primary delivery mode(s) per module (e-learning, instructor-led, community of practice, peer coaching, on-the-job, simulation).
- The estimated total time commitment for the initial curriculum.
- The refresh cadence you would set (annual, semi-annual, event-driven).
- Two explicit links to EU AI Act Article 4 — specific pieces of the curriculum that directly address the Article 4 sufficiency duty for this persona’s context of use.
A sample completed record for investment advisors might include a module on “Hallucination Patterns in the Research-Assistant Tool — Domain-Specific Failure Modes” with instructor-led delivery of ninety minutes, because the hallucination patterns matter in specific ways for investment-research content that a general module on LLM hallucination does not cover. The Article 4 linkage would cite the module as evidence that the curriculum addresses “the persons on whom the AI systems are to be used” — clients whose investment decisions will be shaped by research summaries.
Complete the curricula for all three personas. Your completed curricula are the input to Exercise 3.
Exercise 3 — Specify proficiency targets and measurement (20 minutes)
For each of your three personas, define the proficiency target and measurement mechanism. The record for each persona should include:
- The proficiency target stated in observable terms (what the learner will be able to do, not what they will have attended).
- The measurement mechanism (knowledge check, work-sample review, structured conversation, manager-observed behaviour, scenario assessment).
- The evidentiary artifact that documents proficiency for Article 4 sufficiency defence (training records alone do not satisfy the duty; what additional artifact does?).
- The intervention triggered when proficiency is not demonstrated.
An example record for investment advisors might read: “Proficiency target — advisor can prepare a client-meeting research summary using the tool, identify at least one hallucination or unsupported claim in the tool’s output, and produce a client-ready summary that is substantively accurate. Measurement — work-sample review conducted by a senior advisor peer-coach using a published rubric, performed once within the first ninety days and then annually. Artifact for Article 4 sufficiency — the completed rubric with evidence of the advisor’s judgment, stored in the LMS and referenced against the advisor’s performance record. Intervention on non-demonstration — a second work-sample review after targeted coaching; persistent non-demonstration escalates to temporary suspension of advisor authority to use tool output in client-facing materials until proficiency is demonstrated.”
Complete the proficiency and measurement specifications for all three personas. Be particularly attentive to the evidentiary artifact question — a literacy programme that cannot produce evidence of sufficiency on audit cannot defend itself.
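The intervention ladder in the example record (targeted coaching after a first failed review, suspension of tool-output authority after persistent non-demonstration) can be expressed as simple decision logic. This is a hedged sketch for learners who want to make the escalation explicit; the record fields and the two-step threshold are illustrative assumptions taken from the example, not a prescribed rule.

```python
from dataclasses import dataclass

# Illustrative sketch: field names and thresholds mirror the example
# record for investment advisors, but are assumptions, not lab requirements.
@dataclass
class ProficiencyRecord:
    persona: str
    target: str          # observable behaviour, not attendance
    mechanism: str       # e.g. "work-sample review"
    artifact: str        # evidence retained for Article 4 sufficiency defence
    reviews_failed: int = 0

def next_intervention(record: ProficiencyRecord) -> str:
    """Escalation mirroring the example record: coaching first, then
    suspension of tool-output authority until proficiency is shown."""
    if record.reviews_failed == 0:
        return "none: proficiency demonstrated"
    if record.reviews_failed == 1:
        return "targeted coaching, then a second work-sample review"
    return "suspend authority to use tool output in client-facing materials"

record = ProficiencyRecord(
    persona="investment advisors",
    target="identify hallucinations or unsupported claims in tool output",
    mechanism="work-sample review against a published rubric",
    artifact="completed rubric stored in the LMS",
    reviews_failed=2,
)
print(next_intervention(record))
# suspend authority to use tool output in client-facing materials
```

Note that the `artifact` field is what carries the Article 4 defence: the escalation logic is only defensible if each step leaves a documented trace.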
Exercise 4 — Design the sustainment mechanism (15 minutes)
For the firm as a whole (not per-persona), design the sustainment mechanism that will keep the programme current through eighteen months post-launch. The record should include:
- The refresh triggers (scheduled cadence plus event-driven triggers — name at least three events that would trigger an out-of-cycle refresh).
- The community-of-practice design (who facilitates it, what cadence, what content it is producing, how it is made visible).
- The feedback loop from the community back to the formal curriculum (how does the programme know which emerging failure modes to incorporate into the next refresh?).
- The governance arrangement (who owns the programme’s quality, who approves changes to the curriculum, what is the review cadence for the programme as a whole).
The sustainment design is often under-specified in literacy programmes, producing curricula that go stale within twelve months. The exercise tests whether the practitioner treats sustainment as a first-class design question rather than as an afterthought.
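The refresh-trigger design above (a scheduled cadence plus named event-driven triggers) can be sketched as a small decision function. This is an illustrative assumption-laden sketch: the cadence value and the three example trigger events are placeholders, not answers to the exercise.

```python
# Hedged sketch of an event-driven refresh check. The cadence and the
# trigger names below are illustrative assumptions, not the lab's answer.
SCHEDULED_CADENCE_DAYS = 365

EVENT_TRIGGERS = {
    "new AI system deployed",
    "material model update to an existing system",
    "new regulatory guidance on Article 4",
}

def refresh_due(days_since_last_refresh: int, observed_events: set[str]) -> bool:
    """Refresh is due on schedule, or out-of-cycle when any named
    trigger event has occurred since the last refresh."""
    return (
        days_since_last_refresh >= SCHEDULED_CADENCE_DAYS
        or bool(observed_events & EVENT_TRIGGERS)
    )

print(refresh_due(100, {"new AI system deployed"}))  # True
```

The point of making the triggers explicit and enumerable is governance: the programme owner can audit which trigger fired for each refresh, rather than relying on ad-hoc judgment.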
Exercise 5 — Article 4 defence memo (15 minutes)
Write a one-page memo to the sponsor stating how the designed programme defends against an Article 4 regulatory inquiry. The memo should include:
- The explicit statement of the duty as the firm interprets it.
- The segmentation decisions and their rationale (why these personas, why not others).
- The sufficiency argument for each persona (what makes the designed programme sufficient for this persona’s context of use).
- The evidence that would be produced on inquiry (training records plus additional artifacts).
- The gaps the practitioner honestly acknowledges — where sufficiency is not yet established and what is being done to close the gap.
The final bullet is the one that tests professional integrity. A practitioner who writes a memo claiming sufficiency everywhere without naming any gap is writing the wrong memo. Regulators and auditors trust memos that acknowledge limitations more than memos that claim none.
Debrief
The lab’s objective is to demonstrate the segmentation-curriculum-proficiency-sustainment chain in action. A well-run debrief compares how different learners selected their three personas, what design choices they made for the curricula, and which proficiency measurement mechanisms they chose. The richest feedback surfaces differences in Article 4 interpretation: learners who read the duty narrowly tend to produce thinner programmes than learners who read it broadly, and in the absence of specific regulatory guidance both readings are defensible, so the conversation about interpretation is exactly the conversation the credential wants to produce.
The practitioner habit the lab builds is to hold the regulatory duty and the practical transformation goal in the same frame rather than treating them as competing demands. A programme that satisfies Article 4 defensibly and that actually helps people work well with the tools is a better programme than one that satisfies either goal alone.
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.