COMPEL Specialization — AITM-CMD: AI Change Management Associate Article 5 of 11
AI literacy used to be an optional investment. As of February 2025, under Article 4 of the EU AI Act (Regulation 2024/1689), providers and deployers of AI systems must ensure that their staff and other persons operating AI systems on their behalf possess a sufficient level of AI literacy — taking into account the persons’ technical knowledge, experience, education and training, the context of use, and the persons on whom the AI systems are to be used.[1] The requirement is a legal duty, not a best-practice recommendation. Practitioners working for EU-regulated organisations, or for organisations whose AI systems touch the EU market, must design and sustain an AI literacy programme that meets the duty. Practitioners elsewhere face equivalent pressure from national guidance — Singapore’s IMDA has published an AI literacy initiative for public-sector workforce development[2] — and from the increasing expectation that a trained workforce is a reasonable baseline for any organisation operating AI systems at scale. This article teaches the practitioner to read the Article 4 duty accurately, to segment the workforce for literacy design, to build role-appropriate curricula, to measure achievement, and to sustain the programme as AI practice evolves.
Reading Article 4 correctly
Article 4 is short and specific. The duty is to ensure “a sufficient level of AI literacy” for staff and other persons operating AI systems. The duty is borne by both providers (those placing AI systems on the market) and deployers (those using AI systems under their authority). The duty is calibrated to the circumstances — the technical knowledge of the persons concerned, the context of use, and the persons on whom the AI systems operate. The duty does not specify a single curriculum, a single certification, or a single hour-count. It requires sufficiency for the circumstances.
Three practical consequences follow. First, “sufficient” is judged by context — a literacy programme that covers low-risk internal productivity tools is different from one that covers high-risk systems affecting customers. Second, the duty applies to operators of the systems, which in deployer organisations frequently includes business users, not only technical staff. Third, no single training programme satisfies the duty for every role; the duty requires segmentation.
A practitioner who reads the duty as “we need an AI training course for everyone” has misread it. A practitioner who reads it as “we need a differentiated literacy programme matched to roles, systems, and populations affected” has read it correctly. The curriculum is the output; the duty is the input.
The segmentation method
Segmentation is the first substantive design decision. Five role tiers capture the range of literacy needs in most organisations, with the proviso that the boundaries are illustrative — a practitioner calibrates to the specific organisation’s structure.
Executive tier. Board members, C-suite, function heads. Literacy for this tier concerns the strategic, ethical, governance, and regulatory dimensions of AI. It does not typically require deep technical content. The question an executive must be able to answer is “what does this AI system do, where could it go wrong, who is accountable, and what are we obliged to do about it under law and policy?”
Manager tier. Directors, senior managers, team leads whose reports use AI systems. Literacy for this tier concerns operational oversight — when to escalate, how to coach the team, how to interpret adoption and quality signals, what the organisation’s policies say about AI use. Managers frequently receive the least literacy support because programme designers focus on executive messaging and end-user training, leaving the middle layer under-equipped to carry the change.
Specialist tier. Roles whose professional work is directly transformed by AI — analysts, accountants, lawyers, clinicians, engineers, marketers. Literacy for this tier concerns capable, critical use — how the system works in enough depth to use it well, how to spot output problems, what the system’s limits are in the specific professional context, how to integrate it into the professional workflow without degrading professional judgment.
General employee tier. Staff whose work is adjacent to AI systems or who use organisational AI tools (e.g., a chat-with-your-documents assistant, an AI-scheduling tool). Literacy for this tier is awareness-plus-basic-skill — what the tools do, how to use them safely, what the organisation’s acceptable-use policy says, where to report concerns.
Contractor and partner tier. External personnel who operate the organisation’s AI systems on its behalf, or who are permitted to use the organisation’s AI tools. Article 4 extends the literacy duty to “other persons operating AI systems on behalf of” the organisation, which means the practitioner’s literacy programme must reach this population even though it does not sit in the standard HR training infrastructure.
[DIAGRAM: OrganizationalMappingBridge — role-to-literacy-target — five-tier mapping across executive, manager, specialist, general employee, contractor; each tier annotated with primary literacy question, core content areas, and compliance-critical content under Article 4; primitive encodes the segmentation as a design artifact.]
Role-appropriate curricula
The segmentation drives the curriculum design, not the other way round. A curriculum assembled before the segmentation is complete will be over-scoped for some tiers and under-scoped for others. Four curriculum design principles apply across the tiers.
Principle one — concept before tool. Teach what the capability is (e.g., retrieval-augmented generation) before teaching how a specific tool implements it. Tools change; concepts move more slowly. A curriculum anchored to a specific vendor’s interface ages badly. A curriculum anchored to the underlying concept remains useful when the tool is replaced.
Principle two — context before content. Start each module with the use case or professional context in which the literacy matters. Accountants learning about generative-AI hallucination absorb the content better when the module opens with an example of a hallucinated source citation in a draft client memo, not when it opens with a definition of hallucination.
Principle three — critical use, not just capable use. Every curriculum at every tier includes explicit treatment of where the AI capability fails, where it should not be used, and what the professional’s responsibilities are for catching problems. A literacy programme that teaches only capable use produces confident employees who miss the cases the system gets wrong.
Principle four — anchored to the organisation’s policy, not to generic best practice. Generic “responsible AI” content is less useful than content that says “our organisation’s acceptable-use policy is X, our incident-reporting process is Y, our approved tools are Z, unapproved tools are not to be used because…” The literacy programme operates the organisation’s own policy, not a generic industry primer.
Proficiency targets and measurement
A literacy programme without proficiency targets cannot demonstrate sufficiency to a regulator or to a sponsor. The target is calibrated to the tier and to the risk exposure of the role.
For the executive tier, proficiency is typically demonstrated through completion of a short programme plus a scenario-based discussion — the executive can articulate the AI governance posture of the organisation, the regulatory landscape, the accountability chain, and the ethical considerations that bear on decisions at their level. A single multiple-choice exam does not demonstrate this proficiency; a structured conversation with evidence does.
For the manager tier, proficiency includes the executive elements plus operational demonstration — the manager can conduct a team conversation about AI tool use, can identify a quality signal in the team’s adoption metrics, can apply the organisation’s escalation policy to a concrete scenario.
For the specialist tier, proficiency is demonstrated at the professional-task level — the specialist can use the tool for the representative task at the professional standard, can identify the category of errors the tool is prone to in their specific domain, can produce evidence for a peer reviewer of appropriate critical engagement with the tool’s output.
For the general employee tier, proficiency is typically completion of the awareness curriculum plus a short knowledge check. The proportional investment matches the lower risk exposure.
For contractors and partners, the organisation defines proficiency equivalents and builds them into contracting, onboarding, and periodic reassessment.
[DIAGRAM: TimelineDiagram — multi-year-literacy-roadmap — horizontal timeline with quarterly milestones across a two-year horizon; five tier-lanes showing initial curriculum, proficiency demonstration, refresh cadence, and regulatory-update cycles; primitive encodes the programme as a sustained commitment rather than a one-off event.]
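The per-tier proficiency targets can likewise be tracked as data, so the programme can report outstanding evidence per person. A small sketch, assuming illustrative evidence names (Article 4 prescribes no specific evidence types; these labels are hypothetical):

```python
# Illustrative proficiency evidence per tier. The names are assumptions,
# not regulatory requirements; Article 4 asks only for "sufficient" literacy.
PROFICIENCY_EVIDENCE = {
    "executive": {"short_programme", "scenario_discussion"},
    "manager": {"short_programme", "scenario_discussion", "team_conversation"},
    "specialist": {"task_demonstration", "error_identification", "peer_review"},
    "general": {"awareness_module", "knowledge_check"},
    "contractor": {"contractual_equivalent", "periodic_reassessment"},
}

def gaps(tier: str, completed: set[str]) -> set[str]:
    """Return the evidence items still outstanding for a person in a tier."""
    return PROFICIENCY_EVIDENCE[tier] - completed

# A specialist who has only demonstrated the task still owes two items.
print(sorted(gaps("specialist", {"task_demonstration"})))
# -> ['error_identification', 'peer_review']
```

Note that the manager set contains the executive set, mirroring the text's point that manager proficiency includes the executive elements plus operational demonstration.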
Sustaining the programme
AI practice evolves — new capabilities emerge, new risks are recognised, new regulations come into effect, new tools enter the organisation. A literacy programme that ships once and is not sustained goes stale within twelve months. Three mechanisms keep it current.
Refresh cadence. Tier by tier, the practitioner defines a refresh cadence — typically annual for the executive and manager tiers, annual or semi-annual for the specialist tier given the pace of change, annual for general employees. Refreshes are not repeats; they are updates focused on what has changed in the organisation’s tools, policies, and the external regulatory landscape.
Trigger events. In addition to the scheduled cadence, trigger events force out-of-cycle literacy updates — a new regulation taking effect, a new high-risk system being deployed, an incident with a literacy-related root cause. The practitioner builds the trigger taxonomy into the programme charter.
Community of practice. Sustainment is not only formal training. A community of practice — where specialists share emerging patterns, newly discovered failure modes, new prompts, new techniques — produces continuous informal literacy that formal training cannot match. The practitioner’s design includes the community-of-practice infrastructure alongside the curriculum.
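The cadence-plus-trigger mechanism is simple enough to encode directly: a scheduled refresh date per tier, pulled forward whenever a trigger event fires. A sketch under assumed cadence values (the day counts and event names are illustrative, not prescribed by the methodology):

```python
from datetime import date, timedelta

# Illustrative refresh cadences in days (values are assumptions).
CADENCE_DAYS = {
    "executive": 365,
    "manager": 365,
    "specialist": 182,   # semi-annual, given the pace of change
    "general": 365,
    "contractor": 365,
}

# Hypothetical trigger taxonomy from the programme charter.
TRIGGER_EVENTS = {"new_regulation", "new_high_risk_system", "literacy_incident"}

def next_refresh(tier: str, last_refresh: date,
                 events: set[str], today: date) -> date:
    """Scheduled refresh date, pulled forward if a trigger event fired."""
    if events & TRIGGER_EVENTS:
        return today  # out-of-cycle update, regardless of cadence
    return last_refresh + timedelta(days=CADENCE_DAYS[tier])

print(next_refresh("specialist", date(2025, 1, 1), set(), date(2025, 3, 1)))
# -> 2025-07-02
```

The design choice worth noting is that triggers override cadence rather than merely resetting it: an incident with a literacy-related root cause warrants an update now, not at the next scheduled slot.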
A worked example
Consider a mid-size financial-services firm deploying a generative assistant for portfolio managers, and simultaneously using an automated credit-decisioning system for retail loans. The literacy programme the practitioner designs has distinct tracks.
For the executive tier, a short programme covers AI strategy, the organisation’s AI risk taxonomy (anchored to the firm’s own risk register), the relevant EU AI Act obligations including Article 4 specifically, and the accountability chain for decisions supported by both systems. Proficiency is demonstrated through a structured conversation with the general counsel and chief risk officer.
For portfolio managers — specialists — the programme covers the generative assistant’s capabilities and limits in the specific portfolio-management context, the hallucination patterns to watch for in research summaries, the organisation’s policy on using assistant output as input to client-facing recommendations, and the audit-trail requirements. Proficiency is demonstrated by a work-sample review where the manager’s use of the tool is observed against the professional standard.
For credit officers — specialists on a different system — the programme covers the credit-decisioning system’s reasoning, the override policies, the fair-lending implications of the model’s features, the reporting chain for suspected disparate-impact patterns, and the consumer-disclosure requirements. Proficiency is demonstrated by scenario-based review with compliance.
For general employees — tellers, call-centre staff, operations — the programme covers what the two systems do, when staff interact with them, what the acceptable-use policy permits, and where to report concerns. Proficiency is demonstrated by a short knowledge check.
The shape of the programme is differentiated, but the Article 4 duty is satisfied coherently: each population receives literacy sufficient for its role, anchored to the specific systems the organisation operates, calibrated to the context of use.
Summary
AI literacy is a legal duty under EU AI Act Article 4, and a strategic lever under any reasonable change-management frame. The practitioner’s job is to segment the workforce into role tiers, design curricula that teach concept before tool and critical use alongside capable use, define proficiency targets matched to the tier’s risk exposure, and sustain the programme through refresh cadence, trigger events, and a community of practice. A programme that ignores segmentation produces either compliance risk or training waste; a programme that practises segmentation well satisfies the duty and earns employee investment in the change. Article 6 turns to communication strategy, where the segmented audience map from this article directly feeds the message design.
Cross-references to the COMPEL Core Stream:
- EATF-Level-1/M1.2-Art23-Training-and-Adoption-Plan.md — training-and-adoption-plan artifact that the literacy programme operationalises
- EATF-Level-1/M1.6-Art02-AI-Literacy-Strategy-and-Program-Design.md — literacy strategy foundations extended here into segmentation practice
© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.
Footnotes
1. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), Article 4, https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed 2026-04-19). ↩
2. Info-communications Media Development Authority of Singapore, AI literacy initiatives documentation (2023-2024), https://www.imda.gov.sg/ (accessed 2026-04-19). ↩