AITE M1.4-Art12 v1.0 Reviewed 2026-04-06 Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Defining AI Literacy at Four Levels



COMPEL Specialization — AITE-WCT: AI Workforce Transformation Expert — Article 12 of 35


A multinational manufacturer convenes a working group to design the organisation’s AI literacy programme. The group’s first decision point is what employees should actually know. One proposal argues for a single mandatory training for all 45,000 staff covering AI fundamentals, ethics, and safe use. A second proposal argues for a differentiated programme with technical depth for engineers, business-use content for office workers, and awareness-only content for plant staff. The working group resolves the disagreement by commissioning both. Six months later, neither satisfies the regulator preparing for EU AI Act Article 4 compliance review. The first proposal infantilised technical staff and left business users under-prepared for their specific risk surface. The second proposal was designed by function rather than by what each role actually needed to do with AI. Both proposals missed the underlying question — what level of literacy each role needs, not which function it belongs to. This article teaches the expert practitioner to apply a four-level literacy taxonomy, to map roles onto levels based on AI interaction profile rather than functional category, and to avoid the two common design errors.

Why four levels

Literacy frameworks proliferate. UNESCO’s AI literacy framework, Stanford HAI’s AI literacy work, the Digital Education Action Plan of the European Commission, OECD PIAAC adult-competencies research, and many enterprise taxonomies each provide different cuts.[1][2] The four-level COMPEL taxonomy is a synthesis optimised for workforce-transformation operational use rather than for academic comprehensiveness. Four levels provide enough resolution to differentiate meaningfully without multiplying administrative overhead. The four levels are general population, AI-user, AI-worker, and AI-specialist.

The key design principle behind the four-level structure is that the level is assigned based on the role’s AI interaction profile — how much, how often, and at what stakes the role’s holder interacts with AI systems — not on the functional category, seniority, or educational background of the role’s incumbents. A senior executive who makes board-level strategic decisions about AI but does not operate AI systems themselves sits at a different level from a back-office clerk who uses an AI triage tool daily. The general-to-specialist progression is orthogonal to organisational hierarchy.

The four levels defined

General population. The population that has incidental contact with AI systems operated by the organisation — as employees whose work is adjacent to but not operationally involved with AI, as customers of internal AI tools (e.g., an AI-powered HR chatbot), or as citizens affected by the organisation’s external AI outputs. Literacy content at this level covers what AI is, what the organisation uses it for, what employees’ rights are in relation to AI-supported decisions about them, how to report concerns, and how to distinguish AI-generated content from human-generated content in the organisation’s communications. Content is approximately two to four hours of structured learning plus periodic reinforcement. The general-population level is the baseline EU AI Act Article 4 literacy duty for the broadest population.[3]

AI-user. Employees who operate AI systems in their work but are not professionally responsible for the systems’ outputs in a technical sense. Office workers using AI-drafting tools, customer-service representatives using AI-triage systems, marketers using AI-content tools, salespeople using AI-recommendation tools. Literacy at this level adds: specific use of the tools the role actually uses, identification of failure modes, correct escalation practice, data-handling obligations, and the organisation’s acceptable-use policies. Content is approximately eight to sixteen hours structured across foundational and tool-specific modules. A substantial majority of knowledge workers in most AI-adopting organisations fit the AI-user level.

AI-worker. Employees whose professional work is meaningfully transformed by AI and who are responsible for the quality and consequences of outputs produced with AI assistance. Analysts, accountants, lawyers, clinicians, engineers, consultants, compliance officers. Literacy at this level adds: understanding of underlying model types and their characteristic failures, critical evaluation of AI outputs for the professional context, integration of AI into the professional workflow without degradation of professional judgment, governance obligations specific to professional use, and documentation requirements. Content is approximately thirty to seventy hours with substantial applied practice and profession-specific assessment.

AI-specialist. Employees whose role is to design, build, govern, or audit AI systems — data scientists, ML engineers, AI product managers, AI risk officers, AI governance leads, AI auditors, AI educators. Literacy at this level adds: deep model-type understanding, training and evaluation methodology, safety and alignment techniques, regulatory and standards interpretation in depth, organisational-design decisions affecting AI, and the capability to teach and assess others. AI-specialist literacy is effectively a professional qualification and typically aligns with formal credentials (COMPEL’s AITF/AITP/AITGP/AITL tiers, and the AITM/AITE/AITB specialisations, are structured to support this population). Content is hundreds of hours, frequently over years.

[DIAGRAM: ConcentricRingsDiagram — four-level-literacy-taxonomy — concentric rings from the outermost (general population) inward to AI-user, AI-worker, and AI-specialist at the centre. Each ring annotated with population share in typical AI-adopting organisations, content hours, assessment approach, and primary regulatory reference. Primitive teaches the four-level taxonomy as a nested progression.]
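
Where the taxonomy is carried in tooling (an LMS integration, an evidence register), a minimal sketch of the four levels as a data structure can make the nesting and ordering explicit. The Python sketch below is illustrative: the names `LiteracyLevel`, `LevelProfile`, and `LEVEL_PROFILES` are assumptions of this article, not part of any platform API, and the hour ranges follow the level definitions above.

```python
from dataclasses import dataclass
from enum import IntEnum


class LiteracyLevel(IntEnum):
    # IntEnum ordering makes the nesting ordinal, so borderline placements
    # can "defer to the higher level" with a simple max().
    GENERAL_POPULATION = 1
    AI_USER = 2
    AI_WORKER = 3
    AI_SPECIALIST = 4


@dataclass(frozen=True)
class LevelProfile:
    content_hours: tuple[int, int] | None  # (low, high); None for open-ended
    assessment: str


# Indicative figures taken from the level definitions above.
LEVEL_PROFILES: dict[LiteracyLevel, LevelProfile] = {
    LiteracyLevel.GENERAL_POPULATION:
        LevelProfile((2, 4), "completion check plus periodic reinforcement"),
    LiteracyLevel.AI_USER:
        LevelProfile((8, 16), "foundational and tool-specific module assessment"),
    LiteracyLevel.AI_WORKER:
        LevelProfile((30, 70), "profession-specific applied assessment"),
    LiteracyLevel.AI_SPECIALIST:
        LevelProfile(None, "formal credential alignment"),
}
```

The ordinal encoding is deliberate: the mapping steps in the next section exploit it when resolving borderline placements.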

Mapping roles to levels

Level mapping is operational work done at the role-family level, not at the individual-employee level. Three steps structure the mapping.

Step one — identify the AI interaction profile for the role. What AI systems does this role interact with? At what frequency? What stakes attach to the interactions? The interaction profile draws on the role-exposure work of Article 4 and the specific systems in use. A role whose incumbent interacts with an AI-drafting tool daily at low stakes is in a different profile from a role whose incumbent supervises an AI credit-decision system at high stakes.

Step two — place the role on the four-level taxonomy. Using the interaction profile, the role is placed at general-population, AI-user, AI-worker, or AI-specialist. Borderline cases — roles that interact substantially but at low stakes, or rarely but at high stakes — resolve by deferring to the higher level. The literacy investment for a borderline role at the higher level is a small incremental cost relative to the risk of under-preparation.

Step three — document the mapping as evidence. The mapping becomes part of the compliance-grade literacy evidence architecture (Article 16). For each role family the evidence record names the role, the AI interaction profile, the assigned level, the rationale, and the date of the assessment. The mapping is reviewed annually or when the role’s AI interaction profile materially changes.
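
Taken together, the three steps lend themselves to a compact encoding. The sketch below reuses the `LiteracyLevel` enum from the earlier sketch; `InteractionProfile`, `candidate_levels`, `assign_level`, and `MappingEvidence` are hypothetical names, and the placement heuristic deliberately simplifies what is, in practice, a judgment that also weighs professional responsibility for outputs.

```python
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class InteractionProfile:
    # Step one: which systems, how often, and at what stakes.
    systems: tuple[str, ...]
    frequency: str  # e.g. "none", "rare", "weekly", "daily"
    stakes: str     # e.g. "low", "high"


def candidate_levels(profile: InteractionProfile) -> set[LiteracyLevel]:
    # Step two, first pass: plausible placements from the profile alone.
    # Real placement also weighs professional responsibility for outputs;
    # this heuristic only illustrates the structure of the decision.
    if not profile.systems or profile.frequency == "none":
        return {LiteracyLevel.GENERAL_POPULATION}
    levels = {LiteracyLevel.AI_USER}
    if profile.stakes == "high":
        levels.add(LiteracyLevel.AI_WORKER)  # rare-but-high-stakes borderline
    return levels


def assign_level(profile: InteractionProfile) -> LiteracyLevel:
    # Step two, resolution: borderline cases defer to the higher level.
    return max(candidate_levels(profile))


@dataclass(frozen=True)
class MappingEvidence:
    # Step three: compliance-grade record per role family (Article 16).
    role_family: str
    profile: InteractionProfile
    assigned_level: LiteracyLevel
    rationale: str
    assessed_on: date
```

Under this sketch, a role that supervises a credit-decision system rarely but at high stakes yields the candidate set {AI-user, AI-worker} and resolves upward to AI-worker, which is exactly the deferral rule described in step two.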

The two common design errors

Infantilising the workforce. Mandatory general-population content delivered uniformly to AI-specialist populations produces disengagement and programme disrepute. The symmetric error — general-population content pitched at AI-worker depth — produces non-completion and disengagement from the opposite direction. Level-appropriate design avoids both errors. The OECD PIAAC skills-assessment data illustrate how broadly adult skills are distributed; designing literacy to a single level assumes a uniformity that is not present.[4]

Overshooting what the role requires. Well-meaning programme designers sometimes target higher levels than roles require, reasoning that “more is better”. Over-targeted content imposes time costs and completion friction, and eventually triggers programme pullback when completion rates fall. Target the level the role requires; extend voluntarily for employees whose personal aspirations exceed the role requirement.

Integration with regulatory and standards frameworks

Multiple regulatory and standards frameworks anchor to the four-level structure either directly or by analogous distinctions. The EU AI Act Article 4 literacy duty is calibrated to the technical knowledge, experience, education, and training of persons operating AI systems; the four-level structure is a practical operationalisation of that calibration.[3] ISO/IEC 42001 Clauses 7.2 on competence and 7.3 on awareness map straightforwardly to the AI-specialist and AI-worker levels on the one hand and the AI-user and general-population levels on the other.[5] NIST AI Risk Management Framework GOVERN 2.2 on training applies across all four levels with calibrated depth.[6]

Sectoral regulators frequently impose sector-specific literacy requirements that interact with the four-level structure. Financial regulators (FCA, BaFin, FINMA, SEC) have issued AI-related workforce-capability guidance; healthcare regulators and medical licensing bodies specify clinician-level AI expectations; legal professional bodies specify lawyer AI-use expectations. The practitioner maps sector-specific requirements onto the four-level structure rather than constructing a parallel structure for each regulator.
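
For teams that maintain the regulatory crosswalk alongside the mapping records, the bridging structure can be held as plain configuration. The dictionary below is a sketch under the clause pairings described above; the entries are shorthand summaries for internal tooling, not authoritative interpretations of any framework.

```python
# Illustrative crosswalk; clause pairings follow the mapping described above.
REGULATORY_CROSSWALK: dict[LiteracyLevel, dict[str, str]] = {
    LiteracyLevel.GENERAL_POPULATION: {
        "EU AI Act Art. 4": "baseline literacy duty",
        "ISO/IEC 42001": "Clause 7.3 (awareness)",
        "NIST AI RMF": "GOVERN 2.2 at awareness depth",
    },
    LiteracyLevel.AI_USER: {
        "EU AI Act Art. 4": "literacy calibrated to tool operation",
        "ISO/IEC 42001": "Clause 7.3 (awareness)",
        "NIST AI RMF": "GOVERN 2.2 at tool-use depth",
    },
    LiteracyLevel.AI_WORKER: {
        "EU AI Act Art. 4": "literacy calibrated to professional responsibility",
        "ISO/IEC 42001": "Clause 7.2 (competence)",
        "NIST AI RMF": "GOVERN 2.2 at professional depth",
    },
    LiteracyLevel.AI_SPECIALIST: {
        "EU AI Act Art. 4": "literacy calibrated to design and governance",
        "ISO/IEC 42001": "Clause 7.2 (competence)",
        "NIST AI RMF": "GOVERN 2.2 at specialist depth",
    },
}
```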

[DIAGRAM: Matrix — level-to-regulatory-mapping — rows: four levels. Columns: EU AI Act Article 4 interpretation, ISO/IEC 42001 clause mapping, NIST AI RMF GOVERN 2.2 application, representative sector-specific requirements. Primitive teaches the four-level taxonomy as a regulatory-bridging structure.]

Platform and delivery neutrality

Literacy at all four levels is delivered across a mix of platforms. Docebo, Cornerstone, Workday Learning, SAP SuccessFactors Learning, Open edX, and Moodle are common LMS platforms for structured curriculum administration. Content suppliers include Coursera for Business, edX, Udacity, LinkedIn Learning, Stanford HAI, MIT CSAIL, Hugging Face Learn, and many others. Academy-internal content (as COMPEL provides) supplies the methodology-specific layer. The AITE-WCT practitioner maintains platform neutrality and does not tie literacy design to any single delivery choice. Singapore’s SkillsFuture Singapore national-level content and the UK NHS AI Lab sector-level content are public-sector references that demonstrate how literacy programmes operate in multi-platform environments.[7][8]

Level progression and lifecycle transitions

Level assignment is not static. Employees move between levels over their tenure, and the literacy infrastructure must support the transitions.

An AI-user who becomes part of the AI governance committee — taking on responsibility for decisions about AI systems in their business — transitions from AI-user to AI-worker and requires additional literacy to discharge the new responsibility. An AI-worker who takes on AI product-ownership responsibility transitions towards AI-specialist. A general-population employee who takes on a role that operates an AI-triage tool transitions from general-population to AI-user. In each case the transition is a literacy event that requires supplemental curriculum, not a one-off notification.

Lifecycle transitions also operate in the other direction. An AI-worker who moves into a non-AI-adjacent role may legitimately transition back to AI-user or general-population literacy expectations. Literacy is a continuing obligation proportional to the role; it is not a one-time acquisition.

The HRIS (Workday, SAP SuccessFactors, Oracle HCM, ADP, UKG, BambooHR) is the authoritative record of role changes and should trigger literacy-transition actions. The LMS (Docebo, Cornerstone, Workday Learning, SAP SuccessFactors Learning, Open edX, Moodle) serves the supplemental curriculum. Qualtrics, CultureAmp, Peakon, or Glint-based sentiment measurement monitors the transition experience. Talent-marketplace platforms (Gloat, Fuel50, Eightfold, 365Talents) integrate with role-change events to surface literacy-transition curriculum automatically.
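
The trigger logic itself is simple enough to sketch. Assuming the HRIS emits a role-change event and the LMS exposes an enrolment call (both are assumptions here: the event shape and the `enroll` signature are hypothetical, not any vendor’s API), the transition handler compares the old and new level assignments:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass(frozen=True)
class RoleChangeEvent:
    # Hypothetical shape of an HRIS role-change notification.
    employee_id: str
    old_role_family: str
    new_role_family: str


def on_role_change(event: RoleChangeEvent,
                   level_for_role: dict[str, LiteracyLevel],
                   enroll: Callable[..., None]) -> None:
    # `level_for_role` is the documented role-to-level mapping; `enroll`
    # stands in for an LMS enrolment call on whatever platform carries
    # the supplemental curriculum.
    old_level = level_for_role[event.old_role_family]
    new_level = level_for_role[event.new_role_family]
    if new_level > old_level:
        # Upward transition: a literacy event requiring supplemental
        # curriculum, not a one-off notification.
        enroll(event.employee_id, target_level=new_level, supplemental=True)
    elif new_level < old_level:
        # Downward transition: expectations adjust; the obligation stays
        # proportional to the new role.
        enroll(event.employee_id, target_level=new_level, supplemental=False)
    # No level change: no action now; the mapping still gets its annual review.
```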

Badly managed lifecycle transitions are a silent cause of literacy erosion. An organisation that does not trigger supplemental literacy on role change accumulates a population of employees whose current level assignment does not match their actual role, which is both a compliance-evidence weakness (Article 16) and an operational risk.

A documented applied example

A documented cross-industry pattern illustrates level-appropriate design. For AI-user populations in knowledge-worker contexts, organisations consistently find that tool-specific modules combined with foundational content outperform pure foundational content at producing applied use. For AI-worker populations in professional contexts, organisations find that profession-specific applied modules co-designed with professional practice leaders outperform generic professional content. For AI-specialist populations, organisations find that multi-month structured programmes with apprenticeship-like structure outperform short courses. McKinsey’s “Superagency in the workplace” (January 2025) and BCG’s “AI at Work 2025” provide cross-industry evidence along these lines.[9][10]

Special populations requiring calibration

Three populations routinely require explicit taxonomy calibration beyond the standard four-level mapping.

Board directors and senior officers. Board-level literacy is frequently pitched at AI-user level when the actual need is closer to AI-worker level — the decisions directors make about AI investment, risk oversight, and strategic direction require understanding beyond operational tool use. Board literacy curricula typically run thirty to fifty hours over six to nine months with structured board-simulation exercises. The UK Financial Reporting Council, US SEC, and equivalent regulators have signalled increasing expectations on director AI-oversight capability.

Union representatives and works-council members. Representatives participating in AI-related consultation benefit from literacy calibrated to their representation role — sufficient understanding to assess proposals, raise informed concerns, and hold the organisation accountable. The calibration sits between AI-user and AI-worker; content is typically co-designed with the representative bodies. Germany’s IG Metall works-council training programmes provide reference patterns.

Contingent workforce. Contractors, consultants, and staffing-agency workers operating AI systems on the organisation’s behalf fall under EU AI Act Article 4’s literacy duty but often sit outside standard onboarding infrastructure. Literacy for this population requires contract-specified baseline plus organisation-specific orientation before system access. The failure mode is contingent workers accessing AI systems without the literacy the duty requires; the remediation is operational discipline during access provisioning.
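
That operational discipline can be encoded as a provisioning gate: no evidenced literacy at the required level, no system access. A minimal sketch, assuming a record of the highest evidenced level per worker (the function and parameter names are illustrative, and reuse the `LiteracyLevel` enum sketched earlier):

```python
def may_access_ai_system(worker_id: str,
                         required_level: LiteracyLevel,
                         evidenced_level: dict[str, LiteracyLevel]) -> bool:
    # `evidenced_level` maps worker ids to the highest literacy level for
    # which evidence exists (contract-specified baseline plus orientation).
    # Workers with no record, or a record below the requirement, are
    # refused access until the required curriculum is completed.
    attained = evidenced_level.get(worker_id)
    return attained is not None and attained >= required_level
```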

Expert habits around literacy taxonomy

Two habits separate expert from journeyman application of the taxonomy.

Refusing the single-level default. Sponsor pressure to pick one level and broadcast it to the whole workforce is reliably misguided. The expert practitioner defends the four-level structure against pressure to collapse it, because a single-level broadcast fails at the individual level for most of the workforce.

Revising mappings on evidence. Role-to-level mappings are reviewed annually. Incidents, audit findings, and sentiment feedback inform revisions. A mapping that was correct at launch may be wrong two years later as the role’s AI interaction profile evolves; mapping discipline requires ongoing attention.

Sentiment measurement through Qualtrics, CultureAmp, Peakon, or Glint provides signal on level-appropriateness — employees whose level assignment feels too low express frustration; employees whose assignment feels too high express overload. Both signals inform mapping revision.

Summary

The four-level AI literacy taxonomy — general population, AI-user, AI-worker, AI-specialist — differentiates literacy expectations by AI interaction profile rather than by functional category. Each level has distinct content hours, assessment depth, and regulatory alignment. Role-to-level mapping follows three steps and is documented as compliance-grade evidence. Two common design errors — infantilising and overshooting — are systematically avoidable through level-appropriate design. The taxonomy bridges EU AI Act Article 4, ISO/IEC 42001 Clauses 7.2 and 7.3, NIST AI RMF GOVERN 2.2, and sectoral regulators. Platform-neutral delivery preserves the organisation’s long-term flexibility. Article 13 takes the taxonomy into role-specific curriculum design, translating each level into what each role actually learns.


Cross-references to the COMPEL Core Stream:

  • EATF-Level-1/M1.6-Art02-AI-Literacy-Strategy-and-Program-Design.md — literacy-programme design anchor
  • EATF-Level-1/M1.5-Art13-Understanding-the-EU-AI-Act-Foundations-for-Governance.md — EU AI Act foundations context
  • EATE-Level-3/M3.2-Art06-Talent-Strategy-at-Enterprise-Scale.md — enterprise talent strategy that literacy supports


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. UNESCO, “AI and Education: Guidance for Policy-Makers” (2021, updated 2023), https://unesdoc.unesco.org/ark:/48223/pf0000376709 (accessed 2026-04-19).

  2. Stanford Human-Centered AI Institute (HAI), “AI Literacy” resources, https://hai.stanford.edu/ (accessed 2026-04-19).

  3. Regulation (EU) 2024/1689 (“EU AI Act”), Article 4 — AI Literacy, https://eur-lex.europa.eu/eli/reg/2024/1689/oj (accessed 2026-04-19).

  4. OECD, “Programme for the International Assessment of Adult Competencies (PIAAC)”, https://www.oecd.org/skills/piaac/ (accessed 2026-04-19).

  5. ISO/IEC 42001:2023, Clauses 7.2 (competence) and 7.3 (awareness), https://www.iso.org/standard/81230.html (accessed 2026-04-19).

  6. National Institute of Standards and Technology, “AI Risk Management Framework 1.0” (NIST AI 100-1, January 2023), GOVERN 2.2, https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf (accessed 2026-04-19).

  7. SkillsFuture Singapore, https://www.skillsfuture.gov.sg/ (accessed 2026-04-19).

  8. UK NHS AI Lab, https://transform.england.nhs.uk/ai-lab/ (accessed 2026-04-19).

  9. McKinsey Global Institute, “Superagency in the workplace” (January 2025), https://www.mckinsey.com/mgi/our-research (accessed 2026-04-19).

  10. Boston Consulting Group, “AI at Work 2025”, https://www.bcg.com/publications/2025/ai-at-work-2025 (accessed 2026-04-19).