AITE M1.4-Art53 v1.0 Reviewed 2026-04-06 Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Lab 3 — Design a Role-Specific AI Literacy Curriculum

Technology Architecture & Infrastructure — Advanced depth — COMPEL Body of Knowledge.


COMPEL Specialization — AITE-WCT: AI Workforce Transformation Expert Lab 3 of 5


Lab objective

Design AI literacy curricula for four role archetypes at specified levels in the four-level taxonomy, and specify the compliance-grade evidence architecture that captures completion, assessment, and re-certification in a form that satisfies EU AI Act Article 4 and ISO/IEC 42001 Clauses 7.2 and 7.3.

Prerequisites

  • Completion of Articles 12 (four-level taxonomy), 13 (role-specific curriculum), 15 (measurement), and 16 (compliance-grade evidence) of this credential.
  • Familiarity with Template 3 (Role-Specific Literacy Curriculum Design Template).

The four role archetypes

The same Northstar Banking organisation from Lab 1 provides the context. The four archetypes represent the span the curriculum must cover.

Archetype 1 — Retail Contact Centre Agent (approx. 800 incumbents)

Role profile: frontline customer-facing, moderate AI touchpoint (chatbot triage, knowledge-base retrieval, auto-notes). Required literacy level: AI-user (operates AI tools with awareness of limitations, knows when to escalate).

Archetype 2 — Commercial Underwriter (approx. 180 incumbents)

Role profile: professional knowledge worker, heavy AI touchpoint (draft-generation assistant, research assistant). Required literacy level: AI-worker (uses AI as integral part of professional work; exercises professional judgment over AI outputs; recognises failure modes specific to the work).

Archetype 3 — Credit Risk Modeller (approx. 40 incumbents)

Role profile: technical specialist, builds and evaluates AI models in the risk function. Required literacy level: AI-specialist (substantive technical understanding of AI systems, capable of evaluating and designing AI for the organisation).

Archetype 4 — Branch Manager (approx. 200 incumbents)

Role profile: line manager with AI-touching team, no direct heavy AI use personally. Required literacy level: AI-user with manager extension (per Article 28, managers sit approximately one level above the teams they coach; for this population, AI-user level plus the manager-specific content of Article 28).

Step-by-step method

Step 1 — Learning outcomes per archetype (15 minutes)

For each archetype, specify 4–6 learning outcomes the curriculum must produce. Learning outcomes are specific, observable, and assessable. “Understand AI” is not a learning outcome; “identify three common failure modes of the draft-generation assistant and describe the verification step for each” is.

Reference the level definitions in Article 12 to calibrate outcome depth per archetype.

Step 2 — Content modules per archetype (25 minutes)

For each archetype, design 4–6 content modules. For each module, specify:

  • Module title.
  • Duration (typical: 30–90 minutes per module).
  • Key content points (3–5 bullets).
  • Applied exercise or practice activity.
  • Assessment approach.

The curriculum for the Contact Centre Agent is typically 3–4 hours total; for the Commercial Underwriter, 6–8 hours; for the Credit Risk Modeller, 20–30 hours (reflecting the specialist level); for the Branch Manager, 5–7 hours (the user-level curriculum plus 2–3 additional manager-specific modules).
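These per-archetype durations translate into a concrete delivery load for the organisation. A minimal sketch, using the headcounts from the archetype profiles and assumed midpoint durations (the midpoints are an assumption; substitute your own design choices):

```python
# Delivery-load estimate: headcount x curriculum hours per archetype.
# Headcounts come from the archetype profiles; the hours are assumed
# midpoints of the ranges given in Step 2.
archetypes = {
    "Contact Centre Agent":   {"headcount": 800, "hours": 3.5},   # 3-4 h
    "Commercial Underwriter": {"headcount": 180, "hours": 7.0},   # 6-8 h
    "Credit Risk Modeller":   {"headcount": 40,  "hours": 25.0},  # 20-30 h
    "Branch Manager":         {"headcount": 200, "hours": 6.0},   # 5-7 h
}

total_person_hours = sum(a["headcount"] * a["hours"] for a in archetypes.values())
for name, a in archetypes.items():
    print(f"{name}: {a['headcount'] * a['hours']:.0f} person-hours")
print(f"Total: {total_person_hours:.0f} person-hours")  # 6260 at these midpoints
```

A figure of this kind is useful in the works-council proportionality documentation (Step 6) and when sizing cohort schedules in Step 3.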

Step 3 — Delivery modalities (10 minutes)

For each archetype, choose delivery modalities appropriate to the role and the content. Options include: self-paced online; live virtual cohort; in-person cohort; applied shadowing; on-the-job coaching; combination. Justify each choice against the pedagogy requirements of the level.

Note: the curriculum must be deliverable across more than one LMS/LXP platform per Article 14. Specify at least two possible platform combinations (e.g., Docebo + Coursera for Business; or Moodle + Udacity; or Cornerstone + LinkedIn Learning).

Step 4 — Assessment design (15 minutes)

For each archetype, specify the assessment design:

  • Assessment type (multiple-choice, scenario-based, applied task, observation, combination).
  • Item pool source and review pathway.
  • Cutscore setting method (Angoff / bookmark / mastery-learning for AI-user).
  • Target first-attempt pass rate and rationale.
  • Re-certification cadence per Article 17.
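To make the Angoff option concrete: each panellist estimates, item by item, the probability that a minimally competent candidate answers correctly; each rater's estimates are summed, and the cutscore is the mean of those sums. A minimal sketch with a hypothetical three-rater panel and a five-item pool (the ratings are illustrative, not from this lab):

```python
# Modified-Angoff cutscore sketch. Ratings are hypothetical: each value is
# a rater's estimate of P(minimally competent candidate answers correctly).
ratings = {
    "rater_1": [0.80, 0.70, 0.90, 0.60, 0.75],
    "rater_2": [0.70, 0.65, 0.85, 0.60, 0.70],
    "rater_3": [0.75, 0.70, 0.90, 0.55, 0.80],
}

rater_sums = [sum(items) for items in ratings.values()]  # expected score per rater
cutscore = sum(rater_sums) / len(rater_sums)             # panel mean = cutscore
print(f"Cutscore: {cutscore:.2f} of {len(ratings['rater_1'])} items")  # 3.65 of 5
```

In practice the panel also reviews item-level spread between raters and discusses outliers before the cutscore is finalised; this sketch shows only the arithmetic.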

Step 5 — Evidence architecture (15 minutes)

Specify the evidence architecture that captures, for all four archetypes, the seven-field schema from Article 16:

  • learner_id
  • role_code_at_completion
  • literacy_level_required
  • module_id
  • module_version
  • completion_date
  • assessment_score_and_outcome
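One possible encoding of the seven-field record as a typed structure (the types and example values are assumptions for illustration; Article 16 specifies only the field names):

```python
from dataclasses import dataclass, asdict
from datetime import date

# Seven-field evidence record per Article 16. Field names follow the
# schema above; types and example values are illustrative assumptions.
@dataclass(frozen=True)
class LiteracyEvidenceRecord:
    learner_id: str
    role_code_at_completion: str
    literacy_level_required: str       # e.g. "AI-user", "AI-worker", "AI-specialist"
    module_id: str
    module_version: str
    completion_date: date
    assessment_score_and_outcome: str  # e.g. "17/20 PASS"

# Hypothetical record for a Contact Centre Agent completing module M5.
record = LiteracyEvidenceRecord(
    learner_id="E-104233",
    role_code_at_completion="RCC-AGT",
    literacy_level_required="AI-user",
    module_id="M5-ESCALATION",
    module_version="1.2",
    completion_date=date(2026, 3, 14),
    assessment_score_and_outcome="17/20 PASS",
)
print(asdict(record))
```

Making the record immutable (`frozen=True`) mirrors the compliance requirement that completed evidence is append-only; corrections become new records, not edits.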

Name the source-of-record systems:

  • HRIS system (Workday / SAP SuccessFactors / Oracle / ADP — your choice; justify).
  • LMS/LXP system (per Step 3).
  • The integration pattern between them.

Describe the role-to-level map maintenance process:

  • Owning function.
  • Update cadence.
  • Reconciliation job against HRIS role inventory.
  • Approval and audit trail for role-level assignments.
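The reconciliation job in this list reduces to a two-way set comparison between the HRIS role inventory and the role-to-level map. A minimal sketch (the role codes are hypothetical):

```python
# Reconciliation sketch: flag HRIS roles missing from the role-to-level
# map, and stale map entries whose role no longer exists in the HRIS.
hris_roles = {"RCC-AGT", "COM-UWR", "CRM-MOD", "BRN-MGR", "OPS-NEW"}
role_to_level = {
    "RCC-AGT": "AI-user",
    "COM-UWR": "AI-worker",
    "CRM-MOD": "AI-specialist",
    "BRN-MGR": "AI-user+manager",
    "LEG-OLD": "AI-user",  # role retired in the HRIS, entry left behind
}

unmapped = sorted(hris_roles - role_to_level.keys())   # need a level assignment
stale = sorted(role_to_level.keys() - hris_roles)      # retire or re-approve
print("Unmapped roles:", unmapped)
print("Stale map entries:", stale)
```

Both output lists feed the approval and audit trail: unmapped roles route to the owning function for a level assignment; stale entries are retired with a recorded decision.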

Describe the re-certification operationalisation: the rolling cadence, the expiry dashboard owner, the escalation path for overdue cohorts.
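The expiry dashboard's core query is simple date arithmetic over the evidence records: completion date plus the re-certification cadence, compared with today. A minimal sketch assuming a 24-month rolling cadence (the learner IDs and dates are hypothetical; the month-addition helper clamps to the first of the month, which flags expiry slightly early rather than late):

```python
from datetime import date

RECERT_MONTHS = 24  # rolling cadence; set per Article 17

def add_months(d: date, months: int) -> date:
    """Calendar-month addition, clamped to the 1st (conservative expiry)."""
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(y, m + 1, 1)

# Hypothetical completion records: (learner_id, completion_date).
completions = [
    ("E-104233", date(2024, 2, 10)),
    ("E-104877", date(2025, 9, 3)),
]

today = date(2026, 4, 6)
overdue = [lid for lid, done in completions
           if add_months(done, RECERT_MONTHS) <= today]
print("Overdue for re-certification:", overdue)
```

Overdue learners roll up by cohort and line manager for the escalation path; the dashboard owner reviews the list on the same cadence as the role-to-level reconciliation.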

Step 6 — Works-council readiness check (10 minutes)

Apply the works-council readability check from Article 27: review your curriculum and evidence specification for plain-language accessibility, proportionality documentation, fairness documentation, and privacy-impact documentation. Name any gaps you would address before formal consultation.

Deliverable

A curriculum pack with:

  • Learning outcomes for all four archetypes (approx. 1 page).
  • Content module specifications for all four (approx. 4–6 pages).
  • Delivery modality and platform choices (approx. 1 page).
  • Assessment design (approx. 2 pages).
  • Evidence architecture specification (approx. 3 pages).
  • Works-council readiness check (approx. 1 page).

Total: 12–18 pages.

Scoring rubric

| Criterion | Points | Evidence |
| --- | --- | --- |
| Learning outcomes are specific and calibrated to level | 15 | Step 1 output |
| Content modules are appropriate in scope, duration, and pedagogy | 20 | Step 2 output |
| Delivery modalities are platform-diverse and justified | 10 | Step 3 output |
| Assessment design is compliant (cutscore method, first-attempt pass rate, re-certification cadence) | 15 | Step 4 output |
| Evidence architecture specifies all seven fields and integration | 20 | Step 5 output |
| Works-council readability check identifies real gaps, not just ceremony | 10 | Step 6 output |
| Overall coherence across archetypes | 10 | Full pack |
| **Total** | **100** | |

Passing standard: 75 points.

Worked example — partial reference

Archetype 1 — Contact Centre Agent, partial curriculum outline:

  • Learning outcomes: identify the three AI tools used in the agent role; describe the role of the chatbot in call triage and its failure modes; describe the role of the knowledge-base retrieval assistant and the verification step for high-stakes information; describe the role of the auto-notes feature and the human review requirement; explain when to escalate an AI-related concern and to whom.

  • Modules:

    • M1: The three tools in your work (30 min; self-paced + video demo).
    • M2: Knowing when the chatbot is wrong (45 min; scenario-based; live virtual cohort).
    • M3: Verifying information the knowledge-base surfaces (45 min; applied practice on sample calls).
    • M4: Reviewing your call notes (30 min; applied; peer review).
    • M5: Escalation practices (30 min; scenario-based).
  • Delivery: mixed — self-paced for M1 and M4; live virtual cohort for M2, M3, M5. Platform combination: Docebo (corporate LMS) + LinkedIn Learning (licensed content for M1 video) as one option; Cornerstone + Coursera for Business as alternative.

  • Assessment: 20-item scenario-based instrument at completion of M5; cutscore set via modified Angoff with a panel of three senior agents and the training manager; target first-attempt pass rate 82%; re-certification every 24 months plus event-triggered refresh on material tool change.

Expected depth: a similar level of detail for each of the four archetypes.

Lab discussion questions

  • Which archetype was hardest to calibrate (too much content, or too little), and why?
  • Where did the delivery modalities differ most across archetypes, and what does that tell you about the underlying work?
  • Which of the seven evidence fields was least well-covered by your source systems? What would you do to close the gap in the real organisation?
  • Did the works-council readability check surface anything that required the curriculum to change, or only the documentation?

Connection to other labs

This curriculum pack is an input to Lab 5 (role redesign), where the Commercial Underwriter’s redesigned role specification depends on a credible AI-worker literacy curriculum.

Quality rubric — self-assessment of lab

| Dimension | Self-score (of 10) |
| --- | --- |
| Applied-practice depth | 9 |
| Fidelity to credential content (Articles 12, 13, 14, 15, 16) | 10 |
| Scaffolding (6 steps progress logically) | 9 |
| Assessment (rubric operational) | 10 |
| Transferability (usable for real curriculum design) | 10 |
| **Weighted total** | **48 / 50** |