COMPEL Specialization — AITE-WCT: AI Workforce Transformation Expert Lab 5 of 5
Lab objective
Produce a complete redesigned role specification for the commercial-underwriter role introduced in Lab 2, and the works-council engagement pack that presents the redesign for formal consultation. This deliverable pair is the capstone artefact of the redesign and engagement workstreams.
Prerequisites
- Completion of Labs 1, 2, 3, and 4 of this credential.
- Completion of Articles 24 (task decomposition), 25 (role specification), 26 (redundancy planning), 27 (works-council engagement), 28 (manager enablement), and 29 (performance evaluation) of this credential.
- Familiarity with Template 4 (Redesigned Role Specification Template).
Context
This lab returns to Northstar Banking from Labs 1–3. The commercial-underwriter role, analysed in Lab 2, is one of the roles in the Year 1 redesign scope. The Year 1 transformation programme (chartered in Lab 1) has reached the point at which role redesign is formally underway; the literacy programme (Lab 3) is in its Year 1 rollout; and the change-methodology combination (the Lab 4 pattern, adapted) is in place.
The commercial-underwriter population: 180 incumbents across three geographies (Amsterdam, Frankfurt, Brussels); average tenure 8 years. Three national works councils are involved (the Dutch ondernemingsraad covering the Amsterdam population, the German Betriebsrat covering Frankfurt, and the Belgian conseil d’entreprise covering Brussels), plus the European Works Council covering the group at EU level.
The AI draft-generation tool has moved past pilot and is going into production for all commercial-underwriting work. The redesigned role shifts the underwriter from primary drafting to review, judgment, client dialogue, and decision. Some volume increase is expected (because drafting is faster), and the quality expectation rises in parallel.
Redundancy is not contemplated for this population in Year 1 — the redesigned role needs approximately the same headcount as the current role, though the geographic distribution may shift over time.
Step-by-step method
Step 1 — Produce the redesigned role specification (75 minutes)
Using Template 4 (Redesigned Role Specification Template), populate all ten sections from Article 25.
- Section 1 — Role title and identity. The title changes: choose and justify a new one, e.g. “Commercial Underwriting Review Specialist” or “Senior Commercial Underwriter, AI-Augmented”. Include a role-identity paragraph.
- Section 2 — Core responsibilities. Five to seven responsibilities expressed as outcomes.
- Section 3 — AI touchpoints. System name, tasks used for, decision authority (draft / review / approve / override), review cadence, escalation path. This is the most important section; be specific.
- Section 4 — Skills and capabilities. Must-have / strong-preference / developable; reference to literacy level.
- Section 5 — Task composition. Summary from Lab 2 aggregation.
- Section 6 — Performance expectations. Outcomes (not activities) with evidence types; attribution approach (Article 29) named.
- Section 7 — Reporting, collaboration, and authority. Reporting line, key collaboration interfaces, and the limits of delegated authority.
- Section 8 — Growth and career path. Realistic next roles.
- Section 9 — Transition plan. The 180 incumbents’ path from the current role to the new role, on a 60–180-day timeline.
- Section 10 — Governance and review. Standing review cadence and out-of-cycle triggers.
Use plain-language calibration throughout (Article 27 readability requirement).
Length: 6–10 pages.
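A quick way to keep the structural-completeness criterion from the scoring rubric in view while drafting is a self-check against the ten section titles above. A minimal Python sketch, assuming you hold the working draft as a simple section-to-text mapping (the dictionary contents are illustrative):

```python
# Minimal completeness check against the ten-section template.
# Section titles follow the Step 1 list; the draft dict is illustrative.

REQUIRED_SECTIONS = [
    "Role title and identity",
    "Core responsibilities",
    "AI touchpoints",
    "Skills and capabilities",
    "Task composition",
    "Performance expectations",
    "Reporting, collaboration, and authority",
    "Growth and career path",
    "Transition plan",
    "Governance and review",
]

def missing_sections(draft: dict[str, str]) -> list[str]:
    """Return the template sections the draft has not yet populated."""
    return [s for s in REQUIRED_SECTIONS if not draft.get(s, "").strip()]

# Example: a draft with two of the ten sections populated so far.
draft = {
    "Role title and identity": "Commercial Underwriting Review Specialist ...",
    "AI touchpoints": "Underwriting draft-generation assistant v2.1 ...",
}
print(missing_sections(draft))  # the eight sections still to write
```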
Step 2 — Prepare the works-council engagement pack (45 minutes)
Using the nine-section pack structure from Article 27, prepare the engagement pack that accompanies the redesigned role specification into works-council consultation.
- Executive summary (2–3 pages).
- Detailed scope and population (1 page): the 180 incumbents by geography and tenure band (anonymised; see the banding sketch after this step).
- Measures proposed (2 pages): the role redesign; the AI-tool deployment; training investment; performance-system change; transition support.
- AI-system description (2–3 pages): what the draft-generation tool does, what data it processes, governance, risks, assurance programme. Plain language with technical appendix available.
- Employee-data handling (1 page): GDPR compliance; specific data the tool processes; retention; access.
- Training and support (1 page): literacy curriculum (Lab 3 output); manager enablement; transition coaching.
- Alternatives considered (1 page): three genuine alternatives to the proposed approach and why the proposed approach was selected.
- Timeline and decision points (1 page): consultation schedule across the three national councils and the EWC; decision milestones; implementation phases.
- Questions invited (0.5 page): specific invitation for the councils’ input and the process for addressing it.
Total pack length: approximately 12–15 pages, plus the role specification from Step 1 as an annex.
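For the scope-and-population page, one workable approach is to bucket tenure into bands and cross-tabulate against geography, so the pack carries only aggregate counts. A minimal sketch, assuming hypothetical band cut-offs and an illustrative incumbent list (the real HR extract will differ):

```python
from collections import Counter

# Hypothetical tenure bands in years; the real cut-offs are a design choice.
BANDS = [(0, 3, "0-3y"), (3, 8, "3-8y"), (8, 15, "8-15y"), (15, 99, "15y+")]

def band(tenure_years: float) -> str:
    for lo, hi, label in BANDS:
        if lo <= tenure_years < hi:
            return label
    return "unknown"

# Illustrative records: (geography, tenure in years). No names, no IDs --
# the pack publishes only the aggregate count per cell.
incumbents = [("Amsterdam", 9.5), ("Frankfurt", 2.0), ("Brussels", 8.0),
              ("Amsterdam", 12.0), ("Frankfurt", 6.5)]  # ... 180 in total

table = Counter((geo, band(t)) for geo, t in incumbents)
for (geo, b), n in sorted(table.items()):
    print(f"{geo:10s} {b:6s} {n}")
```

Cells with very small counts are typically merged or suppressed so that no individual can be re-identified from the published table.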
Deliverable
Two documents (which may be bound together for the consultation):
- The redesigned role specification (Step 1 output).
- The works-council engagement pack (Step 2 output).
Combined artefact: 18–25 pages.
Scoring rubric
| Criterion | Points | Evidence |
|---|---|---|
| Role specification is structurally complete (all 10 sections) | 15 | Step 1 |
| Section 3 (AI touchpoints) specifies decision authority, review cadence, and escalation path | 15 | Step 1 |
| Section 6 (performance expectations) addresses attribution per Article 29 | 10 | Step 1 |
| Section 9 (transition plan) is Bridges-informed and realistic | 10 | Step 1 |
| Pack is works-council-readable (plain language, specialist depth in appendix) | 10 | Step 2 |
| “Alternatives considered” section names real alternatives and reasoning | 10 | Step 2 |
| AI-system description covers governance, risks, assurance | 10 | Step 2 |
| Timeline accommodates the multi-jurisdiction consultation properly | 10 | Step 2 |
| Pack is coherent with the broader programme charter (Lab 1) and literacy programme (Lab 3) | 10 | Full artefact |
| Total | 100 | |
Passing standard: 75 points.
Worked example — partial reference
Section 3 — AI touchpoints (partial):
| System | Tasks used for | Decision authority | Review cadence | Escalation path |
|---|---|---|---|---|
| Underwriting draft-generation assistant v2.1 | Initial memo drafting from application data | Review and approve/override; override required when application has red-flag indicators; approval recorded with reasoning | Each draft reviewed by the Underwriting Review Specialist; sampled quality review by peer monthly; model performance reviewed by the AI Governance function quarterly | AI output anomaly or repeated pattern → AI Governance Escalation Desk within 4 working hours |
| Research assistant (industry-sector context) | Sector background research for individual applications | Use and cite; verify sources before inclusion in memo | Self-check on each use | Concern about research quality → Senior Underwriter + AI Governance |
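If it helps to keep Section 3 rows checkable against the rubric criterion (decision authority, review cadence, and escalation path all specified), the rows can be held as structured records. A sketch based on the table above; the dataclass and its field names are illustrative, not part of Template 4:

```python
from dataclasses import dataclass

@dataclass
class AITouchpoint:
    """One row of the Section 3 table, as a structured record."""
    system: str
    tasks: str
    decision_authority: str   # draft / review / approve / override terms
    review_cadence: str
    escalation_path: str

    def is_complete(self) -> bool:
        # The rubric awards points only when authority, cadence, and
        # escalation are all specified; empty cells fail the check.
        return all([self.decision_authority.strip(),
                    self.review_cadence.strip(),
                    self.escalation_path.strip()])

drafting = AITouchpoint(
    system="Underwriting draft-generation assistant v2.1",
    tasks="Initial memo drafting from application data",
    decision_authority="Review and approve/override; override on red flags",
    review_cadence="Every draft reviewed; peer sample monthly; governance quarterly",
    escalation_path="AI Governance Escalation Desk within 4 working hours",
)
assert drafting.is_complete()
```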
Section 6 — Performance expectations (partial):
Performance expectations are outcomes, not activities. Attribution: the Specialist’s contribution is measured on (a) the quality of the approved underwriting decisions over 6-month rolling windows; (b) the judgment-applied ratio (fraction of AI drafts materially adjusted before approval); (c) the exception rate (applications the Specialist escalates rather than approving within delegated authority); (d) client-relationship metrics (response time, client feedback).
Volume metrics are tracked but not primary performance indicators. A Specialist producing high volume with a near-zero judgment-applied ratio is not performing the role; a Specialist with lower volume and a substantive judgment-applied ratio is performing well.
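To make the attribution arithmetic concrete, a sketch of the two ratio metrics over the decisions in a rolling window; the record fields and the example data are illustrative, not a prescribed measurement design:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """One underwriting decision in the 6-month rolling window."""
    materially_adjusted: bool  # AI draft changed in substance before approval
    escalated: bool            # sent up rather than approved under delegation

def judgment_applied_ratio(decisions: list[Decision]) -> float:
    # Fraction of AI drafts materially adjusted before approval; computed
    # here over approved decisions only (one reading of the definition).
    approved = [d for d in decisions if not d.escalated]
    return sum(d.materially_adjusted for d in approved) / max(len(approved), 1)

def exception_rate(decisions: list[Decision]) -> float:
    return sum(d.escalated for d in decisions) / max(len(decisions), 1)

window = [Decision(True, False), Decision(False, False),
          Decision(True, False), Decision(False, True)]
print(judgment_applied_ratio(window))  # 2/3 of approvals materially adjusted
print(exception_rate(window))          # 1/4 of applications escalated
```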
Expected depth: similar across all ten sections and the pack’s nine sections.
Lab discussion questions
- Which section of the role specification was hardest to write in plain language?
- In Section 9 (transition plan), how did you treat the Ending for the 180 incumbents — what specifically is ending, and how is it honoured?
- In the works-council pack’s “alternatives considered” section, which alternatives did you genuinely consider? Did the proposed approach clearly outperform them, or was it the only real option?
- If the German Betriebsrat came back with a specific concern about the research-assistant tool’s source verification, how would your pack allow you to respond constructively?
Capstone summary
This lab is the capstone of the five-lab sequence. The artefacts produced across the sequence — charter (Lab 1), role decomposition and skills-adjacency map (Lab 2), literacy curriculum (Lab 3), methodology plan (Lab 4), role specification and works-council pack (Lab 5) — together form the core artefact set of a commercially ready AI workforce transformation programme. A practitioner capable of producing these artefacts at the level the labs require is equipped to run the programme.