AITE M1.4-Art75 | v1.0 | Reviewed 2026-04-06 | Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations
Template 5 — People and Change KPI Tree — Technology Architecture & Infrastructure — Advanced depth — COMPEL Body of Knowledge.
COMPEL Lifecycle
C Calibrate → O Organize → M Model → P Produce → E Evaluate → L Learn
COMPEL Specialization — AITE-WCT: AI Workforce Transformation Expert
Artifact Template 5 of 5
How to use this template
Populate one KPI tree per programme. The tree is reviewed weekly at programme level (Level 3 metrics), monthly at steering level (Level 2 drivers), quarterly at board level (Level 1 outcomes), and fully redesigned annually.
The tree is a living document. Version it on each annual redesign; mark material in-cycle changes (e.g., new drivers added mid-year) with a revision date.
People and Change KPI Tree
| Field | Value |
| --- | --- |
| Programme name | |
| Tree version | 1.0 |
| Tree date | YYYY-MM-DD |
| Tree author | |
| Tree owner (standing role) | |
| Next full redesign | YYYY-MM-DD (annual) |
| Standing review rhythm | weekly programme / monthly steering / quarterly board / annual redesign |
Level 1 — Outcomes
The aggregate states the transformation is designed to produce. Organisation-level, multi-year, board-meaningful.
Standard outcomes
| Outcome | Definition | Year-1 target | Year-2 target | Year-3 target | Source of truth |
| --- | --- | --- | --- | --- | --- |
| Productivity | specific definition — output per unit of input, domain-adjusted | | | | |
| Retention | voluntary exit rate, filtered and segmented | | | | |
| Engagement | engagement score from specific instrument | | | | |
| Capability (optional) | composite: literacy coverage + skills adjacency + proficiency | | | | |
Outcome choice rationale
Explain why these three (or four) outcomes were chosen. If only three, state why capability was excluded; if four, state why capability is treated as an outcome rather than a driver.
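Where capability is carried as a Level 1 outcome, its composite needs an explicit formula so the board sees the same number every quarter. A minimal sketch, assuming each sub-score (literacy coverage, skills adjacency, observed proficiency) is normalised to [0, 1]; the 0.4/0.3/0.3 weights are illustrative, not prescribed by this template:

```python
def capability_composite(literacy_coverage, skills_adjacency,
                         observed_proficiency,
                         weights=(0.4, 0.3, 0.3)):
    """Weighted mean of the three capability sub-scores, each in [0, 1].

    Weights are an illustrative assumption; agree them at tree-design time
    and record them in the outcome definition.
    """
    parts = (literacy_coverage, skills_adjacency, observed_proficiency)
    if not all(0.0 <= p <= 1.0 for p in parts):
        raise ValueError("all sub-scores must be normalised to [0, 1]")
    return sum(w * p for w, p in zip(weights, parts))

# Example: 80% literacy coverage, 60% adjacency coverage, 70% proficiency
score = capability_composite(0.8, 0.6, 0.7)  # ≈ 0.71
```

Whatever weighting is chosen, version it with the tree so year-on-year composites stay comparable.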
Level 2 — Drivers
Intermediate states that produce the outcomes. Programme-level, shorter-horizon.
Drivers for productivity
| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| AI-tool adoption depth | fraction of target workflow using AI tool | | | | Productivity |
| AI-tool quality-of-use | proxy-measure quality of AI use | | | | Productivity |
| Manager coaching cadence | fraction of manager-report pairs with weekly 1-to-1 in past 4 weeks | | | | Productivity + Engagement |
| Workflow-friction reduction | completion of named upstream/downstream process changes | | | | Productivity |
Drivers for retention
| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| Psychological-safety score | team-level safety measure (Article 30) | | | | Retention + Engagement |
| Role clarity | survey-measured clarity of AI-augmented role | | | | Retention |
| Growth-path visibility | survey-measured career-path clarity | | | | Retention |
| Recognition alignment | recognition pattern alignment with stated values | | | | Retention + Engagement |
| Compensation competitiveness | pay-band competitiveness for key talent segments | | | | Retention |
Drivers for engagement
| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| Meaningfulness | survey-measured meaning of AI-integrated work | | | | Engagement |
| Autonomy | survey-measured appropriate latitude | | | | Engagement |
| Mastery | survey-measured capability development | | | | Engagement |
| Inclusion | survey + behavioural-indicator composite | | | | Engagement + Retention |
Drivers for capability (if Level 1)
| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| Literacy coverage | role-to-level map fulfilment | | | | Capability |
| Skills-adjacency coverage | skills-graph coverage vs roadmap | | | | Capability |
| Observed proficiency in AI-integrated tasks | sampled proficiency assessment | | | | Capability |
Level 3 — Metrics
Specific measurements that let the organisation know whether the drivers are moving. Programme-operational, weekly to monthly.
Discipline: 3–5 metrics per driver. More is clutter.
Metrics for “AI-tool adoption depth”

| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Fraction of target workflow invocations using AI tool | | AI-tool telemetry | numerator/denominator per named query | | daily | | |
| e.g., Distribution of use-depth across user population | | | | | | | |
| e.g., Month-on-month trajectory of active users | | | | | | | |
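The first metric row above is a plain ratio over telemetry events. A minimal sketch of the numerator/denominator computation, assuming each telemetry record carries a `workflow` name and a `used_ai_tool` flag (both field names are illustrative, not a real telemetry schema):

```python
def adoption_depth(invocations, workflow):
    """Fraction of the named workflow's invocations that used the AI tool.

    `invocations` is a list of telemetry records (dicts); the record
    layout is an illustrative assumption.
    """
    target = [r for r in invocations if r["workflow"] == workflow]
    if not target:
        return 0.0  # no invocations yet; report zero rather than divide by zero
    return sum(r["used_ai_tool"] for r in target) / len(target)

# Illustrative daily extract: 2 of 3 target-workflow invocations used the tool
log = [
    {"workflow": "claims-triage", "used_ai_tool": True},
    {"workflow": "claims-triage", "used_ai_tool": False},
    {"workflow": "other", "used_ai_tool": True},
    {"workflow": "claims-triage", "used_ai_tool": True},
]
depth = adoption_depth(log, "claims-triage")  # ≈ 0.667
```

Keeping the denominator scoped to the named workflow (not all tool usage) is what makes this a depth measure rather than a raw activity count.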
Metrics for “Manager coaching cadence”
| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Fraction of manager-direct-report pairs with weekly 1-to-1 in past 4 weeks | | calendar data | | | weekly | | |
| e.g., Fraction of 1-to-1s including AI-coaching content | | sampled review + direct-report survey | | | monthly | | |
| e.g., Direct-report-reported coaching usefulness | | engagement survey | | | quarterly | | |
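The cadence metric above can be computed directly from a calendar extract. A minimal sketch, assuming each meeting record is a `(manager, report, date)` tuple and that "weekly" means at least one 1-to-1 in each of the past four ISO weeks; the record layout is illustrative, not a real calendar-API schema:

```python
from datetime import date, timedelta

def coaching_cadence(meetings, pairs, as_of):
    """Fraction of (manager, report) pairs with a 1-to-1 in each of the
    past 4 ISO weeks ending at `as_of`."""
    window = {(as_of - timedelta(weeks=k)).isocalendar()[:2] for k in range(4)}

    def covered(pair):
        weeks = {d.isocalendar()[:2] for m, r, d in meetings if (m, r) == pair}
        return window <= weeks  # every week in the window has a meeting

    return sum(covered(p) for p in pairs) / len(pairs)

as_of = date(2026, 4, 6)
# alice/bob met every week; alice/carol missed two of the four weeks
meetings = [("alice", "bob", as_of - timedelta(weeks=k)) for k in range(4)]
meetings += [("alice", "carol", as_of - timedelta(weeks=k)) for k in (0, 2)]
pairs = [("alice", "bob"), ("alice", "carol")]
cadence = coaching_cadence(meetings, pairs, as_of)  # 1 of 2 pairs -> 0.5
```

Counting ISO weeks rather than rolling 7-day windows keeps the metric stable when 1-to-1s move around within a week.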
Metrics for “Psychological-safety score”
| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Team-level safety score (Edmondson instrument) | | engagement platform | | | annual with pulse cycles | | |
| e.g., Behavioural indicator: escalated concerns per quarter | | governance-escalation log | | | quarterly | | |
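Survey-instrument scores of this kind typically need reverse-coding of negatively worded items before averaging. A minimal scoring sketch, assuming a 1–7 Likert scale; which items are reverse-scored depends on the instrument licensed, so the index used below is purely illustrative:

```python
def team_safety_score(responses, reverse_items, scale_max=7):
    """Mean item score across a team's survey responses.

    `responses` is a list of per-respondent item lists on a 1..scale_max
    scale; items whose index is in `reverse_items` are reverse-coded
    (negatively worded questions). Item indexing is an assumption here.
    """
    means = []
    for resp in responses:
        adjusted = [(scale_max + 1 - v) if i in reverse_items else v
                    for i, v in enumerate(resp)]
        means.append(sum(adjusted) / len(adjusted))
    return sum(means) / len(means)

# Two respondents, four items, item 1 negatively worded (illustrative)
team = [[6, 2, 5, 6], [7, 1, 6, 5]]
score = team_safety_score(team, reverse_items={1})  # -> 6.0
```

Report the score at team level only; per-person safety scores undermine the very safety the metric is meant to track.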
Metrics for “Retention risk segment”
| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Voluntary exit rate for high-performer segment | | HRIS | | | quarterly | | |
| e.g., Intent-to-stay survey item (high-performer segment) | | engagement platform | | | semi-annual | | |
| e.g., External hiring signal (job-board benchmarks for AI-adjacent roles) | | external labour-market data | | | monthly | | |
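The segmented exit-rate metric is a filtered ratio over an HRIS extract. A minimal sketch; the field names are illustrative HRIS-extract names, and headcount is simplified to the segment roster size (many HR teams use average headcount over the period instead):

```python
def voluntary_exit_rate(roster, segment):
    """Voluntary exits in the segment over the reporting period, divided
    by segment headcount. `roster` rows are dicts with illustrative
    'segment' and 'exit_reason' fields."""
    seg = [e for e in roster if e["segment"] == segment]
    if not seg:
        return 0.0
    exits = [e for e in seg if e.get("exit_reason") == "voluntary"]
    return len(exits) / len(seg)

# Illustrative quarterly extract: 1 voluntary exit among 4 high performers
roster = [
    {"segment": "high_performer", "exit_reason": "voluntary"},
    {"segment": "high_performer", "exit_reason": None},
    {"segment": "high_performer", "exit_reason": None},
    {"segment": "high_performer", "exit_reason": None},
    {"segment": "core", "exit_reason": None},
]
rate = voluntary_exit_rate(roster, "high_performer")  # -> 0.25
```

The filter matters more than the formula: an unsegmented exit rate can look healthy while the high-performer segment is quietly draining.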
Repeat for each driver. 3–5 metrics per driver.
Metric quality checks
For each metric, confirm:
- Specific: the definition names the population, numerator/denominator, and period.
- Reliable: repeated measurement yields consistent values.
- Valid: the metric actually measures the driver it sits under.
- Actionable: the metric owner can move the value through named interventions.
- Bias check: the metric does not systematically misread any employee segment.
- Goodhart check: the metric is paired with a counter-metric so gaming is visible.
Wiring documentation
For each data source used:
| Source system | Metric(s) populated | Wiring pattern (API / extract / manual) | Automation status | Owner | Refresh latency |
| --- | --- | --- | --- | --- | --- |
| HRIS (specific system) | retention metrics, role data, tenure | | | | |
| LMS (specific system) | literacy coverage metrics | | | | |
| Engagement platform | survey-based metrics | | | | |
| AI-tool telemetry | adoption metrics | | | | |
| Calendar system | coaching cadence proxy | | | | |
| Governance log | escalation metrics | | | | |
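The wiring table can also be carried as structured data so the completeness check becomes scriptable. A minimal sketch; the `Wiring` class, its field names, and the example rows are all illustrative, not part of the template's prescribed format:

```python
from dataclasses import dataclass

@dataclass
class Wiring:
    """One row of the wiring table; fields mirror the table columns."""
    source_system: str
    metrics_populated: list
    pattern: str          # "API", "extract", or "manual"
    automated: bool
    owner: str
    refresh_latency: str

# Illustrative registry entries for two of the rows above
registry = [
    Wiring("HRIS", ["retention metrics", "role data", "tenure"],
           "API", True, "people-analytics", "daily"),
    Wiring("Calendar system", ["coaching cadence proxy"],
           "extract", False, "programme office", "weekly"),
]

# Completeness check: every manual wiring is an automation candidate
automation_candidates = [w.source_system for w in registry
                         if w.pattern == "manual"]
```

Running such a check each quarter surfaces wirings that silently degraded from automated feeds back to manual uploads.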
Wiring completeness check
Review rhythm
Weekly (programme)
Review Level 3 metrics.
Investigate anomalies.
Decide short-term interventions.
No tree modifications.
Monthly (steering)
Review Level 2 drivers.
Review aggregate Level 3 view.
Decide intervention adjustments.
Modify tree only on explicit decision.
Quarterly (board)
Review Level 1 outcomes with Level 2 context.
Strategic adjustments.
Board engagement with forward commitments.
Annual (full redesign)
Full tree re-examination.
New tree version.
Recalibration against observed outcome trajectory.
Change-log
| Version | Date | Changes | Approver |
| --- | --- | --- | --- |
| 1.0 | | initial | |
| 1.1 | | | |
Quality rubric — self-assessment of template
| Dimension | Self-score (of 10) |
| --- | --- |
| Three-level structure (Article 33) | 10 |
| Metric discipline (3–5 per driver) | 10 |
| Wiring rigour (documented source, computation, owner, refresh) | 10 |
| Quality checks (specific / reliable / valid / actionable; bias; Goodhart) | 10 |
| Review-rhythm integration | 10 |
| Weighted total | 50 / 50 |