AITE M1.4-Art75 v1.0 Reviewed 2026-04-06 Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Template 5 — People and Change KPI Tree



COMPEL Specialization — AITE-WCT: AI Workforce Transformation Expert — Artifact Template 5 of 5


How to use this template

Populate one KPI tree per programme. The tree is reviewed weekly at programme level (Level 3 metrics), monthly at steering level (Level 2 drivers), quarterly at board level (Level 1 outcomes), and fully redesigned annually.

The tree is a living document. Version it on each annual redesign; mark material in-cycle changes (e.g., new drivers added mid-year) with a revision date.


People and Change KPI Tree

Tree header

| Field | Value |
| --- | --- |
| Programme name | |
| Tree version | 1.0 |
| Tree date | YYYY-MM-DD |
| Tree author | |
| Tree owner (standing role) | |
| Next full redesign | YYYY-MM-DD (annual) |
| Standing review rhythm | weekly programme / monthly steering / quarterly board / annual redesign |

Level 1 — Outcomes

The aggregate end-states the transformation is designed to produce: organisation-level, multi-year, and board-meaningful.

Standard outcomes

| Outcome | Definition | Year-1 target | Year-2 target | Year-3 target | Source of truth |
| --- | --- | --- | --- | --- | --- |
| Productivity | specific definition — output per unit of input, domain-adjusted | | | | |
| Retention | voluntary exit rate, filtered and segmented | | | | |
| Engagement | engagement score from specific instrument | | | | |
| Capability (optional) | composite: literacy coverage + skills adjacency + proficiency | | | | |

Outcome choice rationale

Explain why these three (or four) outcomes were chosen. If only three, state why not four; if four, explain why capability is treated as an outcome rather than a driver.


Level 2 — Drivers

Intermediate states that produce the outcomes. Programme-level, shorter-horizon.

Drivers for productivity

| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| AI-tool adoption depth | fraction of target workflow using AI tool | | | | Productivity |
| AI-tool quality-of-use | proxy-measure quality of AI use | | | | Productivity |
| Manager coaching cadence | fraction of manager-report pairs with weekly 1-to-1 in past 4 weeks | | | | Productivity + Engagement |
| Workflow-friction reduction | completion of named upstream/downstream process changes | | | | Productivity |

Drivers for retention

| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| Psychological-safety score | team-level safety measure (Article 30) | | | | Retention + Engagement |
| Role clarity | survey-measured clarity of AI-augmented role | | | | Retention |
| Growth-path visibility | survey-measured career-path clarity | | | | Retention |
| Recognition alignment | recognition pattern alignment with stated values | | | | Retention + Engagement |
| Compensation competitiveness | pay-band competitiveness for key talent segments | | | | Retention |

Drivers for engagement

| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| Meaningfulness | survey-measured meaning of AI-integrated work | | | | Engagement |
| Autonomy | survey-measured appropriate latitude | | | | Engagement |
| Mastery | survey-measured capability development | | | | Engagement |
| Inclusion | survey + behavioural-indicator composite | | | | Engagement + Retention |

Drivers for capability (if Level 1)

| Driver | Definition | Year-1 target | Year-2 target | Year-3 target | Feeds outcome |
| --- | --- | --- | --- | --- | --- |
| Literacy coverage | role-to-level map fulfilment | | | | Capability |
| Skills-adjacency coverage | skills-graph coverage vs roadmap | | | | Capability |
| Observed proficiency in AI-integrated tasks | sampled proficiency assessment | | | | Capability |

Level 3 — Metrics

Specific measurements that let the organisation know whether the drivers are moving. Programme-operational, weekly to monthly.

Discipline: 3–5 metrics per driver. More is clutter.

Metrics for “AI-tool adoption depth”

| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Fraction of target workflow invocations using AI tool | | AI-tool telemetry | numerator/denominator per named query | | daily | | |
| e.g., Distribution of use-depth across user population | | | | | | | |
| e.g., Month-on-month trajectory of active users | | | | | | | |
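The numerator/denominator computation for adoption depth can be sketched in a few lines. This is a minimal illustration, not the programme's actual pipeline: the event schema (`(step, used_ai_tool)` pairs) and the step names are hypothetical stand-ins for whatever the AI-tool telemetry actually emits.

```python
from collections import defaultdict

def adoption_depth(events, target_steps):
    """Fraction of in-scope workflow invocations that used the AI tool.

    `events` is an iterable of (step, used_ai_tool) records from telemetry;
    `target_steps` names the workflow steps in scope (hypothetical schema).
    """
    total = defaultdict(int)    # invocations per in-scope step (denominator)
    with_ai = defaultdict(int)  # AI-assisted invocations per step (numerator)
    for step, used_ai in events:
        if step in target_steps:
            total[step] += 1
            if used_ai:
                with_ai[step] += 1
    n = sum(total.values())
    return sum(with_ai.values()) / n if n else 0.0

# Illustrative: "intake" is out of scope, so 2 of 3 in-scope events count.
events = [("draft", True), ("draft", False), ("review", True), ("intake", True)]
print(adoption_depth(events, {"draft", "review"}))  # → 0.6666666666666666
```

Keeping per-step tallies (rather than a single counter) is what lets the same pass also feed the use-depth distribution metric.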

Metrics for “Manager coaching cadence”

| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Fraction of manager-direct-report pairs with weekly 1-to-1 in past 4 weeks | | calendar data | | | weekly | | |
| e.g., Fraction of 1-to-1s including AI-coaching content | | sampled review + direct-report survey | | | monthly | | |
| e.g., Direct-report-reported coaching usefulness | | engagement survey | | | quarterly | | |
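The calendar-data proxy for coaching cadence reduces to a set check: a pair counts only if it has at least one 1-to-1 in each of the past four ISO weeks. A minimal sketch, assuming a hypothetical calendar extract shaped as pair → list of meeting dates:

```python
from datetime import date, timedelta

def coaching_cadence(meetings, today):
    """Fraction of manager-report pairs with a 1-to-1 in each of the
    past 4 ISO weeks. `meetings` maps pair -> list of meeting dates
    (hypothetical calendar-extract shape)."""
    # The four (year, week) tuples preceding today's week.
    window = [(today - timedelta(weeks=k)).isocalendar()[:2] for k in range(1, 5)]
    def weekly(dates):
        weeks = {d.isocalendar()[:2] for d in dates}
        return all(w in weeks for w in window)
    n = len(meetings)
    return sum(weekly(ds) for ds in meetings.values()) / n if n else 0.0

pairs = {
    "pair-a": [date(2026, 3, 30), date(2026, 3, 23), date(2026, 3, 16), date(2026, 3, 9)],
    "pair-b": [date(2026, 3, 30)],  # missed three of the four weeks
}
print(coaching_cadence(pairs, date(2026, 4, 6)))  # → 0.5
```

Using ISO (year, week) tuples rather than raw 7-day offsets avoids double-counting two meetings that fall in the same calendar week.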

Metrics for “Psychological-safety score”

| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Team-level safety score (Edmondson instrument) | | engagement platform | | | annual with pulse cycles | | |
| e.g., Behavioural indicator: escalated concerns per quarter | | governance-escalation log | | | quarterly | | |

Metrics for “Retention risk segment”

| Metric | Specific definition | Source system | Computation method | Owner | Refresh cadence | Current value | Trajectory |
| --- | --- | --- | --- | --- | --- | --- | --- |
| e.g., Voluntary exit rate for high-performer segment | | HRIS | | | quarterly | | |
| e.g., Intent-to-stay survey item (high-performer segment) | | engagement platform | | | semi-annual | | |
| e.g., External hiring signal (job-board benchmarks for AI-adjacent roles) | | external labour-market data | | | monthly | | |

Repeat for each driver. 3–5 metrics per driver.
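The three-level structure, and the 3–5-metrics-per-driver discipline, can be made machine-checkable with a small data model. This is an illustrative sketch only; the class and field names are hypothetical, not part of the template:

```python
from dataclasses import dataclass, field

@dataclass
class Driver:
    name: str
    feeds: list                                   # Level 1 outcomes this driver feeds
    metrics: list = field(default_factory=list)   # Level 3 metric names

@dataclass
class Outcome:
    name: str
    drivers: list = field(default_factory=list)   # Level 2 drivers

def discipline_violations(outcomes):
    """Names of drivers outside the 3-5 metric budget."""
    return [d.name for o in outcomes for d in o.drivers
            if not 3 <= len(d.metrics) <= 5]

productivity = Outcome("Productivity", [
    Driver("AI-tool adoption depth", ["Productivity"],
           ["invocation fraction", "use-depth distribution", "active-user trend"]),
    Driver("Workflow-friction reduction", ["Productivity"],
           ["changes completed"]),  # only 1 metric: under the budget
])
print(discipline_violations([productivity]))  # → ['Workflow-friction reduction']
```

Running such a check as part of the annual redesign keeps metric clutter from creeping back in between versions.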


Metric quality checks

For each metric, confirm:

  • Specific — measured in a defined way, from a defined source.
  • Reliable — measurement consistent across time and observers.
  • Valid — actually measures what the driver claims.
  • Actionable — a change in the metric provides information the programme can act on.

Bias check

  • No vanity metrics at Level 1 or Level 2 (per Article 33).
  • Proxy metrics at Level 3 are combined with direct measurement where possible.
  • Goodhart’s Law risk reviewed: if the programme optimises the metric, does the underlying construct drift?

Wiring documentation

For each data source used:

| Source system | Metric(s) populated | Wiring pattern (API / extract / manual) | Automation status | Owner | Refresh latency |
| --- | --- | --- | --- | --- | --- |
| HRIS (specific system) | retention metrics, role data, tenure | | | | |
| LMS (specific system) | literacy coverage metrics | | | | |
| Engagement platform | survey-based metrics | | | | |
| AI-tool telemetry | adoption metrics | | | | |
| Calendar system | coaching cadence proxy | | | | |
| Governance log | escalation metrics | | | | |

Wiring completeness check

  • All Level 3 metrics have a documented source.
  • All automated metrics have been validated against source-system meaning.
  • Manual-computation metrics have documented computation and owner.
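The first wiring-completeness check ("all Level 3 metrics have a documented source") is a set-difference over the wiring table. A minimal sketch, assuming the table has been loaded into a hypothetical source → metrics mapping:

```python
def wiring_gaps(metrics, wiring):
    """Level 3 metrics with no documented source system.

    `metrics` is the flat list of Level 3 metric names; `wiring` maps
    source system -> metrics it populates (hypothetical structure).
    """
    covered = {m for populated in wiring.values() for m in populated}
    return [m for m in metrics if m not in covered]

wiring = {
    "HRIS": ["voluntary exit rate"],
    "AI-tool telemetry": ["invocation fraction"],
}
print(wiring_gaps(
    ["voluntary exit rate", "invocation fraction", "safety score"], wiring))
# → ['safety score']
```

The same pass can be inverted (sources populating no metric) to flag wiring rows that have gone stale after an annual redesign.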

Review rhythm

Weekly (programme)

  • Review Level 3 metrics.
  • Investigate anomalies.
  • Decide short-term interventions.
  • No tree modifications.

Monthly (steering)

  • Review Level 2 drivers.
  • Review aggregate Level 3 view.
  • Decide intervention adjustments.
  • Modify tree only on explicit decision.

Quarterly (board)

  • Review Level 1 outcomes with Level 2 context.
  • Strategic adjustments.
  • Board engagement with forward commitments.

Annual (full redesign)

  • Full tree re-examination.
  • New tree version.
  • Recalibration against observed outcome trajectory.

Change-log

| Version | Date | Changes | Approver |
| --- | --- | --- | --- |
| 1.0 | | initial | |
| 1.1 | | | |

Quality rubric — self-assessment of template

| Dimension | Self-score (of 10) |
| --- | --- |
| Three-level structure (Article 33) | 10 |
| Metric discipline (3–5 per driver) | 10 |
| Wiring rigour (documented source, computation, owner, refresh) | 10 |
| Quality checks (specific / reliable / valid / actionable; bias; Goodhart) | 10 |
| Review-rhythm integration | 10 |
| Weighted total | 50 / 50 |