AITP M2.5-Art03 v1.0 Reviewed 2026-04-06 Open Access
M2.5 Measurement, Evaluation, and Value Realization

Maturity Progression Measurement


15 min read · Article 3 of 13 · Evaluate
Figure 128: Maturity Progression Radar — capability map across Strategy & Vision, Data Management, Technology Platform, Talent & Skills, Governance & Ethics, and Value Realization.

Maturity progression measurement is the longitudinal application of the maturity model — using repeated assessments to reveal trajectory, velocity, patterns, and problems. It is the measurement discipline most distinctive to the COMPEL methodology and the one that most directly connects the AITP’s diagnostic skills to the Evaluate stage of the lifecycle.

The Logic of Maturity Progression

A single maturity assessment produces a snapshot — a cross-sectional view of the organization’s Artificial Intelligence (AI) maturity at a point in time. That snapshot has diagnostic value, as explored extensively in Module 2.2. But a single snapshot cannot tell you whether the organization is improving, stagnating, or regressing. For that, you need multiple assessments conducted over time, using consistent methodology, and analyzed for change.

Maturity progression measurement involves:

  • Establishing a validated baseline during the Calibrate stage
  • Conducting subsequent assessments at defined intervals
  • Calculating and analyzing the deltas between assessment points
  • Interpreting what the pattern of change reveals about transformation effectiveness
  • Using progression data to adjust transformation strategy and resource allocation

Each of these activities requires methodological discipline. The AITP who treats reassessment as a casual check-in rather than a rigorous measurement event will produce unreliable progression data and undermine the framework’s credibility.

Baseline Establishment: Getting the Starting Point Right

Everything in maturity progression measurement depends on the quality of the baseline. If the initial assessment is inflated, subsequent assessments will show less improvement than actually occurred. If the baseline is deflated, early progression data will appear more dramatic than warranted, followed by a frustrating plateau when scores correct toward reality.

The baseline assessment should be conducted using the full rigor described in M2.2, Beyond the Baseline — Advanced Assessment Philosophy. Several baseline-specific considerations apply.

Calibration Discipline

The AITP must ensure that the baseline assessment uses the same scoring criteria, evidence standards, and calibration protocols that will be used in subsequent assessments. This requires documenting not just the scores but the reasoning, evidence, and calibration decisions behind each score. When a domain receives a score of 2.5, the AITP should record what specific evidence supported that score and what would need to change for the score to advance to 3.0.

This documentation creates the interpretive anchor for future assessments. Without it, different assessors — or the same assessor months later — may apply subtly different standards, producing progression artifacts that reflect scoring drift rather than actual organizational change.

Multi-Source Validation

Baseline scores should be validated through multiple evidence sources — documentary review, stakeholder interviews, observation, and where possible, independent verification. Scores based on a single evidence source are vulnerable to bias and error that will compound over the progression timeline.

Stakeholder Agreement

The baseline assessment results should be reviewed with and accepted by key stakeholders before becoming the official baseline. This serves two purposes: it ensures that the starting point is perceived as legitimate (preventing later disputes about whether the baseline was fair), and it creates shared understanding of the organization’s current state that motivates transformation effort.

Reassessment Design

The AITP must design the reassessment approach to enable valid longitudinal comparison while remaining practical within engagement constraints.

Assessment Cadence

For full transformation engagements, the typical reassessment cadence is quarterly for a subset of high-priority domains and semi-annually for the full 18-domain assessment. Assessment-only or advisory engagements may use different cadences depending on the engagement structure.

The key principle is that reassessment intervals should be long enough for meaningful change to occur. Assessing maturity monthly creates measurement noise — organizational maturity does not shift meaningfully in thirty-day increments. Conversely, assessing only at engagement start and end misses the trajectory information that mid-engagement assessments provide.

The AITP should establish the reassessment schedule during engagement design and include it in the measurement plan described in M2.5, Designing the Measurement Framework.

Methodological Consistency

Valid progression measurement requires methodological consistency across assessment points. This means:

Same domains assessed — if the baseline included all 18 domains, reassessments should cover the same scope, even if some domains are lower priority. Selectively dropping domains creates gaps in the progression picture.

Same evidence standards — if the baseline required documentary evidence to support interview findings, reassessments must apply the same standard. Relaxing evidence requirements inflates scores without actual improvement.

Same scoring calibration — the AITP should reference the baseline scoring rationale when conducting reassessments to ensure that the same criteria are being applied. Where multiple assessors are involved, calibration sessions should precede each reassessment.

Same stakeholder participation — where possible, the same stakeholders who provided evidence during the baseline should participate in reassessments. Different interviewees may have different perspectives, and changing the informant pool introduces variability that confounds progression analysis.

Full versus Targeted Reassessment

Not every reassessment needs to cover all 18 domains with full depth. The AITP may design a two-tier reassessment approach:

Targeted reassessments focus on the domains most relevant to current transformation activities. If the current quarter’s workstreams concentrate on People pillar domains (Leadership and Culture, AI Literacy and Skills, Change Readiness, Workforce Development — as established in M1.3, People Pillar Domains — Leadership and Talent, and M1.3, People Pillar Domains — Literacy and Change), targeted reassessment may focus on those four domains while conducting abbreviated check-ins on others.

Comprehensive reassessments cover all 18 domains with full depth. These occur less frequently — typically semi-annually — and provide the complete picture needed for strategic evaluation and reporting.

The AITP must ensure that the mix of targeted and comprehensive assessments produces a complete progression record across all domains by engagement end.
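The completeness requirement above can be checked mechanically. A minimal sketch, using invented assessment and domain names rather than the actual 18-domain COMPEL set:

```python
# Hypothetical coverage check: by engagement end, every domain should be
# scored in at least the baseline plus one reassessment. All names and
# the threshold of two appearances are illustrative assumptions.
assessments = {
    "baseline":    {"Data Management", "AI Literacy", "Model Governance"},
    "Q1 targeted": {"AI Literacy"},
    "H1 full":     {"Data Management", "AI Literacy", "Model Governance"},
}

all_domains = assessments["baseline"]

# Count how many assessment events scored each domain.
counts = {d: sum(d in scored for scored in assessments.values())
          for d in all_domains}

# Domains with fewer than two scorings have a gap in the progression record.
gaps = sorted(d for d, n in counts.items() if n < 2)
print(gaps)  # [] -> every domain has a baseline plus at least one reassessment
```

The same check, run after each assessment rather than only at engagement end, gives the AITP early warning that the targeted/comprehensive mix is drifting away from full coverage.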

Delta Analysis

Delta analysis is the calculation and interpretation of score changes between assessment points. It is the core analytical technique in maturity progression measurement.

Calculating Deltas

At its simplest, delta analysis calculates the difference between the current score and the baseline (or previous assessment) score for each domain:

Delta = Current Score - Previous Score

For a domain scored at 2.0 at baseline and 2.5 at reassessment, the delta is +0.5. For a domain scored at 3.0 at baseline and 3.0 at reassessment, the delta is 0.0.

While the arithmetic is trivial, the interpretation is not. The AITP must analyze deltas at multiple levels of aggregation:

Domain-level deltas show progress within individual domains. These are the most granular and most directly actionable.

Pillar-level deltas aggregate domain deltas within each pillar (People, Process, Technology, Governance), revealing whether transformation progress is balanced or concentrated.

Aggregate delta provides the overall maturity movement — useful for executive-level reporting but insufficient for diagnostic purposes.
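The three levels of aggregation can be sketched as follows. The domains, pillar groupings, and scores are invented for illustration and are not the COMPEL domain set:

```python
# Illustrative multi-level delta analysis. Domain names, pillar
# membership, and scores are hypothetical examples.
baseline = {"Data Management": 2.0, "AI Literacy": 2.5, "Model Governance": 3.0}
current  = {"Data Management": 2.5, "AI Literacy": 2.5, "Model Governance": 3.0}
pillars  = {"Technology": ["Data Management"],
            "People":     ["AI Literacy"],
            "Governance": ["Model Governance"]}

# Domain-level deltas: current score minus previous score per domain.
domain_deltas = {d: round(current[d] - baseline[d], 2) for d in baseline}

# Pillar-level deltas: mean of the member domains' deltas.
pillar_deltas = {p: round(sum(domain_deltas[d] for d in ds) / len(ds), 2)
                 for p, ds in pillars.items()}

# Aggregate delta: mean delta across all domains.
aggregate_delta = round(sum(domain_deltas.values()) / len(domain_deltas), 2)

print(domain_deltas)   # {'Data Management': 0.5, 'AI Literacy': 0.0, 'Model Governance': 0.0}
print(aggregate_delta) # 0.17
```

Note how the aggregate figure (+0.17) conceals exactly the diagnostic detail the domain-level view exposes: all of the movement sits in a single domain.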

Interpreting Delta Patterns

The pattern of deltas across domains and over time reveals far more than individual score changes. The AITP should look for several characteristic patterns.

Balanced advancement — relatively consistent positive deltas across most domains. This suggests a well-designed transformation that is progressing on multiple fronts simultaneously. It is the desired pattern for comprehensive transformation engagements.

Pillar concentration — strong positive deltas within one or two pillars and minimal movement in others. This may indicate that the transformation is overweighted toward certain pillars — a common pattern when technology deployment outpaces organizational change or when governance maturity lags behind capability development. The cross-domain dynamics discussed in M1.3, Cross-Domain Dynamics and Maturity Profiles, are directly relevant here.

Isolated spikes — dramatic improvement in a single domain with minimal movement elsewhere. This may reflect genuine focused investment or may indicate scoring inconsistency. The AITP should investigate before accepting an isolated spike as valid.

Regression — negative deltas indicating that maturity has decreased. While counterintuitive, regression does occur — typically when organizational changes (leadership turnover, reorganization, budget cuts) disrupt capabilities that had been developing. Regression is a signal for immediate investigation and intervention.

Stagnation — zero or near-zero deltas across multiple domains and assessment periods. This is the most concerning pattern because it suggests that transformation activities are not translating into maturity improvement. Stagnation diagnosis is addressed later in this article.

Weighted Delta Analysis

Not all domains are equally important to every transformation. The AITP may apply weights to domain deltas based on the transformation’s priorities, producing a weighted progression score that reflects strategic importance. For example, if governance maturity is the primary transformation objective, governance domain deltas may receive higher weights in the aggregate calculation.

Weighting should be transparent and established during measurement framework design, not applied retroactively to produce favorable results.
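A weighted progression score can be sketched as below. The weights and deltas are invented for illustration; in practice they would come from the measurement framework design:

```python
# Hypothetical weighted delta analysis. Weights reflect strategic
# priority (here, governance-led transformation) and are fixed during
# measurement framework design, never retrofitted.
deltas  = {"Model Governance": 0.5, "Data Management": 0.5, "AI Literacy": 0.0}
weights = {"Model Governance": 0.5, "Data Management": 0.3, "AI Literacy": 0.2}

# Sanity check: weights should sum to 1 so the result stays on the
# same scale as an unweighted mean.
assert abs(sum(weights.values()) - 1.0) < 1e-9

weighted_delta = sum(deltas[d] * weights[d] for d in deltas)
print(round(weighted_delta, 2))  # 0.4 -> governance progress dominates the score
```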

Velocity Measurement

Delta analysis tells you how far the organization has progressed. Velocity measurement tells you how fast it is progressing — and whether the rate of progress is accelerating, decelerating, or stable.

Calculating Velocity

Maturity velocity is the rate of score change per unit of time:

Velocity = Delta / Time Period

A domain that advances from 2.0 to 3.0 over six months has a velocity of +1.0 per six months (or approximately +0.17 per month). A domain that advances the same distance over twelve months has half the velocity.
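The velocity arithmetic from the example above, as a minimal sketch:

```python
def velocity(previous: float, current: float, months: float) -> float:
    """Maturity velocity: score delta per month of elapsed time."""
    return (current - previous) / months

# The article's example: advancing from 2.0 to 3.0 over six months
# versus the same distance over twelve months.
v6  = velocity(2.0, 3.0, 6)
v12 = velocity(2.0, 3.0, 12)
print(round(v6, 2), round(v12, 2))  # 0.17 0.08 -> twelve months is half the velocity
```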

Velocity is useful for:

Progress forecasting — extrapolating current velocity to estimate when target maturity levels will be reached. This informs roadmap adjustments and stakeholder expectation management.

Resource allocation — domains with low velocity relative to their importance may warrant additional investment. Domains with high velocity may be able to sustain progress with reduced support.

Cross-domain comparison — comparing velocity across domains reveals where transformation energy is producing results and where it is not.
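Progress forecasting by simple extrapolation can be sketched as below. Real trajectories typically decelerate as maturity rises, so an estimate like this is best treated as an optimistic bound rather than a commitment:

```python
# Hedged sketch: linear extrapolation of current velocity to estimate
# when a target maturity level will be reached. The function name and
# parameters are illustrative, not a COMPEL-prescribed calculation.
def months_to_target(current: float, target: float, monthly_velocity: float):
    """Return the estimated months to reach target, or None if not advancing."""
    if monthly_velocity <= 0:
        return None  # stalled or regressing: linear extrapolation is meaningless
    return (target - current) / monthly_velocity

est = months_to_target(current=2.5, target=4.0, monthly_velocity=0.17)
print(round(est, 1))  # ~8.8 months at the current rate
```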

The most informative velocity analysis examines trends over multiple assessment periods. Three or more data points are needed to establish a meaningful trend.

Accelerating velocity — maturity is advancing at an increasing rate. This is characteristic of domains where foundational investments are beginning to compound. It is an encouraging signal that often precedes breakthrough advancement.

Decelerating velocity — maturity advancement is slowing. This is common as organizations approach higher maturity levels, where the requirements for advancement become more demanding. It can also signal resource constraints, change fatigue, or implementation barriers.

Stable velocity — a consistent rate of advancement. This is the most sustainable pattern and typically characterizes well-managed transformation workstreams.

Stalled velocity — velocity has dropped to zero or near zero. This requires immediate diagnostic attention, as discussed in the stagnation section below.
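One way to operationalize these four patterns is a simple classifier over the velocity series. The thresholds here are arbitrary illustrations, not COMPEL-prescribed values:

```python
# Illustrative classifier for velocity trends across three or more
# assessment periods. Both epsilon thresholds are invented examples;
# in practice they would be calibrated to the assessment's noise level.
def classify_trend(velocities, stall_eps=0.02, change_eps=0.02):
    if len(velocities) < 3:
        raise ValueError("need at least three assessment periods")
    if abs(velocities[-1]) <= stall_eps:
        return "stalled"  # latest velocity is at or near zero
    # Period-over-period changes in velocity.
    diffs = [b - a for a, b in zip(velocities, velocities[1:])]
    if all(d > change_eps for d in diffs):
        return "accelerating"
    if all(d < -change_eps for d in diffs):
        return "decelerating"
    return "stable"

print(classify_trend([0.05, 0.10, 0.18]))  # accelerating
print(classify_trend([0.20, 0.12, 0.01]))  # stalled (latest near zero)
```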

Trend Identification and Maturity Trajectories

Beyond individual domain analysis, the AITP should examine transformation-wide trends that emerge from the aggregate progression data.

The Maturity Trajectory Curve

Most organizations’ maturity trajectories follow a characteristic pattern when mapped over time:

Initial rapid advancement in the first assessment period, as the transformation addresses the most obvious gaps and implements foundational capabilities. Early gains often come from relatively straightforward interventions — establishing policies that did not exist, deploying technology that was already available, implementing training that was already designed.

Mid-engagement moderation as the transformation tackles more complex changes that require deeper organizational shifts. The rate of advancement slows as improvements require behavior change, process embedding, and cultural evolution rather than simple implementation.

Late-engagement consolidation where scores stabilize at a level that reflects the organization’s genuine operating maturity. Further advancement requires sustained practice and organizational learning, not just project-based intervention.

This pattern is normal, and the AITP should set stakeholder expectations accordingly during engagement design. The pursuit of linear, constant-rate improvement is unrealistic and creates unnecessary pressure that can distort measurement practices.

Benchmark Comparisons

Where the AITP has access to benchmark data — maturity scores from comparable organizations or industry norms — progression can be contextualized against external reference points. Benchmarks help answer whether the organization is progressing faster or slower than peers and whether achieved maturity levels are competitive for the industry and organization size.

The AITP should use benchmarks carefully. Comparability requires similar assessment methodologies, organizational contexts, and maturity model interpretations. Cross-client benchmarks are more reliable when the AITP or firm has conducted both assessments using the same COMPEL assessment methodology.

When Maturity Scores Plateau: Diagnosis and Intervention

Maturity plateaus — periods where scores stop advancing despite ongoing transformation activity — are among the most important signals the AITP can detect. They require diagnostic investigation, not just reporting.

Diagnosing Plateau Causes

Plateaus have multiple potential causes, and the AITP must distinguish between them because each requires a different response.

Ceiling effects — the organization has reached the maturity level achievable through the current transformation approach. Advancing further requires fundamentally different interventions (for example, moving from policy implementation to culture change, or from tool deployment to organizational integration). This is a natural plateau and signals the need for transformation approach evolution, not a problem with execution.

Absorption limits — the organization is absorbing change at its maximum rate, and additional interventions cannot produce further advancement until current changes are digested. This connects to the change saturation concepts in organizational readiness (M1.6, Change Management for AI Transformation). The appropriate response is to reduce the pace of new change and allow consolidation.

Execution gaps — planned transformation activities are not occurring, or they are occurring but with insufficient quality or scale. The appropriate response is to investigate and address execution barriers — which connects to the execution management disciplines in Module 2.4: Execution Management and Delivery Excellence.

Measurement artifacts — the plateau may reflect scoring drift or assessment inconsistency rather than actual stagnation. The AITP should verify methodological consistency before concluding that the organization has genuinely plateaued.

Structural barriers — organizational structures, incentives, or power dynamics are preventing advancement beyond a certain level. A governance domain may plateau at Developing (Level 2) because senior leadership has not genuinely committed to AI governance despite endorsing it rhetorically. These barriers require political and organizational intervention, not more transformation activity.

Intervention Strategies

Based on diagnosis, the AITP designs appropriate interventions:

For ceiling effects: redesign the transformation approach for the domain, introducing interventions appropriate to the next maturity level. The maturity level descriptors for each domain (as detailed in Module 1.3) provide guidance on what characterizes higher levels and what interventions support advancement.

For absorption limits: slow the pace of change, focus on embedding and consolidating recent improvements, and revisit the change management approach with attention to organizational capacity.

For execution gaps: escalate through the engagement governance structure, address resource constraints, and potentially restructure workstreams. Connect to the execution management techniques in Module 2.4.

For measurement artifacts: recalibrate the assessment methodology, conduct inter-rater reliability checks, and if necessary, restate scores with improved methodology.

For structural barriers: engage executive sponsors to address the underlying organizational dynamics. These conversations require political skill and professional courage, as discussed in M2.1, The AITP as Engagement Leader — Professional Practice and Ethics.

Communicating Maturity Progression

Maturity progression data must be communicated effectively to drive decisions and maintain stakeholder engagement. The AITP designs communication approaches tailored to different audiences.

Executive Reporting

For executive audiences, present aggregate and pillar-level progression with clear trend indicators. Use visual formats — progress charts showing movement from baseline through successive assessments, radar charts comparing current maturity profiles to baseline and target profiles, and traffic light indicators for domains at risk of underperformance.

The narrative should connect maturity progression to business outcomes wherever possible. A score improvement from 2.0 to 3.0 in a Process domain is meaningful to the AITP but abstract to the Chief Financial Officer. Translating that improvement into operational terms — “the organization has moved from ad hoc data management to a governed, repeatable data pipeline that reduces model training time by forty percent” — makes maturity progression tangible.

Working-Level Reporting

For domain owners and transformation team members, present domain-level detail with specific evidence of score changes, areas of strength, and areas requiring attention. Working-level reporting should be actionable — connected to specific improvement activities and accountable to specific team members.

Progression Against Plan

The AITP should compare actual maturity progression against the targets established in the transformation roadmap (Module 2.3: Transformation Roadmap Architecture). This comparison reveals whether the transformation is on track, ahead, or behind plan — providing the basis for governance decisions about resource allocation, timeline adjustments, and strategy refinement.

When actual progression deviates from planned progression, the AITP must present both the deviation and a diagnosis of its causes, enabling informed decisions rather than reactive responses.
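The comparison itself can be sketched with invented domains, planned deltas, and an assumed tolerance band separating normal variation from deviations that warrant diagnosis:

```python
# Hypothetical progression-against-plan check. The +/-0.25-point
# tolerance band is an arbitrary example, not a COMPEL standard.
planned = {"Data Management": 1.0, "AI Literacy": 0.5}  # planned delta to date
actual  = {"Data Management": 0.5, "AI Literacy": 0.6}  # measured delta to date

def status(domain, tolerance=0.25):
    """Classify a domain's actual delta against its planned delta."""
    deviation = actual[domain] - planned[domain]
    if deviation < -tolerance:
        return "behind plan"
    if deviation > tolerance:
        return "ahead of plan"
    return "on track"

for d in planned:
    print(d, "->", status(d))
# Data Management -> behind plan
# AI Literacy -> on track
```

A "behind plan" flag is the trigger for the diagnostic work described above, not an answer in itself: the governance conversation needs the deviation plus its cause.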

Looking Ahead

Maturity progression measurement provides the COMPEL-specific lens for evaluating transformation progress. But organizations invest in AI transformation to create business value, not to improve scores on a maturity model. Article 4 addresses the measurement challenge that matters most to executive sponsors — quantifying the business value and Return on Investment of the transformation program.


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.