AITM M1.4-Art03 v1.0 Reviewed 2026-04-06 Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Capability Mapping and AI-Impact Ranking


13 min read Article 3 of 14

COMPEL Specialization — AITM-OMR: AI Operating Model Associate Article 3 of 10


An operating model that is not grounded in a capability map is an architectural hallucination — a set of structural decisions made without knowing what the structure is meant to deliver. Every mature enterprise-architecture method, from TOGAF to the Business Architecture Guild’s Business Capability Model, starts with capability decomposition for the same reason: the organization’s capabilities are the durable answer to what work gets done, independent of who does it or with which technology. AI operating-model design inherits that discipline. This article walks the learner through capability-map construction at three levels of depth, teaches the AI-impact ranking that turns a generic capability map into an AI-relevant one, and names the common ways this foundational artifact is built badly.

Capability maps in three layers

A capability map is a hierarchical decomposition. The standard structure, used in both TOGAF and the Business Architecture Guild reference, goes three levels deep, though some organizations push to four for specialist domains.1

Level 1 is the coarse domain view, typically ten to fifteen domains that cover the whole enterprise. For a commercial bank the L1 domains might be customer management, lending, payments, deposits and savings, treasury, risk, finance, human resources, technology, and enterprise services. For a pharmaceutical manufacturer they might be research, clinical development, regulatory affairs, commercial, manufacturing, supply chain, finance, human resources, and corporate services. L1 gives the sponsor a single-page view of the enterprise without any AI context yet — the map describes the organization first, before any AI lens is applied.

Level 2 decomposes each L1 domain into five to fifteen component capabilities. The bank’s lending domain at L2 might include product design, origination, underwriting, servicing, collections, and portfolio risk. The pharmaceutical research domain at L2 might include target identification, lead discovery, preclinical development, clinical study design, and knowledge management. L2 is the layer where most useful analysis happens: it is granular enough to differentiate capabilities meaningfully and coarse enough to remain legible to non-architects.

Level 3 decomposes each L2 capability into individual processes or sub-capabilities. Underwriting at L3 might include application intake, document verification, credit-score integration, risk-model execution, pricing, decision issuance, and audit-trail capture. Clinical study design at L3 might include protocol drafting, statistical analysis planning, site selection, and endpoint definition. L3 is where technology choices begin to attach to capability structure.

The specialist working at the Associate level builds L1 and L2 systematically, then drops to L3 selectively for capabilities that warrant deeper AI-impact analysis. A full L3 map for a large enterprise can reach hundreds of leaf nodes and becomes unusable without tooling. The map’s value is diagnostic, not encyclopedic.

[DIAGRAM: ConcentricRings — capability-map-three-layers — inner ring “L1 Domains” (10-15 labels), middle ring “L2 Capabilities” (50-100 labels), outer ring “L3 Processes” (selective, not exhaustive); primitive shows the hierarchy and makes the “selective depth at L3” design choice visible]
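
For readers who want to make the selective-depth principle concrete, the hierarchy can be sketched as a small tree structure. The sketch below is illustrative only: the class name, fields, and decompose helper are assumptions made for this article, not part of TOGAF, the Business Architecture Guild reference, or any particular architecture tool.

```python
from dataclasses import dataclass, field


@dataclass
class Capability:
    """One node in the capability map: an L1 domain, L2 capability, or L3 process."""
    name: str
    level: int                                   # 1, 2, or 3
    children: list["Capability"] = field(default_factory=list)

    def decompose(self, *names: str) -> list["Capability"]:
        """Add child capabilities one level down and return them."""
        added = [Capability(n, self.level + 1) for n in names]
        self.children.extend(added)
        return added


# An L1 domain with its L2 capabilities (the commercial-bank lending example).
lending = Capability("Lending", level=1)
l2 = lending.decompose("Product design", "Origination", "Underwriting",
                       "Servicing", "Collections", "Portfolio risk")

# Selective L3 depth: only underwriting warrants process-level decomposition here.
underwriting = l2[2]
underwriting.decompose("Application intake", "Document verification",
                       "Credit-score integration", "Risk-model execution",
                       "Pricing", "Decision issuance", "Audit-trail capture")
```

The point of the sketch is the asymmetry: one L2 capability carries L3 children and the other five do not, which is exactly the selective-depth design choice the diagram makes visible.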

The AI-impact ranking

A generic capability map is an architecture artifact. The operating-model specialist’s move is to overlay an AI-impact ranking on top. The ranking answers a specific question: for each capability, what is the potential for AI to change how the capability is delivered, priced, staffed, or measured in the next three to five years?

A useful ranking uses four values, each with a named operational meaning.

  • Transformational capabilities are those where AI changes the fundamental economics or structure: the capability as practiced in 2030 will look materially different from the capability as practiced in 2024. Underwriting in consumer lending is transformational; AI is not augmenting it at the edges but restructuring who performs it, how fast it is performed, and how accurately it is performed.
  • Augmentative capabilities are those where AI makes the existing work faster, cheaper, or more accurate without changing the structure of the capability itself. Contract review in legal is augmentative: lawyers still review contracts, but AI drafts, summarizes, and flags issues that would previously have taken hours.
  • Marginal capabilities are those where AI has measurable but modest impact: some tasks automate, some workflows accelerate, but the capability’s overall shape is largely intact. Budgeting and forecasting in corporate finance is often marginal; some forecasting quality improves, but the capability’s structure does not change.
  • Unaffected capabilities are those where AI has little realistic impact in the planning horizon. Facilities management, some regulated clinical-safety activities, and certain artisan production processes fall into this tier.

The four-level ranking is not a precise predictive instrument. It is a sorting mechanism that tells the operating-model designer where to concentrate effort. Transformational capabilities deserve intensive design attention — the operating model must explicitly plan for the structural change. Augmentative capabilities deserve moderate design attention — the operating model must provide the platform and enablement to deliver the augmentation. Marginal capabilities receive light design attention. Unaffected capabilities can safely be deprioritized in AI operating-model work.

Each capability also carries an evidence-based rationale for its ranking. BCG’s published AI Capability Assessment work with enterprise clients names evidence sources such as published benchmark studies, peer-enterprise precedent, and internal pilots.2 The rationale keeps the ranking auditable. A capability ranked transformational without evidence is opinion; a ranking backed by a named benchmark study, a cited internal pilot, and a named peer case is a defensible design input.

[DIAGRAM: Scoreboard — ai-impact-ranking-matrix — table with columns “Capability (L2)”, “AI-impact ranking (Transformational/Augmentative/Marginal/Unaffected)”, “Rationale evidence”, “Accountable owner”, “Operating-model implication”; sample rows showing mixed rankings across domains; primitive makes the ranking’s evidentiary discipline visible]
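
The matrix’s evidentiary discipline can also be expressed as a minimal record, which makes the rule checkable: a transformational or augmentative claim without a named source fails a simple test. This is a hedged sketch; the enum values, field names, and is_defensible rule are assumptions for illustration, not a prescribed COMPEL schema.

```python
from dataclasses import dataclass
from enum import Enum


class AIImpact(Enum):
    TRANSFORMATIONAL = "transformational"   # changes the capability's economics or structure
    AUGMENTATIVE = "augmentative"           # faster, cheaper, or more accurate; structure intact
    MARGINAL = "marginal"                   # measurable but modest impact
    UNAFFECTED = "unaffected"               # little realistic impact in the planning horizon


@dataclass
class CapabilityRanking:
    """One row of the ranking matrix, mirroring the scoreboard columns."""
    capability: str          # L2 capability name
    impact: AIImpact
    evidence: list[str]      # named benchmark studies, internal pilots, peer cases
    owner: str               # accountable owner
    implication: str         # operating-model implication

    def is_defensible(self) -> bool:
        """Transformational and augmentative claims need at least one named source."""
        if self.impact in (AIImpact.TRANSFORMATIONAL, AIImpact.AUGMENTATIVE):
            return bool(self.evidence)
        return True
```

A row that fails is_defensible is the code-level analogue of the ranking that is opinion rather than a defensible design input.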

Business versus enabling capabilities

The capability map distinguishes business capabilities from enabling capabilities. Business capabilities produce outcomes that external customers value directly — underwriting a loan, dispensing a prescription, delivering a package. Enabling capabilities produce outcomes that support business capabilities — managing talent, operating IT platforms, handling finance. The distinction matters for operating-model design because AI impact concentrates differently across the two.

Business capabilities are where AI competitive advantage is won and lost. A bank that transforms its underwriting capability has changed the economics of a customer-facing capability; a bank that transforms its talent-management capability has improved an internal efficiency. Both matter, but the operating-model design conversation is different. Transformational business capabilities often require new organizational structures, new partnerships, and new talent strategies. Transformational enabling capabilities often require new platforms, new vendor relationships, and new internal tooling.

The specialist’s capability map always flags whether each capability is business or enabling, because the flag shapes how the operating model will treat the capability. Confusing the two — treating a transformational business capability as if it were an enabling one — produces underspecified operating-model designs. The reverse confusion produces overspecified, expensive designs for capabilities that do not warrant the investment.

The academic-artifact failure

The most common failure pattern in capability mapping is the academic artifact — a comprehensive, beautiful capability map that no one uses. The specialist builds the map to enterprise-architecture standards, decomposes exhaustively to L3 or L4, attaches a sophisticated ranking methodology, and delivers a document that is correct in every detail and irrelevant to the operating-model decisions at hand.

Two disciplines prevent the failure. First, the map is built to the depth the operating-model questions require, not to the depth the methodology allows. If the sponsor’s question is which business domains warrant their own AI practice, L1 and L2 are usually sufficient. If the question is which processes to target in a transformation program, selective L3 depth on transformational capabilities suffices. Universal L3 or L4 depth serves only the sponsor who is buying a capability map as its own deliverable.

Second, the map is tested against a specific operating-model decision before it is finalized. If the map cannot answer the question “which capabilities warrant embedded spokes in a hybrid archetype”, it is not yet useful. If it cannot answer “which platforms does the CoE need to prioritize”, it is not yet useful. A capability map that produces defensible answers to two or three named operating-model questions is finished; a map that produces beautiful categorization but no operating-model guidance is not.

A worked example — commercial banking

A concrete example helps anchor the abstract discussion. Consider a large commercial bank at the L2 level in its lending domain. The L2 capabilities typically include product design, origination, underwriting, servicing, collections, and portfolio risk. Applying the four-level AI-impact ranking with evidence produces a differentiated picture.

  • Product design is typically marginal: AI tools support market analysis and pricing, but the capability’s structure remains human-led.
  • Origination is augmentative: document extraction, intake automation, and initial-triage AI speed the process substantially without restructuring it.
  • Underwriting is transformational: machine-learning credit-risk models, document-analysis automation, and emerging agentic workflows change who performs underwriting, how fast it occurs, and what decisions the human underwriter retains.
  • Servicing is augmentative: conversational AI handles a rising share of customer interactions, but human servicing agents remain for complex cases.
  • Collections is transformational: AI models segment customers, predict payment behaviour, and personalize collection strategies in ways that restructure the capability.
  • Portfolio risk is augmentative trending toward transformational: AI-driven portfolio monitoring detects emerging risks faster than traditional methods without yet restructuring the capability’s governance.

The ranking pattern drives distinctly different operating-model decisions. The transformational capabilities (underwriting, collections) justify dedicated AI investment, specialist talent, and deep governance integration. The augmentative capabilities (origination, servicing, portfolio risk) justify platform-enabled augmentation but not structural transformation. The marginal capability (product design) receives the minimum attention the operating model requires to support its limited AI use. The pattern is specific enough to drive operating-model choices and general enough to apply across comparable banks.

A specialist producing the ranking for a specific bank would use the bank’s internal evidence to confirm or adjust the pattern. Banks whose underwriting differs structurally from the industry norm may rank the capability differently; banks with particularly mature or immature collections operations may rank collections differently. The general pattern is a starting point, not a conclusion.
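
A minimal, self-contained sketch of the pattern above, assuming each L2 capability reduces to a single ranking string; the dictionary contents and the attention ordering are illustrative starting points, not a bank-specific assessment.

```python
# Rough ordering of operating-model design attention by ranking tier.
ATTENTION = {"transformational": 3, "augmentative": 2, "marginal": 1, "unaffected": 0}

# The general lending-domain pattern described above; a specialist would confirm
# or adjust each entry against the bank's own internal evidence.
lending_l2_rankings = {
    "Product design": "marginal",
    "Origination": "augmentative",
    "Underwriting": "transformational",
    "Servicing": "augmentative",
    "Collections": "transformational",
    "Portfolio risk": "augmentative",   # trending toward transformational
}

# List the lending capabilities in the order they warrant design attention.
for capability, impact in sorted(lending_l2_rankings.items(),
                                 key=lambda kv: ATTENTION[kv[1]], reverse=True):
    print(f"{capability:15} {impact}")
```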

Cross-checking the map

A single-author capability map is brittle. Three cross-checks strengthen it before it becomes an operating-model input.

The first cross-check is the business-leader review. Five or six business-unit leaders review the L1 and L2 decomposition and confirm that the capabilities named reflect how their units actually work. Leaders often flag capabilities that the specialist’s initial pass missed or merged inappropriately. The review takes sixty to ninety minutes per leader and uncovers more issues than any amount of desk research.

The second cross-check is the ranking defensibility review. The specialist walks through each transformational and augmentative ranking with a senior AI technologist, asking whether the named evidence supports the claim. Rankings without defensible evidence are reclassified as “unranked pending evidence” rather than allowed to stand on the specialist’s assertion alone. The review is the single most effective quality gate against speculative rankings.

The third cross-check is the missing-capability audit. The specialist reviews the map against the organization’s public strategy documents, recent earnings calls, and stated priorities. Capabilities that appear in strategy but not in the map are flagged. Capabilities that dominate the strategy but rank as marginal on the map deserve a second look — the ranking may be wrong, or the strategy may be poorly aligned to the capability structure, and either finding is valuable.

Maintaining the map over time

A capability map that is current on the day it is delivered and stale six months later has produced transient value. Organizations change — business units reorganize, capabilities are divested, new capabilities are acquired, external factors reshape what matters. The specialist’s design must include a maintenance plan for the map.

Three maintenance disciplines are common in mature practice. The first is the quarterly rank review, in which the AI-impact rankings are re-evaluated against the previous quarter’s evidence. A capability ranked augmentative in Q1 that has produced transformational evidence in Q2 warrants re-ranking. A capability ranked transformational in Q1 that has seen slower-than-expected AI impact warrants a downshift. The quarterly review keeps the ranking honest without requiring wholesale map rework.

The second discipline is the annual structural review, in which the L1 and L2 decomposition is re-examined against the organization’s current business structure. Mergers, divestments, reorganizations, and strategic pivots all reshape the decomposition at the structural level. The annual review updates the map’s skeleton rather than its ranking overlay.

The third discipline is the integration with the enterprise-architecture function where one exists. Many organizations have enterprise-architecture teams that already maintain capability models for the broader business. The AI-impact ranking overlay belongs on top of their model rather than in a separate parallel artifact. The AI operating-model specialist who contributes the ranking overlay to the enterprise-architecture function — and arranges for the ranking to be refreshed as part of the enterprise-architecture cadence — has produced a durable integration rather than a standalone document.

The role of internal data

The ranking discipline asks for evidence. Much of the evidence in early-stage engagements comes from external sources — peer benchmarks, published research, consultancy studies. As the organization’s own AI practice matures, internal data becomes the more persuasive evidence. Internal pilot results, deployed-system performance, use-case economics, and internal productivity measurements tell the specialist what is actually happening in this organization rather than what is happening in general.

The specialist’s task is to favour internal evidence over external when both are available. An external benchmark saying that AI produces a thirty-percent reduction in underwriting time in financial services is useful in the absence of internal evidence; it is much less useful when the organization has its own underwriting pilot that produced a twelve-percent reduction. The internal number is the one that should drive the ranking. Specialists who over-rely on external benchmarks and under-use internal evidence produce maps that reflect consulting-firm aggregate wisdom rather than the organization’s specific reality.
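
A minimal sketch of that preference rule, assuming each evidence source reduces to a single effect-size figure; the function name and the numbers simply reuse the underwriting example above and are otherwise hypothetical.

```python
from typing import Optional, Tuple


def ranking_input(internal_result: Optional[float],
                  external_benchmark: Optional[float]) -> Tuple[float, str]:
    """Return the effect size that should drive the ranking, and its source."""
    if internal_result is not None:          # internal evidence wins when it exists
        return internal_result, "internal evidence"
    if external_benchmark is not None:       # otherwise fall back to the external number
        return external_benchmark, "external benchmark"
    raise ValueError("no evidence available: hold as 'unranked pending evidence'")


# Underwriting-time reduction: a 12% internal pilot result drives the ranking even
# though a published benchmark claims 30%, because it reflects this organization.
print(ranking_input(internal_result=0.12, external_benchmark=0.30))
# -> (0.12, 'internal evidence')
```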

The disciplined integration of internal evidence also exposes a second benefit: it reveals whether the organization’s internal measurement is capable of supporting operating-model decisions. An organization whose internal pilots produce ambiguous or unmeasured outcomes has a deeper problem than a capability-ranking question — it lacks the measurement discipline that informs every operating-model decision downstream. The specialist who discovers the measurement gap early has discovered the most valuable finding of the engagement.

Summary

The capability map is the operating-model designer’s foundational artifact. A three-layer hierarchical decomposition, selectively deepened on capabilities with significant AI impact, overlaid with a four-level AI-impact ranking backed by named evidence, and cross-checked with business leaders, produces an input the operating model can actually rest on. Capability maps fail when they become academic rather than operational, when they are decomposed exhaustively rather than selectively, and when rankings are asserted rather than evidenced. Article 4 moves from the capability foundation to the first structural choice that depends on it — the design of the Centre of Excellence.


Cross-references to the COMPEL Core Stream:

  • EATF-Level-1/M1.2-Art09-Mapping-COMPEL-to-Your-Organization.md — the Core Stream anchor for mapping COMPEL onto the specific organizational capability structure
  • EATF-Level-1/M1.3-Art04-Process-Pillar-Domains-Use-Cases-and-Data.md — domain and use-case structure within the twenty-domain maturity model

Q-RUBRIC self-score: 89/100

© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. The Open Group, TOGAF Standard, Version 10 (2022), Section on Capability-Based Planning, https://www.opengroup.org/togaf (accessed 2026-04-19).

  2. Boston Consulting Group, “Where’s the Value in AI?” BCG Henderson Institute (2024), https://www.bcg.com/publications/2024/from-potential-to-profit-with-genai (accessed 2026-04-19).