AITM M1.4-Art04 v1.0 Reviewed 2026-04-06 Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Centre of Excellence Design

12 min read Article 4 of 14

COMPEL Specialization — AITM-OMR: AI Operating Model Associate Article 4 of 10


The AI Centre of Excellence is the operating-model component most often stood up, and most often misbuilt. The misbuild usually follows a consistent pattern: the organization announces a CoE, staffs it with a combination of returned consultants and newly hired specialists, funds it from a central IT budget, and charters it to “lead AI across the enterprise”. Within eighteen months the CoE is either a bottleneck the business units route around or a cost centre the CFO is asking to justify. The specialist’s work is to scope the CoE as a service provider with explicit services, an explicit charter, an explicit staffing shape, explicit funding, and explicit measurement of whether it is producing net value. This article walks through those five scoping layers. It does not advocate for any particular CoE shape — the size and scope depend on the archetype chosen in Article 2 and the capability map from Article 3.

The CoE as service provider

The most important conceptual move in CoE design is to stop thinking of the CoE as a leadership entity and start thinking of it as a service provider. A leadership entity issues directives; a service provider offers a catalogue of services that business units choose to consume. The distinction shapes every downstream decision. A service provider measures itself on what its consumers adopt; a leadership entity measures itself on what it publishes. A service provider’s accountability runs toward the business units it serves; a leadership entity’s accountability runs toward executive sponsors. In a functioning hybrid operating model the CoE is always the former.

The service-provider framing does not make the CoE subordinate or reduce its strategic importance. A CoE with a strong catalogue of services that business units actively consume is more strategically important than a leadership entity whose directives the business units ignore. The framing simply clarifies what produces value. The published case of Microsoft’s internal AI CoE, described in multiple company engineering and research posts, emphasizes the catalogue-and-consumption model and names the internal NPS score of the platform’s consumers as a primary accountability metric.1 The specialist designing a new CoE inherits this design language: services, catalogue, consumption, satisfaction.

Five service families

The CoE’s service catalogue typically spans five families. Not every CoE offers all five; scope depends on archetype and maturity.

Platform services provide the shared technology stack that AI work is built on — the model inference layer, the grounding and retrieval infrastructure, the prompt-management tooling, the evaluation harness, the observability stack. Platform is the most expensive service family to stand up and operate, and the most valuable when done well because it eliminates per-team platform duplication.

Standards services define the policies, patterns, and practices that all AI work in the organization must follow. Model risk classification, data-access controls, evaluation floors, safety requirements for agentic systems, and documentation templates sit here. Standards are cheap to produce and expensive to enforce. A CoE that produces standards but lacks the enforcement mechanism has produced wallpaper.

Enablement services build the skill of AI practitioners outside the CoE. Training programmes, pair-delivery engagements, office-hours, communities of practice, and certification programmes belong here. Enablement is the service family where the hub-spoke hybrid archetype lives or dies: a hub that does not actively build spoke capability produces dependency rather than distribution.

Governance services operate the control structures that make AI work auditable — risk classification, architecture review, model card generation, post-deployment monitoring, incident response. Governance is the most regulation-sensitive service family and, in regulated industries, often the reason a CoE exists in the first place.

Advisory services provide senior practitioner time to business units for use-case scoping, solution architecture, and deep-problem consultation. Advisory is the scarcest service family because senior practitioner time is the organization’s most limited resource. A CoE that gives advisory away for free will exhaust it; one that rations it well will produce disproportionate impact.

[DIAGRAM: HubSpoke — coe-service-catalogue — central hub labelled “Centre of Excellence” with five outward spokes labelled with one service family each (Platform, Standards, Enablement, Governance, Advisory); around the ring sit business-unit consumer nodes with arrows indicating consumption of specific services; primitive shows the CoE as a service-catalogue-and-consumption structure rather than a leadership hierarchy]
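To make the catalogue-and-consumption framing concrete, here is a minimal sketch of a service catalogue as data. Everything in it (the `Service` shape, the entry names, the consuming units) is a hypothetical placeholder for illustration only; the point is that a service-provider CoE records who consumes what, not just what it offers.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    """One entry in the CoE service catalogue."""
    name: str
    family: str  # Platform | Standards | Enablement | Governance | Advisory
    consumers: list[str] = field(default_factory=list)  # business units consuming it

# Hypothetical catalogue entries for illustration only.
catalogue = [
    Service("Model inference layer", "Platform", ["Retail", "Claims"]),
    Service("Model risk classification", "Governance", ["Claims"]),
    Service("Pair-delivery engagements", "Enablement", ["Retail"]),
    Service("Prompt-management tooling", "Platform"),
]

# The service-provider question is consumption, not existence.
for svc in catalogue:
    status = "consumed" if svc.consumers else "unconsumed: review or retire"
    print(f"{svc.family:<12} {svc.name:<30} {status}")
```

An unconsumed entry is a prompt for a design conversation, not an automatic cut; the sketch only makes the question visible.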

The CoE charter

The CoE charter is the short document that names what the CoE is for, what it will and will not do, and how its value will be measured. A good charter is two pages. A long charter is a warning sign.

Five sections cover the ground:

  • Purpose — the one-sentence reason the CoE exists, usually expressed as the value it will produce for business-unit customers.
  • Scope — the service families the CoE will deliver, with named exclusions for the service families it will not.
  • Customers — the named business units or functions the CoE serves, with a service-level expectation for each.
  • Funding model — how the CoE is paid for (the topic of Article 6 in full detail) and what the funding buys.
  • Measurement — the two to four metrics the CoE will be accountable for, and to whom.

The explicit exclusions are as important as the inclusions. A CoE that lists what it does not do has taken a position that the sponsor can hold it to. A CoE whose scope is open-ended has taken no position and will be blamed for every AI failure across the enterprise irrespective of whether the CoE was responsible.

A well-known published case is the Lloyds Banking Group AI CoE, whose evolution has been documented in multiple Harvard Business Review and McKinsey-led studies between 2022 and 2024.2 The bank’s CoE charter, as described in publicly available materials, explicitly scoped the CoE to platform, standards, and governance services, and explicitly excluded use-case delivery — use cases are delivered by business-unit teams using the CoE’s services. The exclusion is what made the CoE sustainable: without it, the CoE would have become a demand-flooded bottleneck.
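One way to keep that kind of exclusion from eroding is to treat the charter as structured data that reviews can check against. A minimal sketch follows, in which every section name, customer, and metric is a hypothetical placeholder rather than a COMPEL template:

```python
# Illustrative charter skeleton; all values are hypothetical placeholders.
charter = {
    "purpose": "Cut time-to-production for business-unit AI use cases.",
    "scope": {
        "includes": ["Platform", "Standards", "Governance"],
        "excludes": ["Use-case delivery", "Unbounded advisory"],
    },
    "customers": {
        "Retail": "platform SLA 99.5%",
        "Claims": "governance review within 10 working days",
    },
    "funding_model": "Central allocation, reviewed annually (detail in Article 6).",
    "measurement": ["platform active users", "internal NPS", "time-to-production"],
}

def in_scope(service_family: str) -> bool:
    """Anything not explicitly included is out of scope by default."""
    return service_family in charter["scope"]["includes"]

assert not in_scope("Use-case delivery")  # the Lloyds-style exclusion, held mechanically
```

The design choice worth noticing is the default: scope is closed unless a family is explicitly included, which gives the sponsor something concrete to hold the CoE to.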

Staffing shape

The staffing mix of a CoE depends on which service families it delivers and the depth at which it delivers them. A platform-heavy CoE is predominantly engineers — ML engineers, platform engineers, security engineers, site reliability engineers, and a small product layer. A standards-heavy CoE carries a larger policy, governance, and ethics staff. An enablement-heavy CoE carries a training and developer-experience staff. An advisory-heavy CoE carries senior practitioners across the AI stack.

Three staffing principles apply across shapes. First, skill depth over headcount. A CoE with ten deep practitioners outperforms a CoE with thirty adequate ones; the services the CoE offers depend on technical depth, and thin staffing produces thin services. Second, rotation with the business units. Senior CoE practitioners should rotate into business-unit engagements and senior business-unit practitioners should rotate into the CoE. A CoE that isolates its staff produces a skill monoculture and an organizational gap. Third, career-path discipline. A specialist who joins the CoE should see a clear career path — to senior practitioner, to practice lead, or to a deliberate move into a business-unit role — rather than a dead-end position. Without that discipline, retention collapses and the CoE loses the depth that justifies its existence.

Value measurement

A CoE that cannot prove its value gets defunded. The specialist’s design task is to build value measurement into the CoE from the outset.

Four measurement categories cover most useful CoE measurement. Output metrics count what the CoE produces — platform uptime, number of standards published, number of engagements completed, models in production. Output metrics are necessary but never sufficient: high output does not prove value if consumption is low.

Consumption metrics measure what business units actually use — platform active users, standards-compliance rates, training programme completions, advisory engagements requested. High consumption validates that the service catalogue reflects real demand.

Satisfaction metrics measure how well consumption went — internal NPS scores, survey responses, qualitative escalations. Satisfaction without consumption means polite indifference; consumption without satisfaction means captive demand.

Outcome metrics trace CoE services back to business outcomes — time-to-production for use cases built on the CoE platform, incident rates for AI systems governed by CoE standards, adoption rates for trained cohorts. Outcome metrics are the hardest to attribute cleanly and the most persuasive when the attribution holds.

[DIAGRAM: Scoreboard — coe-value-dashboard — four-row table with rows “Output”, “Consumption”, “Satisfaction”, “Outcome”, each with three sample metrics and traffic-light indicators; primitive shows the full measurement surface the CoE is designed to be accountable against]

The specialist publishes the measurement framework in the charter and commits to quarterly review with the sponsor. A CoE that hides from measurement is building a defunding event; a CoE that leads with measurement keeps its sponsor informed and its value visible.
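The pairing logic between consumption and satisfaction described above can be made mechanical in a quarterly scorecard review. A minimal sketch, assuming the organization has already reduced each signal to a high/low call against thresholds of its own choosing:

```python
def diagnose(consumption_high: bool, satisfaction_high: bool) -> str:
    """Read a service's consumption and satisfaction signals together."""
    if consumption_high and satisfaction_high:
        return "healthy: real demand, well served"
    if consumption_high:
        return "captive demand: consumed because mandated, not because valued"
    if satisfaction_high:
        return "polite indifference: praised in surveys, not actually used"
    return "candidate for redesign or retirement"

# Hypothetical quarterly review of two services.
print(diagnose(consumption_high=True, satisfaction_high=False))
print(diagnose(consumption_high=False, satisfaction_high=True))
```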

The sustainment question

A final design consideration: how long the CoE will last in its current form. Most CoEs should not be permanent. In hybrid archetypes, the CoE’s enablement and advisory services are explicitly designed to devolve capability into business units over time. A three-to-five-year sustainment horizon, after which the CoE either contracts to a platform-and-governance core or converts to a platform team within engineering, is honest design. A CoE chartered in perpetuity is a political structure rather than an operating-model component.

The sustainment horizon matters because it shapes staffing, career-path, and investment decisions. A CoE with a five-year horizon justifies deep platform investment; one chartered only to set initial standards does not. The specialist’s charter should name the expected sustainment horizon explicitly, with the decision trigger that would revise it — typically a maturity-level achievement across the spokes that would make the current CoE scope redundant.

CoE failure modes in practice

Four failure modes appear often enough in published case studies and consulting-firm retrospectives to warrant naming.

The empire-building failure occurs when the CoE’s leadership expands scope beyond what the charter supports — absorbing capabilities from business units, claiming decision rights that belong elsewhere, or growing headcount beyond what the service catalogue justifies. Empire-building CoEs look successful in the short term (more staff, more visibility) and collapse in the medium term (defunded when the scope proves unsupportable or when the broader organization rebels). The corrective is strict charter discipline: the charter names explicit exclusions, and the CoE director is held accountable for staying within scope.

The advisory-only failure occurs when the CoE is chartered only to advise and never to build. Advisory CoEs produce thought leadership, frameworks, and recommendations that the business units are expected to implement. The pattern fails because advice without operational engagement has limited traction — business units that cannot consume advice as an actionable service will not consume it. The corrective is to pair advisory with operational capability in at least some service families (platform, enablement).

The platform-team-that-forgot-its-purpose failure occurs when the CoE’s platform team drifts into building end-user use cases rather than platform capabilities. The drift is natural — platform engineers are talented practitioners and get asked to help with business-unit problems. Within eighteen months the platform team has become a use-case team and the platform has stopped advancing. The corrective is explicit guard-rails at the CoE director level that prohibit platform engineers from use-case engagements beyond a small advisory allocation.

The governance-theater failure occurs when the CoE’s governance services produce documents and reviews that the organization performs mechanically without producing decisions. AI ethics boards that approve everything, model risk classifications that never change a deployment decision, and architecture reviews that do not reject proposals are the symptoms. The corrective is to measure the governance service by its decision impact — how many submissions are returned for rework, how many are rejected — not by its submission volume.
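As a minimal illustration of measuring a governance service by decision impact rather than submission volume (the function name and the sample numbers are hypothetical):

```python
def decision_impact(submissions: int, returned_for_rework: int, rejected: int) -> float:
    """Share of governance submissions that changed an outcome.
    A value near zero across several quarters is the governance-theater signal."""
    if submissions == 0:
        return 0.0
    return (returned_for_rework + rejected) / submissions

# Hypothetical quarter: 40 submissions, 7 returned for rework, 1 rejected.
print(f"decision impact: {decision_impact(40, 7, 1):.0%}")  # 20%
```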

Each failure mode is subtle in early stages and painful to reverse late. The specialist designing a CoE builds the preventions into the charter, the measurement framework, and the director’s accountability rather than assuming the failures will not happen.

Rotation discipline in detail

The rotation discipline mentioned in the staffing section deserves its own treatment, because rotation is the single design feature that most reliably prevents several of the failure modes above.

Three rotation patterns are common. Inbound rotation brings business-unit practitioners into the CoE for a named period (typically six to twelve months). The inbound rotators bring domain context into the CoE’s platform and standards work, preventing the CoE from designing disconnected from the business-unit reality. Outbound rotation sends CoE practitioners into business-unit engagements for a named period. The outbound rotators build business-unit capability through direct work and return to the CoE with fresh perspective on what the service catalogue is actually producing. Structured rotation into leadership roles creates career paths that run from technical practice through CoE leadership through business-unit leadership, building a cadre of executives who understand both central and embedded AI work at first hand.

Rotation is expensive in the short term — rotating staff are less productive in the first ninety days of each new role — and valuable in the medium to long term. Organizations that skip rotation produce CoEs whose staff develop increasingly narrow understanding of the business units they serve. Organizations that commit to rotation build the mutual familiarity that makes hybrid archetypes actually work.

Summary

A Centre of Excellence is a service provider with a catalogue, a charter, a staffing shape fit to its services, a funding model, and an explicit measurement framework. Its scope depends on the archetype chosen and the capability map built. A CoE that thinks of itself as a leadership entity rather than a service provider builds for the wrong audience and fails. A CoE that hides from value measurement sets up its own defunding. A CoE without an exit or evolution plan becomes a political institution rather than an operating-model component. Article 5 moves from CoE design to the decision-rights architecture that spans the whole operating model — the layer that determines who decides what, across hub and spoke alike.


Cross-references to the COMPEL Core Stream:

  • EATF-Level-1/M1.6-Art04-The-AI-Center-of-Excellence.md — Core Stream foundational treatment of the CoE concept within COMPEL’s People pillar


© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.

Footnotes

  1. Microsoft Research, “AI Platform engineering at Microsoft” (various posts, 2022–2024), https://www.microsoft.com/en-us/research/ (accessed 2026-04-19).

  2. Harvard Business Review case discussions on Lloyds Banking Group digital transformation, 2022–2024, published at https://hbr.org/ (accessed 2026-04-19).