AITM M1.4-Art51 v1.0 Reviewed 2026-04-06 Open Access
M1.4 AI Technology Foundations for Transformation
AITF · Foundations

Lab: Design a CoE for a 5,000-Person Organization



COMPEL Specialization — AITM-OMR: AI Operating Model Associate Lab 1 of 2


Lab brief

Meridian Logistics is a fictional composite drawn from publicly described mid-market logistics and supply-chain enterprises. The company has approximately 5,000 employees across four business units: freight forwarding, contract logistics, customs brokerage, and a small technology-services subsidiary that sells supply-chain software to external customers. Annual revenue sits around 1.8 billion US dollars. AI work has been scattered across business units for two years; the board has now approved a dedicated central AI capability and has asked the chief operating officer to stand up the structure. The COO has retained you, as an AITM-OMR-credentialed specialist, to design the CoE. This lab walks the design from archetype decision through to charter, service catalogue, staffing, and measurement.

Lab inputs (summarized)

You have the following evidence at intake:

  • Interviews with the CEO, COO, CFO, and the four business-unit leaders.
  • A scan of current AI activity across business units: freight forwarding has built a dynamic-pricing model; contract logistics has two embedded data scientists working on labour-planning; customs brokerage has no AI work; the technology subsidiary embeds AI in several of its customer products and has its own product AI team.
  • The organization’s strategy document, which names “AI-powered supply-chain services” as a five-year commercial ambition.
  • A current-state technology architecture diagram showing a hybrid cloud environment (AWS and Azure) with a partial data-lake implementation.
  • The finance function’s most recent AI spend estimate: 12 million US dollars annually, concentrated in cloud infrastructure and consulting fees, with limited visibility into per-business-unit consumption.
  • Regulatory context: the company operates across the EU, US, and Asia-Pacific, with the EU AI Act already in force and US and APAC frameworks emerging.
  • Human resources data: seven data scientists and four ML engineers distributed across the business units; one head of data (reporting to the CIO) whose remit has recently been expanded to include AI.

Exercise 1 — Archetype selection (15 minutes)

Work through the archetype-selection criteria from Article 2. For each criterion, write one or two sentences naming what the evidence says about Meridian’s fit to each archetype:

  • Strategic scope — is Meridian’s AI ambition concentrated or distributed?
  • Organizational maturity — where does Meridian sit on the maturity curve?
  • Risk posture and regulatory exposure — how does the EU AI Act shape the choice?
  • Cultural tolerance for central control — what do the business-unit leaders’ interviews suggest?

Then select an archetype and write a two-paragraph defense. The defense should name which of the five archetypes (centralized, federated, embedded, hybrid, platform) you have chosen, why the other four are less fit, and what risks your chosen archetype carries that you will need to design against.

A candidate archetype choice might read: “Hybrid (hub-and-spoke). The technology subsidiary’s product-embedded AI practice is mature enough to function as a spoke; the contract logistics embedded team is a spoke-in-formation; freight forwarding needs a spoke stood up; customs brokerage is too small and specialist for its own spoke and should consume services from the hub. The CoE hub owns platform, standards, and enablement. Centralized is rejected because the technology subsidiary’s product practice is too mature to absorb centrally. Federated is rejected because the customs brokerage spoke would be uneconomic and because standards discipline is needed across the business units given the EU AI Act exposure.”

Exercise 2 — Draft the CoE charter (20 minutes)

Draft the CoE charter using the five-section structure from Article 4:

  • Purpose — one sentence naming the value the CoE will produce for its business-unit customers.
  • Scope — the service families the CoE will deliver, with explicit exclusions naming the families the CoE will not own.
  • Customers — the four business units named explicitly with a service-level expectation for each.
  • Funding model — a placeholder naming the funding approach (hybrid: central budget for platform, showback for advisory); a full cost model is beyond this lab’s scope.
  • Measurement — the three to four metrics the CoE will be accountable for.

The charter should be about one page of dense text or the equivalent in structured bullets. A one-sentence purpose statement might read: “The Meridian AI CoE exists to provide shared AI platform, enabling standards, and on-demand expertise to the four business units, accelerating their AI-enabled services without forcing each business unit to build platform and governance from scratch.”
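The hybrid funding model in the sample charter (central budget for platform, showback for advisory) implies a simple proportional allocation. A minimal sketch of how showback charges might be computed — the business-unit consumption figures and the advisory cost pool are invented for illustration:

```python
# Illustrative showback allocation: each business unit's share of the
# advisory cost pool is proportional to its consumption. All figures
# below are invented for the sketch, not taken from the lab inputs.
def showback(cost_pool: float, consumption: dict[str, float]) -> dict[str, float]:
    """Allocate cost_pool across consumers in proportion to consumption."""
    total = sum(consumption.values())
    return {bu: cost_pool * units / total for bu, units in consumption.items()}

advisory_hours = {
    "freight_forwarding": 400,
    "contract_logistics": 300,
    "customs_brokerage": 100,
    "technology_subsidiary": 200,
}
charges = showback(1_000_000, advisory_hours)  # assumed 1M USD advisory pool
```

The same function works for platform showback if inference volume per business unit is substituted for advisory hours.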

Exercise 3 — Design the service catalogue (20 minutes)

For each of the five service families from Article 4 (platform, standards, enablement, governance, advisory), decide whether the CoE will deliver that family at launch, at what initial scope, and with what measurement. A sample service-family design for platform might read: “Platform — yes at launch. Initial scope: shared model-inference layer with multi-provider support (Anthropic, OpenAI, Bedrock), centralized grounding/retrieval infrastructure, shared evaluation harness, shared observability. Scope excludes business-unit-specific feature platforms (owned by spokes). Measurement: platform uptime (target 99.5%), monthly active business units consuming the platform (target all four within twelve months), average time-to-first-inference for a new use case (target under 14 days).”

Produce the five service-family designs. Any family you mark as “not at launch” gets a one-sentence rationale naming why and when it might be added.
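A uniform record per service family keeps the catalogue reviewable at a glance and makes the launch/deferral decision explicit. A minimal sketch, with hypothetical field names and one possible set of answers (not the only defensible ones):

```python
from dataclasses import dataclass, field

@dataclass
class ServiceFamily:
    """One row of the CoE service catalogue (field names are illustrative)."""
    name: str
    at_launch: bool
    scope: str = ""                 # initial scope, or deferral rationale
    metrics: list[str] = field(default_factory=list)

catalogue = [
    ServiceFamily("platform", True,
                  "shared inference layer, retrieval, evaluation, observability",
                  ["uptime >= 99.5%", "all four BUs active within 12 months"]),
    ServiceFamily("standards", True,
                  "EU AI Act-aligned standards and review gates",
                  ["standards published per quarter"]),
    ServiceFamily("enablement", True,
                  "training programme and community of practice",
                  ["practitioners trained per quarter"]),
    ServiceFamily("governance", True,
                  "model-risk and compliance review",
                  ["governance incident rate"]),
    ServiceFamily("advisory", True,
                  "on-demand specialist engagements",
                  ["engagement rating >= 4.0/5.0"]),
]
launched = [f.name for f in catalogue if f.at_launch]
```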

Exercise 4 — Size the staffing (15 minutes)

Using the service catalogue from Exercise 3, size the CoE’s initial staffing. Name each role type, its initial headcount, and the rationale. The total should be consistent with the scope — a platform-heavy CoE serving four business units with the described infrastructure needs roughly 15 to 30 headcount in its first eighteen months; a smaller scope supports smaller staffing.

A sample staffing design might read: “Platform engineering — 6 roles (2 senior engineers, 3 engineers, 1 platform PM). Rationale: multi-cloud inference platform with grounding infrastructure and observability needs sustained engineering to operate; smaller staffing would produce a platform that cannot serve four business units. Standards/governance — 3 roles (1 head of AI governance, 1 senior specialist for EU AI Act compliance, 1 specialist for model risk). Rationale: EU AI Act exposure justifies dedicated specialist capability; fewer roles would force the CoE to outsource governance, creating a dependency. Advisory — 3 roles (2 senior practitioners, 1 practice lead). Rationale: four business units with active AI work need on-call specialist time; fewer roles would produce a queue. Enablement — 2 roles (1 training lead, 1 community manager). Rationale: citizen-AI programme and spoke development require dedicated enablement investment. Leadership — 1 CoE director. Total initial headcount: 15.”
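The sample staffing design’s total is easy to check mechanically; a quick tally of the headcounts quoted above:

```python
# Headcounts from the sample staffing design in the text.
staffing = {
    "platform_engineering": 6,   # 2 senior engineers, 3 engineers, 1 platform PM
    "standards_governance": 3,   # governance head, EU AI Act specialist, model-risk specialist
    "advisory": 3,               # 2 senior practitioners, 1 practice lead
    "enablement": 2,             # training lead, community manager
    "leadership": 1,             # CoE director
}
total = sum(staffing.values())
print(total)  # 15 — the low end of the 15-30 range for a platform-heavy CoE
```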

Exercise 5 — Measurement framework (15 minutes)

Using the four measurement categories from Article 4 (output, consumption, satisfaction, outcome), design the CoE’s measurement framework. For each category, name the two to three specific metrics and their initial targets.

A sample framework might read: “Output — platform uptime (target 99.5%), standards published per quarter (target 2-3), advisory engagements completed per quarter (target 20-30). Consumption — monthly active business units on the platform (target all four by month 12), platform inference volume per business unit (tracked for showback), standards-compliance rate for deployed systems (target 95%). Satisfaction — CoE internal NPS score collected quarterly from business-unit consumers (target above 30), advisory engagement rating (target above 4.0/5.0). Outcome — time to first production AI deployment for new use cases built on the platform (target 90 days), governance incident rate for CoE-governed systems (target near-zero for high-risk classification), business-unit AI practitioner retention (target above 85% annual).”
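Captured as data, the sample framework groups metrics by category with initial targets, which makes the monthly operations report a mechanical export rather than a fresh document. Structure and key names are illustrative:

```python
# Sample framework from the text, expressed as category -> metric -> target.
# Tuples denote target ranges; scalar values are thresholds.
framework = {
    "output": {
        "platform_uptime_pct": 99.5,
        "standards_per_quarter": (2, 3),
        "advisory_engagements_per_quarter": (20, 30),
    },
    "consumption": {
        "active_business_units_by_month_12": 4,
        "standards_compliance_rate_pct": 95,
    },
    "satisfaction": {
        "internal_nps_min": 30,
        "advisory_rating_min": 4.0,
    },
    "outcome": {
        "days_to_first_production_deploy_max": 90,
        "practitioner_retention_pct_min": 85,
    },
}
# The four categories match the Article 4 measurement taxonomy.
assert set(framework) == {"output", "consumption", "satisfaction", "outcome"}
```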

The measurement framework is the artifact that will be reported against in the monthly operations review; the specialist who designs it carefully produces the structure the sponsor will trust for the next several years.

Exercise 6 — Name the top three risks (10 minutes)

For the CoE design you have produced, name the top three risks to its successful first two years. For each risk, write two sentences: one naming the risk and one naming the design choice in your blueprint that addresses it.

A sample risk might read: “Technology subsidiary independence. The technology subsidiary has a mature product-embedded AI practice that may resist standards enforcement from a newly stood-up central CoE. Mitigation — the service-catalogue design makes platform, standards, and enablement opt-in for the subsidiary for the first year, with a tiered governance gate only for cross-subsidiary data flows, giving the subsidiary the autonomy it has earned while establishing the CoE’s legitimacy over eighteen months.”

Debrief

The lab produces a five-page CoE scoping document — the kind of working artifact a COO receives from an operating-model specialist at the end of a four-to-six-week scoping engagement. A well-run debrief compares how different learners approached the archetype decision, how their staffing sizes varied for similar scope, and where their measurement frameworks converged or diverged. The most instructive feedback from an instructor addresses the question of whether each learner’s CoE would actually function in the Meridian context, or whether it was designed for a different organization. A CoE design that is internally coherent but does not match the sponsor’s organization is the most common failure mode in early-career operating-model work.



© FlowRidge.io — COMPEL AI Transformation Methodology. All rights reserved.