AITE M1.2-Art36 v1.0 Reviewed 2026-04-06 Open Access
M1.2 The COMPEL Six-Stage Lifecycle
AITF · Foundations

Architect in Calibrate and Organize Stages for Agentic Systems

Transformation Design & Program Architecture · Advanced depth · COMPEL Body of Knowledge.

10 min read Article 36 of 53

This article covers what the architect contributes, what artefacts they produce, and what gate-review conversations they lead. The deliverables are concrete: specific documents with specific fields, produced on specific timelines, reviewed by specific stakeholders.

Calibrate — what the architect contributes

Calibrate is the stage where the organisation decides whether to pursue the agentic initiative at all, at what scope, and against what risk. The architect contributes four things.

Calibrate contribution 1 — Autonomy-scope proposal

The autonomy-scope proposal declares the intended autonomy level (L0–L5 per Article 2) and the business rationale. Fields:

  • Proposed autonomy envelope. What the agent does autonomously; what requires HITL; what is out of scope.
  • Level on the spectrum. L0 through L5 with justification.
  • Alternative autonomy scopes considered. Why not L1? Why not L3? What was the tradeoff?
  • Expansion path. If this succeeds, what higher-autonomy expansion might follow?
  • Contraction path. Under what conditions would the organisation reduce autonomy post-launch?

The autonomy-scope proposal is the architect’s anchor at the Calibrate gate. Most other decisions flow from it.
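The proposal's fields can be captured as a structured record that a gate reviewer can check for completeness. The following is an illustrative sketch only; the class and field names are assumptions, not COMPEL-mandated schema.

```python
from dataclasses import dataclass

@dataclass
class AutonomyScopeProposal:
    # Hypothetical record mirroring the article's five fields.
    autonomous_actions: list[str]            # what the agent does autonomously
    hitl_actions: list[str]                  # what requires human-in-the-loop
    out_of_scope: list[str]                  # explicitly excluded behaviours
    level: int                               # L0-L5 on the autonomy spectrum
    level_rationale: str
    alternatives_considered: dict[int, str]  # level -> why it was rejected
    expansion_path: str
    contraction_triggers: list[str]          # conditions for reducing autonomy post-launch

    def validate(self) -> list[str]:
        """Return gate-blocking gaps; an empty list means the proposal is reviewable."""
        gaps = []
        if not 0 <= self.level <= 5:
            gaps.append("level must be L0-L5")
        if not self.alternatives_considered:
            gaps.append("no alternative autonomy scopes considered")
        if not self.contraction_triggers:
            gaps.append("no contraction path declared")
        return gaps
```

A `validate()` pass of this kind makes "anchor at the Calibrate gate" concrete: a proposal missing its contraction path or its rejected alternatives is not yet reviewable.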

Calibrate contribution 2 — Blast-radius assessment

Blast radius is the maximum harm the agent can cause if it misbehaves. The architect produces a blast-radius assessment covering:

  • Per-session blast radius. Maximum money moved, records changed, messages sent, compute spent.
  • Per-day blast radius. Aggregate across all sessions.
  • Persistence blast radius. What survives the session — memory writes, logged claims, sent communications.
  • Reputational blast radius. Worst-case public incident given this scope.
  • Regulatory blast radius. Exposure across EU AI Act classification, sector regulation, data protection.

The blast-radius assessment informs the controls in Organize; a high-blast-radius design justifies heavier controls.
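One way the assessment feeds Organize-stage controls is as enforceable caps. The sketch below is a minimal illustration of per-session and per-day money caps only; the class names and thresholds are assumptions for this example, not part of the COMPEL framework.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlastRadiusCaps:
    # Two of the article's dimensions, expressed as hard limits.
    max_money_per_session: float
    max_money_per_day: float   # aggregate across all sessions

class BlastRadiusTracker:
    """Tracks daily spend and refuses actions that would breach either cap."""

    def __init__(self, caps: BlastRadiusCaps):
        self.caps = caps
        self.money_today = 0.0

    def authorise_spend(self, session_spend: float) -> bool:
        # Per-session cap: no single session may exceed the limit.
        if session_spend > self.caps.max_money_per_session:
            return False
        # Per-day cap: the aggregate across sessions may not exceed the limit.
        if self.money_today + session_spend > self.caps.max_money_per_day:
            return False
        self.money_today += session_spend
        return True
```

Records changed, messages sent, and compute spent would follow the same shape; persistence, reputational, and regulatory blast radius are qualitative and stay in the assessment document.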

Calibrate contribution 3 — Prohibited-use screen (Article 5)

For any proposed agent, the architect screens against EU AI Act Article 5 prohibited uses (especially relevant in public-sector contexts — Article 32) and against the organisation’s own prohibited-use list. Output: a one-paragraph finding (clear / not clear / out of scope) with citations.

Calibrate contribution 4 — Preliminary framework and runtime assessment

The architect indicates likely framework and runtime choices (LangGraph, CrewAI, AutoGen, OpenAI Agents SDK, Semantic Kernel, LlamaIndex Agents, custom — Article 11). This is not a final commitment; Organize makes that. But a credible Calibrate signal helps business sponsors understand feasibility and cost.

Calibrate gate-review conversation

Typical Calibrate gate agenda for an agentic initiative:

  1. Use case and business rationale (product lead).
  2. Autonomy-scope proposal (architect).
  3. Blast-radius assessment (architect + risk).
  4. Article 5 prohibited-use finding + EU AI Act preliminary classification (architect + compliance).
  5. Preliminary framework/runtime (architect).
  6. Go/no-go decision with conditions.

The architect’s job in the conversation is to represent the agentic-specific dimensions clearly — especially where stakeholders are pattern-matching from non-agentic ML and missing the delta. “This looks like our fraud model from 2021” is a common opening misstep; the architect’s response is to point to the autonomy, blast-radius, and Article 14 obligations that distinguish it.

Organize — what the architect commits

Organize is where the architecture is committed. Business rationale is fixed; the technical path forward is chosen. The architect produces the reference architecture and a set of named decisions.

Organize contribution 1 — Reference architecture

The reference architecture document (Article 20 pattern) shows how the agent fits into existing systems:

  • Runtime chosen (with the options considered and the rejection rationale).
  • Model provider(s) and model class (primary + fallback).
  • Tool set (with references to the tool registry — Article 26).
  • Memory design (Article 7; isolation mode; retention).
  • Safety layer (input, tool, memory, egress planes — Article 27).
  • Observability stack (Article 15).
  • Evaluation harness (Article 17).
  • Kill-switch architecture (Article 9).
  • Data flow document (Article 28).
  • Deployment model (same-cluster / segregated / per-tenant).

Organize contribution 2 — Architecture decision records

Each significant decision gets an Architecture Decision Record (ADR) — a short document capturing context, options considered, decision, consequences. Typical Organize ADRs:

  • “Why LangGraph over CrewAI for this agent.”
  • “Why Anthropic Claude as primary model, OpenAI GPT-4 as fallback.”
  • “Why MCP for tools rather than custom wrappers.”
  • “Why per-tenant memory schema rather than RLS.”
  • “Why synchronous HITL at $500 rather than $250 or $1000.”

ADRs are durable; they survive team changes and become the rationale record for Article 14 evidence years later.
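The four-part ADR shape the article names (context, options considered, decision, consequences) can be held as a small structured record and rendered for review. This is a hypothetical sketch; the field names and rendering layout are this example's choices, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class ADR:
    # The four parts named in the article, plus a title.
    title: str
    context: str
    options: list[str]
    decision: str
    consequences: str

    def render(self) -> str:
        """Render the ADR as a reviewable plain-text document."""
        opts = "\n".join(f"- {o}" for o in self.options)
        return (
            f"# {self.title}\n\n"
            f"## Context\n{self.context}\n\n"
            f"## Options considered\n{opts}\n\n"
            f"## Decision\n{self.decision}\n\n"
            f"## Consequences\n{self.consequences}\n"
        )
```

Keeping ADRs in a uniform, machine-readable shape is what lets them be collected later as part of an Article 14 evidence record.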

Organize contribution 3 — Platform-services commitments

If the organisation operates COE + federated (Article 35), Organize is where the COE commits to specific platform services the product team will consume, and where the product team commits to consuming them rather than reimplementing. Examples:

  • COE commits: OTEL-instrumented runtime; tool-registry integration; shared policy engine.
  • Product team commits: not to maintain its own sandbox; to file new-tool requests through the registrar.

Organize contribution 4 — Human-oversight design (Article 14 first draft)

Even though full Article 14 evidence is Produce-stage work, Organize produces the first-draft oversight design:

  • Who the overseer is.
  • What tools and displays they have.
  • What decisions are HITL.
  • What runbooks are prepared.
  • How the overseer is trained.

The Model and Produce stages (Article 37) will firm these up; Organize establishes the spine.
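A first-draft oversight design often reduces to a small number of synchronous HITL gates. The sketch below shows one such gate for money movement, echoing the $500 threshold from the ADR examples above; the function name and the `approve` callback are placeholders, not a real API.

```python
# Hedged sketch of a synchronous HITL gate: amounts at or above the
# threshold block until a human approver decides. The threshold value
# is illustrative, taken from the ADR example in this article.
HITL_THRESHOLD = 500.0

def execute_payment(amount: float, approve) -> str:
    """Route amounts at or above the threshold to a human approver first.

    `approve` stands in for whatever approval channel the overseer uses
    (queue, dashboard, pager); it returns True to allow the action.
    """
    if amount >= HITL_THRESHOLD:
        if not approve(amount):        # synchronous human decision
            return "rejected-by-overseer"
    return f"executed:{amount:.2f}"
```

The Organize-stage draft names who holds the `approve` role and what display they decide from; Model and Produce (Article 37) supply the runbooks and training behind it.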

Organize contribution 5 — Budget and SLO commitments

Cost (Article 19) and SLO (Article 18) commitments surface in Organize. The architect partners with finance and ops to declare:

  • Target cost per interaction.
  • Latency SLO + availability SLO.
  • Evaluation-quality SLO thresholds.
  • Operating-cost ceiling.

Organize gate-review conversation

Typical Organize gate agenda:

  1. Reference-architecture readout (architect).
  2. ADR walkthrough of the high-impact decisions (architect).
  3. Platform-services contract (architect + COE or platform lead).
  4. Data-flow document and DPIA reference (architect + DPO).
  5. Preliminary evaluation plan (architect + evaluation lead).
  6. SLO + cost commitments (architect + SRE + finance).
  7. Go/no-go with named deliverables for Model.

Bridge from business intake to Organize-exit architecture

The transformation from “business sponsor asks for an agent that does X” to “committed reference architecture with ADRs and platform-services contract” is the architect’s value-add in Calibrate + Organize. The architect’s discipline turns vague intent into specific, reviewable commitments.

Common Calibrate and Organize challenges and the architect’s response

Challenge 1 — “We want to match the competitor’s agent.” Business sponsors sometimes arrive at Calibrate with a competitor’s feature list as the requirements document. The architect’s job is to surface the regulatory and blast-radius delta: the competitor’s agent may be in a different regulatory regime, may have been built with different data rights, or may be carrying risks not visible externally. Matching feature sets is not a plan; matching outcomes within the organisation’s constraints is.

Challenge 2 — “The vendor said it would take six weeks.” Vendor timelines assume generic deployment; your deployment is not generic. The architect’s Calibrate output is the first check on vendor timelines — if the conformity assessment alone takes eight weeks, the six-week total is fictional.

Challenge 3 — “Can we skip Organize?” Teams with Model-stage build impulses sometimes ask to compress or skip Organize. The architect pushes back: Organize is where ADRs, platform-services contracts, and data-flow documents live. Skipping Organize turns these into retroactive documentation, which is always weaker and often unusable as Article 14 evidence.

Challenge 4 — “Does the architect have to attend every gate?” Yes for the agent’s first pass; for agents built on well-understood patterns the architect can delegate to a product-embedded architect (Article 35) with periodic COE architect reviews. The senior architect’s time is reserved for novel patterns, cross-portfolio decisions, and high-stakes use cases.

Challenge 5 — “Our frameworks are still fluid; can we commit anyway?” Yes. Organize commitments can include “revisit at Produce” clauses on fast-moving choices (model version; some tools). But autonomy envelope, HITL design, and regulatory classification commit at Organize and do not move without a gate reopen.

Real-world references

CIO.com agentic-deployment case studies. CIO.com publishes regular case studies on enterprise agentic deployments including the Calibrate-stage analysis and Organize-stage decisions (framework choice, platform approach, operating model). These case studies are useful benchmarks for the architect’s readouts.

Gartner Magic Quadrant AI-agent platform materials (where available). Cited as an industry reference rather than methodological authority; useful at Organize for framework/runtime benchmarking.

Public Amazon agentic-architecture discussions (re:Invent talks). AWS re:Invent sessions on Bedrock Agents, Amazon Q, and agentic architectures walk through architecture decisions similar to those the architect makes at Organize.

Anti-patterns to reject

  • “We’ll figure out autonomy scope during Produce.” Autonomy scope is the architect’s primary Calibrate contribution; deferring it breaks the gate.
  • “Blast radius is a security concern.” Blast radius is an architectural concern; security partners on it but the architect owns it.
  • “Reference architecture is a Model-stage artefact.” A reference architecture at Model is late; Organize is where it’s committed.
  • “Platform services will emerge.” They will not emerge without a commitment made at Organize.
  • “Budget and SLO come from SRE.” They partner; the architect owns the commitment at the gate.

Learning outcomes

  • Explain the architect’s four Calibrate contributions and five Organize contributions, and the artefacts each produces.
  • Classify four architect inputs (autonomy-scope proposal, reference architecture, ADRs, platform-services commitments) by the gate at which they are produced.
  • Evaluate a Calibrate submission for autonomy-scope clarity, blast-radius adequacy, and Article 5 screen completeness.
  • Design the architect’s Calibrate and Organize readouts for a given agentic initiative, including the specific artefacts, gate-conversation structure, and commitments.

Further reading

  • Core Stream anchors: EATF-Level-1/M1.2-Art01-Calibrate-Establishing-the-Baseline.md; EATF-Level-1/M1.2-Art02-Organize-Building-the-Transformation-Engine.md.
  • AITE-ATS siblings: Article 2 (autonomy), Article 9 (kill-switch), Article 20 (platform), Article 37 (Model + Produce), Article 38 (Evaluate + Learn).
  • Primary sources: CIO.com enterprise agentic deployments coverage; Gartner AI-agent-platform materials; AWS re:Invent Bedrock Agents sessions (2023–2024).