Observability for AI
The set of telemetry — prompt/response capture, retrieval traces, tool-call records, token-cost metrics, and evaluation signals — that makes an AI system operable and auditable.
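The signal types listed above can be sketched as a single telemetry record. This is a minimal illustration, not part of COMPEL; all field and class names (`AITelemetryRecord`, `retrieval_trace`, `eval_scores`, etc.) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AITelemetryRecord:
    """One AI request's telemetry, combining the signals named in the definition."""
    prompt: str                    # prompt capture
    response: str                  # response capture
    retrieval_trace: list[str]     # IDs of retrieved documents
    tool_calls: list[dict]         # tool-call records: name + arguments
    prompt_tokens: int             # token-cost metrics
    completion_tokens: int
    cost_usd: float
    eval_scores: dict[str, float] = field(default_factory=dict)  # evaluation signals

rec = AITelemetryRecord(
    prompt="Summarize the quarterly report.",
    response="Revenue grew 12%...",
    retrieval_trace=["doc-17", "doc-42"],
    tool_calls=[{"name": "search", "args": {"q": "quarterly report"}}],
    prompt_tokens=850,
    completion_tokens=120,
    cost_usd=0.0031,
)
```

Note that, unlike a classical access-log line, most of this record is semantic content rather than structural metadata.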
What this means in practice
Distinct from classical application observability because it must capture semantic content (not just structural events) and apply privacy-aware redaction to that content.
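A privacy-aware redaction pass might look like the following sketch: mask common PII patterns in prompts and responses before they reach the trace store. The patterns and placeholder tokens here are illustrative assumptions, not a COMPEL requirement.

```python
import re

# Illustrative PII patterns; a production system would use a broader set.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Mask PII so semantic content (prompts/responses) can be logged safely."""
    text = EMAIL.sub("[EMAIL]", text)
    return SSN.sub("[SSN]", text)

print(redact("Contact alice@example.com about case 123-45-6789."))
# Contact [EMAIL] about case [SSN].
```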
Synonyms
AI observability, GenAI observability, LLM observability
See also
- AI trace — A span hierarchy — client → orchestration → retrieval → model → tool — capturing a single AI request end-to-end, including prompts, responses, tool calls, and token usage.
- SLI/SLO for AI — Service-level indicators and objectives for AI systems — including evaluation score, per-task cost, and goal-achievement rate alongside classical availability/latency.
- Evaluation harness — The infrastructure that runs capability, regression, safety, and human-review evaluations on an LLM feature on a defined cadence.
- Agent observability — The logging, tracing, and evaluation infrastructure that makes an agent's plans, tool calls, memory reads/writes, and decisions auditable after the fact.
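The client → orchestration → retrieval → model → tool span hierarchy described under "AI trace" can be sketched as a small tree of spans. The `Span` structure and its attributes are hypothetical, standing in for whatever tracing library a deployment actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """A node in an AI trace: a named operation with attributes and child spans."""
    name: str
    attrs: dict = field(default_factory=dict)
    children: list["Span"] = field(default_factory=list)

    def child(self, child_name: str, **attrs) -> "Span":
        s = Span(child_name, attrs)
        self.children.append(s)
        return s

# Build the end-to-end hierarchy for a single AI request.
root = Span("client")
orch = root.child("orchestration")
orch.child("retrieval", docs=["doc-17", "doc-42"])
orch.child("model", prompt_tokens=850, completion_tokens=120)
orch.child("tool", tool_name="search")

def total_tokens(span: Span) -> int:
    """Roll token usage up the trace, as a cost SLI might."""
    return (span.attrs.get("prompt_tokens", 0)
            + span.attrs.get("completion_tokens", 0)
            + sum(total_tokens(c) for c in span.children))

print(total_tokens(root))  # 970
```

Aggregating attributes over such a tree is also how per-task cost and other AI SLIs are typically computed from raw traces.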