COMPEL Glossary / llm-risk-surface

LLM risk surface

The union of six interacting layers (input, model, output, retrieval, tool, and data) to which governance controls must be applied in any LLM-based feature.

What this means in practice

The LLM risk surface is distinct from the classical application attack surface because generation, context injection, and tool use expand the set of possible failure modes.

Synonyms

LLM attack surface, LLM failure surface, generative-AI risk surface

See also

  • Excessive agency — A failure mode in which an LLM has been wired into tools and permissions whose blast radius exceeds what its supervision and validation logic can safely bound.
  • System prompt leakage — Extraction of an LLM feature's hidden system prompt and structural instructions through crafted user input.
  • Guardrail — A control placed between the user or environment and an LLM that blocks, rewrites, or classifies content at one of four architectural layers: input filter, policy filter, output filter, or tool-call validator.
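The four guardrail layers named above can be sketched as a simple pipeline. This is a minimal illustration with hypothetical names (`input_filter`, `policy_filter`, `output_filter`, `tool_call_validator`, `run`) and toy deny/allow lists, not an implementation prescribed by the glossary:

```python
# Hypothetical sketch: a guardrail applied at the four architectural
# layers named above -- input filter, policy filter, output filter,
# and tool-call validator.

BLOCKED_INPUT_TERMS = {"ignore previous instructions"}  # toy denylist
ALLOWED_TOOLS = {"search", "calculator"}                # toy allowlist

def input_filter(user_text: str) -> str:
    """Block or rewrite user input before it reaches the model."""
    if any(term in user_text.lower() for term in BLOCKED_INPUT_TERMS):
        raise ValueError("input blocked: injection pattern detected")
    return user_text

def policy_filter(prompt: str) -> str:
    """Enforce policy instructions around the assembled prompt."""
    return "Follow the usage policy.\n" + prompt

def output_filter(model_text: str) -> str:
    """Classify or redact model output before it reaches the user."""
    return model_text.replace("SECRET", "[redacted]")

def tool_call_validator(tool_name: str) -> str:
    """Reject tool calls outside the allowlist, bounding excessive agency."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool call blocked: {tool_name}")
    return tool_name

def run(user_text: str, model) -> str:
    """Apply input, policy, and output guardrails around a model call."""
    prompt = policy_filter(input_filter(user_text))
    return output_filter(model(prompt))
```

Each stage maps to one layer of the risk surface: the input and policy filters cover the input layer, the output filter covers the output layer, and the tool-call validator covers the tool layer.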
