COMPEL Glossary / token

Token

A token is the basic unit of text that a language model processes, roughly corresponding to a word or word fragment (typically 3-4 characters in English).
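The characters-per-token figure suggests a quick back-of-the-envelope estimate. The sketch below uses a 4-characters-per-token divisor as a rough rule of thumb for English prose; real tokenizers split text into learned subwords, so actual counts vary by model:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token-count estimate from character length.

    Real tokenizers (BPE, SentencePiece) use learned subword
    vocabularies, so this is only a ballpark for English prose.
    """
    return max(1, round(len(text) / chars_per_token))

# 22 characters -> estimate of 6, close to the ~7 tokens a
# typical tokenizer produces for this sentence.
print(estimate_tokens("The cat sat on the mat"))
```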

What this means in practice

The sentence 'The cat sat on the mat' contains approximately 7 tokens. Token counts are important for two practical reasons: they determine costs (LLM providers charge per token for both input and output), and they constrain context windows (models have maximum token limits for how much text they can process at once). For enterprise AI budgeting, understanding token economics is essential: a customer service agent processing 10,000 conversations per day, each averaging 2,000 tokens, generates 20 million tokens daily. At typical commercial rates, this translates to meaningful operational costs that must be factored into business cases.
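The budgeting arithmetic above can be sketched directly. The per-million-token price used here is an illustrative placeholder, not a quote from any provider:

```python
def daily_token_cost(conversations_per_day: int,
                     tokens_per_conversation: int,
                     price_per_million_tokens: float) -> tuple[int, float]:
    """Return (daily token volume, daily cost in dollars)."""
    tokens = conversations_per_day * tokens_per_conversation
    cost = tokens / 1_000_000 * price_per_million_tokens
    return tokens, cost

# Figures from the example above; $5.00 per million tokens is an
# assumed blended input/output rate for illustration only.
tokens, cost = daily_token_cost(10_000, 2_000, 5.00)
print(f"{tokens:,} tokens/day -> ${cost:,.2f}/day")
```

Multiplying the daily figure out to a month or a year is usually what surfaces the real budget impact in a business case.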

Why it matters

Token counts directly determine both the costs and the capability constraints of language model applications. As the customer service example above shows, volumes scale quickly at enterprise levels, and organizations that fail to model token consumption accurately face budget overruns that undermine their business cases.

How COMPEL uses it

Token economics factor into business case development during the Model stage and are monitored as part of AI FinOps during the Evaluate stage. The Technology pillar assesses token consumption patterns as part of infrastructure planning in Module 3.3. COMPEL's cost modeling guidance in Module 2.5 requires realistic token consumption estimates for every generative AI use case in the portfolio.
