COMPEL Glossary / parameter
Parameter
A parameter is a learned numerical value within an AI model that is adjusted during training to improve the model's ability to make accurate predictions.
What this means in practice
In a neural network, parameters are the weights assigned to connections between neurons; during training, these weights are iteratively refined to minimize prediction errors. Modern large language models contain enormous numbers of parameters: GPT-3 has 175 billion, GPT-4 is estimated to have over a trillion, and parameter counts continue to grow. Parameter count is a rough indicator of both model capability (larger models generally perform better across a wider range of tasks) and computational cost (more parameters require more compute for training and inference). For transformation leaders, parameter count helps contextualize model costs and infrastructure requirements: serving a 7-billion-parameter model requires very different infrastructure than a 175-billion-parameter model.
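The link between parameter count and infrastructure can be made concrete with back-of-the-envelope arithmetic. The sketch below estimates the memory needed just to hold a model's weights in GPU memory; the 2-bytes-per-parameter figure (16-bit weights) and the 20% overhead factor are illustrative assumptions, not vendor specifications, and real deployments vary with quantization, batch size, and serving stack.

```python
def serving_memory_gb(num_parameters: int,
                      bytes_per_param: int = 2,
                      overhead: float = 1.2) -> float:
    """Rough GPU memory (GB) to hold model weights at 16-bit precision,
    with a flat 20% overhead allowance (an illustrative assumption)."""
    return num_parameters * bytes_per_param * overhead / 1e9

# Compare the two model sizes mentioned above.
for name, params in [("7B model", 7_000_000_000),
                     ("175B model", 175_000_000_000)]:
    print(f"{name}: ~{serving_memory_gb(params):.0f} GB")
```

Under these assumptions, a 7-billion-parameter model fits on a single modern accelerator, while a 175-billion-parameter model requires hundreds of gigabytes spread across many devices, which is the practical reason the two imply very different infrastructure.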
Why it matters
Parameter count strongly influences AI infrastructure costs and deployment requirements, so understanding the scale of a model helps leaders make informed decisions about technology investments. This knowledge also enables more critical evaluation of vendor claims about model capabilities and a realistic assessment of compute budgets.
How COMPEL uses it
During the Model stage, parameter count informs infrastructure planning and cost estimation within the Technology pillar. The Calibrate stage assesses whether existing compute infrastructure can support required model sizes. COMPEL's AI FinOps guidance in Module 3.3 uses parameter-aware cost modeling to build realistic budgets that account for training and inference compute requirements.
Related Terms
Other glossary terms mentioned in this entry's definition and context.