

Edge Deployment

Edge deployment is the practice of running AI models on devices located close to where data is generated (factory equipment, IoT sensors, retail stores, or branch offices) rather than in a centralized cloud.

What this means in practice

Edge deployment reduces latency (enabling real-time inference where network delays would be unacceptable), cuts bandwidth costs (data is processed locally rather than transmitted), and allows operation without constant internet connectivity. Manufacturing quality inspection, autonomous-vehicle decision-making, and retail shelf monitoring are common edge AI applications.
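The latency, bandwidth, and connectivity points above can be sketched as a minimal edge inference loop. This is an illustrative sketch only: the model stand-in, function names, and message format are hypothetical, and a real deployment would invoke an actual on-device runtime.

```python
import json

def run_model(reading: float) -> str:
    """Stand-in for an on-device model. A real edge deployment would
    call a local inference runtime rather than this threshold rule."""
    return "defect" if reading > 0.8 else "ok"

def edge_loop(readings, connected: bool):
    """Process every sensor reading locally; transmit only flagged results.

    Local inference keeps latency bounded by on-device compute (no network
    round trip per decision), and only small summaries, not raw sensor
    streams, consume bandwidth. When no link is available, the device
    keeps operating and alerts stay queued for the next reconnect.
    """
    alerts = []
    for reading in readings:
        label = run_model(reading)  # real-time decision, no network hop
        if label == "defect":
            alerts.append({"reading": reading, "label": label})
    if connected and alerts:
        return json.dumps(alerts)   # upload compact summary, not raw data
    return None                     # offline or nothing to report
```

The same loop runs whether or not the device is connected, which is what makes intermittent-connectivity operation possible; only the upload step depends on the link.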

Why it matters

Running AI models on devices close to data sources reduces latency, cuts bandwidth costs, and enables operation without constant internet connectivity. However, edge deployment adds governance complexity because models on distributed devices are harder to monitor, update, and audit than centralized cloud models. Organizations must balance these operational benefits against the increased difficulty of maintaining governance rigor across distributed infrastructure.

How COMPEL uses it

The COMPEL maturity model assesses edge deployment capability in Domain 12 (Integration Architecture) at Level 3.5 and above. During the Model stage, edge deployment requirements are designed into the Technology pillar architecture; the Produce stage implements deployment pipelines for edge devices with governance controls; and the Evaluate stage monitors edge model performance and version consistency across distributed infrastructure.
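The version-consistency monitoring mentioned for the Evaluate stage can be illustrated with a minimal sketch. The device IDs, version strings, and majority-as-baseline rule here are assumptions for illustration, not part of the COMPEL specification.

```python
from collections import Counter

def check_version_drift(fleet: dict) -> tuple:
    """Given {device_id: model_version} reports from edge devices,
    flag devices that diverge from the most widely deployed version.

    A centralized cloud model has one running copy; an edge fleet can
    drift after staggered or failed rollouts, so governance needs an
    explicit consistency check like this one.
    """
    counts = Counter(fleet.values())
    expected, _ = counts.most_common(1)[0]  # majority version as baseline
    drifted = {d: v for d, v in fleet.items() if v != expected}
    return expected, drifted

# Example: three inspection cameras report in after a staggered rollout.
fleet = {"cam-01": "1.4.0", "cam-02": "1.4.0", "cam-03": "1.3.2"}
expected, drifted = check_version_drift(fleet)
# cam-03 lags the fleet baseline and would be re-queued for update.
```

In practice the drifted set would feed back into the Produce-stage deployment pipeline so lagging devices are updated and the audit trail records the inconsistency window.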
