
Edge Computing

Edge computing is the practice of processing data near its source (at the 'edge' of the network) rather than sending all data to a centralized cloud data center, enabling low-latency AI inference, reduced bandwidth consumption, and operation in environments with limited or intermittent connectivity.
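The pattern in the definition above can be made concrete with a minimal sketch. All names here are illustrative, not part of COMPEL or any specific platform: raw sensor frames are scored by a stand-in on-device model, and only compact results are sent upstream, so bandwidth stays low and a brief connectivity outage buffers results locally instead of stalling processing.

```python
# Hypothetical edge inference loop (illustrative names, not a real API):
# process frames locally, upload only compact results, and buffer results
# whenever the upstream link is temporarily unavailable.
from collections import deque

def classify_locally(frame):
    # Stand-in for an on-device model; here, a trivial threshold rule.
    return "defect" if frame["pixel_variance"] > 0.8 else "ok"

def run_edge_loop(frames, upload):
    backlog = deque()  # results buffered while the link is down
    for frame in frames:
        result = {"id": frame["id"], "label": classify_locally(frame)}
        backlog.append(result)
        while backlog:
            if not upload(backlog[0]):   # upload returns False when offline
                break                    # keep buffering; retry on next frame
            backlog.popleft()
    return list(backlog)                 # anything still unsent at the end
```

Note that only the small `result` dictionaries ever leave the device; the raw frames are processed and discarded at the edge, which is what drives the bandwidth and sovereignty benefits described above.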

What this means in practice

Common AI edge applications include real-time manufacturing quality inspection, autonomous vehicle decision-making, smart building management, and medical-device AI. For organizations, edge computing also adds governance complexity: models deployed to edge devices are harder to update, monitor, and audit than centrally hosted models.

Why it matters

Edge computing enables AI capabilities in environments where cloud connectivity is limited, latency requirements are stringent, or data sovereignty rules prohibit centralized processing. Manufacturing quality inspection, autonomous vehicles, and medical devices all require real-time AI inference that cloud processing cannot deliver. However, edge deployment introduces governance complexity because distributed models are harder to update, monitor, and audit.

How COMPEL uses it

Edge computing is assessed as part of the Technology pillar during Calibrate, evaluating the organization's ability to deploy and govern distributed AI. During Model, edge architecture decisions are made based on latency requirements and regulatory constraints using patterns from Module 3.3. Industry-specific edge AI applications are discussed in Module 2.6, providing sector-specific guidance for healthcare, manufacturing, and retail deployments.
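The Module 3.3 patterns themselves are not reproduced in this entry, but the kind of architecture decision described above can be sketched as a simple placement rule. This is a hypothetical example under stated assumptions, not COMPEL's actual decision logic: sovereignty constraints force edge placement, tight latency budgets or unreliable links favor it, and everything else defaults to cloud, where updating, monitoring, and auditing are simpler.

```python
# Hypothetical edge-vs-cloud placement rule (illustrative, not COMPEL's
# Module 3.3 patterns). Inputs: the application's latency budget in ms,
# whether data sovereignty rules require data to stay on site, and whether
# the network link to the cloud is reliable.
def choose_placement(latency_budget_ms, data_must_stay_onsite, link_reliable):
    if data_must_stay_onsite:
        return "edge"    # sovereignty constraint overrides everything else
    if latency_budget_ms < 100:
        return "edge"    # assumed cloud round trip exceeds the budget
    if not link_reliable:
        return "edge"    # intermittent connectivity rules out cloud inference
    return "cloud"       # centralized hosting: easier to update and audit
```

The 100 ms threshold is an assumed placeholder; in practice the cutoff would come from the measured cloud round-trip time for the specific deployment.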
