

Batch Processing

Batch processing runs AI model predictions on large volumes of data at scheduled intervals rather than in real time.

What this means in practice

For example, a churn prediction model might process the entire customer base overnight and generate risk scores that are available the next morning, or a document classification system might process all invoices received during the day in a single batch run each evening. Batch processing is appropriate when immediate results are not required and when processing efficiency is more important than latency. It typically costs less than real-time processing because compute resources can be provisioned and released on a schedule. In the COMPEL integration architecture assessment (Domain 12), both batch and real-time processing capabilities are evaluated, as a mature enterprise AI portfolio typically requires both patterns.
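The overnight scoring pattern described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the customer records, the `predict_churn_risk` heuristic, and the chunk size are all hypothetical stand-ins for a real model and data source.

```python
# Minimal batch-scoring sketch: score the whole customer base in one
# scheduled run, processing records in fixed-size chunks.
# All names and the scoring heuristic here are illustrative assumptions.

def predict_churn_risk(customer):
    # Stand-in for a real model call; a toy heuristic based on inactivity.
    return min(1.0, customer["days_inactive"] / 90)

def run_batch(customers, chunk_size=100):
    """Score every record in chunks and return a dict of id -> risk score."""
    scores = {}
    for start in range(0, len(customers), chunk_size):
        for record in customers[start:start + chunk_size]:
            scores[record["id"]] = predict_churn_risk(record)
    return scores

customers = [
    {"id": "c1", "days_inactive": 90},
    {"id": "c2", "days_inactive": 9},
    {"id": "c3", "days_inactive": 45},
]
risk_scores = run_batch(customers, chunk_size=2)
```

In a real deployment, `run_batch` would read from and write to a data store, and a scheduler (such as cron or a workflow orchestrator) would trigger it overnight so the scores are ready the next morning.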

Why it matters

Batch processing enables organizations to run AI predictions on large data volumes at scheduled intervals, optimizing for cost efficiency when immediate results are not required. Organizations that understand the trade-offs between batch and real-time processing can design AI architectures that balance performance requirements with infrastructure costs. A mature AI portfolio typically requires both patterns for different use cases.

How COMPEL uses it

Both batch and real-time processing capabilities are evaluated during the Calibrate stage within the Technology pillar's integration architecture assessment (D12). The Model stage designs processing patterns matched to use case requirements. During Produce, batch pipelines are implemented with scheduling, monitoring, and error recovery. The Evaluate stage measures processing reliability and whether the chosen pattern delivers adequate performance for each business use case.
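The error-recovery aspect mentioned for the Produce stage is commonly handled with retries and backoff around the batch job. The sketch below assumes a generic callable job; the retry counts, delays, and the `flaky_job` example are illustrative, not part of the COMPEL framework itself.

```python
import time

def run_with_retries(job, max_attempts=3, base_delay=0.01):
    """Run a batch job, retrying transient failures with exponential backoff.

    Re-raises the last exception if all attempts fail, so the scheduler
    or monitoring layer can alert on a permanently failed run.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return job()
        except Exception:
            if attempt == max_attempts:
                raise
            # Back off before the next attempt: base_delay, 2x, 4x, ...
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical job that fails twice before succeeding, to exercise the retry path.
calls = {"n": 0}

def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_retries(flaky_job)
```

Logging each attempt and emitting metrics on duration and failure counts would cover the monitoring requirement alongside this recovery logic.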
