

Feedback Loop

A feedback loop in AI occurs when an AI system's outputs influence its future inputs, creating a self-reinforcing cycle that can either improve or degrade performance over time.

What this means in practice

Positive feedback loops can amplify biases (for example, a hiring model that favors candidates from historically privileged backgrounds produces training data that further reinforces this bias) or create filter bubbles (a recommendation system that progressively narrows content based on past behavior). Because each round of outputs becomes part of the next round's inputs, small initial skews compound rather than average out.

Why it matters

Unmanaged feedback loops are one of the most insidious risks of deployed AI because the problems they cause grow gradually and may not be detected until significant harm has occurred. A hiring model that favors certain candidates produces training data that reinforces that preference; a recommendation engine that narrows content based on past behavior creates filter bubbles. In both cases the self-reinforcing cycle amplifies bias and degrades outcomes over time.
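The filter-bubble dynamic above can be sketched as a small simulation (the category names, reinforcement rate, and round count are hypothetical, not part of COMPEL): each recommendation is fed straight back in as training signal, so an initially uniform system drifts toward a few dominant categories.

```python
import random
from collections import Counter

random.seed(0)

# Five content categories start with equal weight (hypothetical setup).
weights = {c: 1.0 for c in ["news", "sports", "music", "tech", "art"]}

def recommend():
    """Sample a category in proportion to its current weight."""
    cats, w = zip(*weights.items())
    return random.choices(cats, weights=w, k=1)[0]

shown = Counter()
for _ in range(1000):
    cat = recommend()
    shown[cat] += 1
    # The feedback loop: the shown item becomes training data,
    # so the system's output influences its future input.
    weights[cat] += 0.5

top, count = shown.most_common(1)[0]
print(f"dominant category: {top} ({count / 1000:.0%} of recommendations)")
```

Rerunning with the reinforcement line removed keeps the distribution near-uniform, which is one way to see that the skew comes from the loop itself rather than from the initial weights.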

How COMPEL uses it

Feedback loop identification and management are addressed in the risk assessment framework during Calibrate, where existing AI systems are evaluated for feedback loop risks. During Model, monitoring infrastructure for feedback loop detection is designed within the Technology pillar. The Produce stage implements feedback loop detection mechanisms, and the Evaluate stage assesses whether deployed systems exhibit problematic reinforcement patterns.
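One simple signal the monitoring infrastructure could track (a hedged sketch of the general idea, not COMPEL's prescribed mechanism) is the Shannon entropy of the system's output distribution: a sustained entropy drop between an early window and a recent window suggests outputs are narrowing, a hallmark of a self-reinforcing loop.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def narrowing_alert(early_outputs, recent_outputs, drop_threshold=0.5):
    """Flag a possible feedback loop if output diversity fell sharply."""
    drop = entropy(Counter(early_outputs)) - entropy(Counter(recent_outputs))
    return drop > drop_threshold

early = ["a", "b", "c", "d"] * 25      # diverse early outputs (2.0 bits)
recent = ["a"] * 90 + ["b"] * 10       # collapsed recent outputs (~0.47 bits)
print(narrowing_alert(early, recent))  # → True
```

The threshold is an illustrative placeholder; in practice it would be calibrated per system during the monitoring design work described above, alongside other signals such as outcome drift across subgroups.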
