COMPEL Glossary / human-in-the-loop
Human-in-the-Loop
Human-in-the-loop (HITL) is an AI system design pattern in which a human must review and approve each AI decision or action before it is executed, providing maximum human oversight at the cost of reduced speed and scalability.
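The pattern can be reduced to a simple gate: the AI proposes, a human disposes, and nothing executes without an explicit approval. The sketch below is a minimal illustration, not a COMPEL artifact; the `Decision` class, `get_human_verdict`, and `execute` callables are hypothetical names chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """An AI-proposed decision awaiting human review."""
    proposal: str
    rationale: str

def hitl_execute(decision, get_human_verdict, execute):
    """Run an AI decision through a human approval gate.

    `get_human_verdict` is a callable that presents the decision to a
    reviewer and returns True (approve) or False (reject); `execute`
    carries out the action. The action runs only on explicit approval.
    """
    if get_human_verdict(decision):
        execute(decision)
        return "executed"
    return "rejected"
```

In a real system the verdict callable would be backed by a review interface with queuing, audit logging, and escalation paths; the gate itself stays this simple.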
What this means in practice
HITL is typically required for high-risk decisions such as medical diagnoses, criminal justice recommendations, financial underwriting, and any application where errors could cause significant harm to individuals. For organizations, adopting HITL means investing in review interfaces, reviewer training, workload management, and quality assurance of the review process itself.
Why it matters
In high-risk domains, human approval before execution provides maximum protection against AI errors that could cause significant harm. That protection is not free: review interfaces, reviewer training, and workload management all require sustained investment. Organizations must also guard reviewer quality, because overwhelmed reviewers who approve everything without genuine scrutiny provide no actual protection.
How COMPEL uses it
HITL is positioned at one end of the autonomy spectrum in Module 3.4, Article 11. During the Model stage, the governance framework calibrates whether HITL is appropriate for each AI application based on risk, regulatory requirements, and organizational readiness. The Produce stage designs review interfaces and reviewer workflows. The Evaluate stage monitors reviewer accuracy and throughput to ensure oversight remains meaningful rather than performative.
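One way the Evaluate-stage monitoring described above could be operationalized is to track approval rates and review times per reviewer: near-universal approval combined with very short reviews is a common signature of rubber-stamping. This is an illustrative sketch, not a COMPEL-specified metric; the function name and the thresholds are assumptions.

```python
from statistics import median

def review_health(reviews):
    """Summarize reviewer behavior from (approved, seconds_spent) records.

    A near-100% approval rate combined with very short review times can
    signal performative review. The 0.98 and 5-second thresholds are
    illustrative placeholders, not calibrated values.
    """
    approval_rate = sum(1 for approved, _ in reviews if approved) / len(reviews)
    med_time = median(seconds for _, seconds in reviews)
    return {
        "approval_rate": approval_rate,
        "median_review_seconds": med_time,
        "rubber_stamp_risk": approval_rate > 0.98 and med_time < 5.0,
    }
```

In practice such metrics would be computed per reviewer and per application, with thresholds set from baseline data rather than hard-coded.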