Human-on-the-Loop
Human-on-the-loop (HOTL) is an AI system design pattern in which the AI makes and executes decisions autonomously while a human monitors the process through dashboards and alerts, intervening to override, pause, or adjust the system when anomalies or problems are detected.
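The pattern can be illustrated with a minimal sketch. The class, threshold, and method names below are hypothetical, not part of COMPEL; the point is the control flow: the AI acts first, alerts surface to a human monitor afterwards, and the human retains pause and override hooks.

```python
from dataclasses import dataclass, field

@dataclass
class HOTLController:
    """Minimal human-on-the-loop sketch: the AI executes decisions
    autonomously; a human watches alerts and may pause or override."""
    anomaly_threshold: float = 0.8      # illustrative; tune to the risk profile
    paused: bool = False
    alerts: list = field(default_factory=list)
    decisions: list = field(default_factory=list)

    def decide(self, case_id: str, score: float) -> str:
        # The AI acts on its own unless a human has paused the system.
        if self.paused:
            return "held-for-human"
        outcome = "approve" if score < self.anomaly_threshold else "flag"
        self.decisions.append((case_id, outcome))
        if score >= self.anomaly_threshold:
            # Raise a dashboard alert; note the decision has already
            # executed -- the human reviews after the fact, not before.
            self.alerts.append(case_id)
        return outcome

    # Human intervention hooks
    def pause(self) -> None:
        self.paused = True

    def override(self, case_id: str, new_outcome: str) -> None:
        self.decisions = [(c, new_outcome if c == case_id else o)
                          for c, o in self.decisions]
```

The contrast with human-in-the-loop is visible in `decide`: no approval gate sits before the action, only monitoring and correction after it.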
What this means in practice
In practice, HOTL shifts the human from approver to monitor: the system acts first, and the human corrects after the fact. For organizations, adopting it requires investment in effective monitoring interfaces, clear intervention protocols, and training so that monitors can recognize when intervention is needed.
Why it matters
Human-on-the-loop provides a middle ground between maximum oversight and full autonomy, suitable for medium-risk applications where speed matters but human correction must remain possible. This design pattern enables AI to operate autonomously for routine decisions while keeping humans engaged through dashboards and alerts to catch anomalies. Organizations gain efficiency from automation while maintaining the ability to intervene when problems arise.
How COMPEL uses it
HOTL is the intermediate position on the autonomy spectrum detailed in Module 3.4, Article 11. During the Model stage, HOTL is selected for AI applications whose risk profile warrants monitoring but not pre-approval. The Produce stage implements the monitoring interfaces and intervention protocols. The Evaluate stage measures monitor response times and intervention accuracy to verify that the oversight pattern is functioning as designed.
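The two Evaluate-stage measurements named above can be computed from an intervention log. The function and record schema here are assumptions for illustration, not a COMPEL-specified interface.

```python
def evaluate_oversight(interventions):
    """Summarize oversight health from an intervention log.

    interventions: list of (response_seconds, was_correct) pairs, where
    response_seconds is the delay between alert and human action, and
    was_correct records whether the intervention was later judged
    appropriate. Schema is illustrative.
    """
    if not interventions:
        return {"mean_response_s": None, "intervention_accuracy": None}
    times = [t for t, _ in interventions]
    correct = [ok for _, ok in interventions]
    return {
        "mean_response_s": sum(times) / len(times),
        "intervention_accuracy": sum(correct) / len(correct),
    }
```

For example, a log of one 30-second correct intervention and one 90-second incorrect one yields a mean response of 60 seconds and 50% accuracy, a signal that monitor training or alert design may need revisiting.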
Related Terms
Other glossary terms mentioned in this entry's definition and context.