Lazarus AI

AI-II: Designing the Right Balance Between Humans and Systems

Insight 06: Examining the role of human-in-the-loop (HITL) and system-in-the-loop (SITL) in AI adoption

May 13, 2026

Foreword

To further AI understanding and adoption in the insurance industry, Lazarus AI is producing a series of articles titled Artificial Intelligence – Insights for Insurance (“AI-II”). This article examines the role of human-in-the-loop (HITL) and introduces a complementary concept, system-in-the-loop (SITL), to better frame how AI should be integrated into real-world processes.

This perspective is grounded in practical experience across enterprise deployments. Future articles will explore implementation in greater depth.

Introduction

AI has triggered both excitement and concern across industries. Given its potential impact, one might expect that defining the balance between human judgment and automated systems would be the first step in designing AI solutions.

In practice, it often is not.

Instead, many implementations focus first on capability and only later consider where human oversight should exist. This sequence creates unnecessary risk and weakens the overall system design.

Throughout history, technology has challenged assumptions about what only humans can do. The printing press redefined authorship, calculators reshaped computation, and AI continues this pattern, just at a much broader scale. As capability expands, discomfort follows, and with it, a tendency to avoid clearly defining the role of humans in the system.

HITL and SITL: A Better Framing

The term HITL is widely used to describe scenarios where humans intervene in AI-driven processes. It is often interpreted as a safeguard, introduced after automation has been designed.

A more useful framing is SITL: system-in-the-loop. This perspective starts from the assumption that humans own the process and deliberately determine where and how AI should be applied. The system is inserted into a human-driven workflow, not the other way around.

The distinction is subtle but important. HITL can imply that human involvement is reactive. SITL emphasizes intentional design.

Regardless of terminology, the underlying principle remains the same: the interaction between human judgment and system capability must be defined upfront.

Not a Limitation, but a Requirement

There is a persistent misconception that requiring human involvement signals an incomplete or less capable AI system. That view is rooted in earlier digital transformation efforts where reducing human interaction was often the goal.

AI changes that equation.

In AI-driven systems, human and system interaction is not a flaw—it is a design requirement. Stronger AI capabilities increase, rather than eliminate, the need for thoughtful integration. The question is not whether humans should be involved, but where their involvement creates the most value.

Consider claims processing. Many tasks can be automated with high accuracy, but certain moments (such as beneficiary communication) benefit from human engagement regardless of technical capability. The optimal design is not fully automated or fully manual, but a deliberate combination of both.

Designing the Right Balance

The balance between human and system involvement will vary by process.

High-risk or sensitive workflows often require greater human oversight. Lower-risk, high-volume tasks may rely more heavily on automation. In all cases, the goal is not to eliminate one side of the equation, but to define how both contribute.
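To make this concrete, the risk-based balance above can be sketched as a simple routing rule: task sensitivity and model confidence together determine whether a step runs autonomously, is drafted by the system and approved by a person, or stays human-led with the system assisting. This is an illustrative sketch only, not a description of any particular product; the class names, thresholds, and scoring fields are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    AUTOMATED = "system handles the step end to end"
    HUMAN_REVIEW = "system drafts, human approves"
    HUMAN_LED = "human owns the step, system assists"

@dataclass
class Task:
    risk: float           # 0.0 (routine, low-stakes) to 1.0 (highly sensitive)
    ai_confidence: float  # model's confidence in its own output, 0.0 to 1.0

def route(task: Task,
          risk_threshold: float = 0.7,
          conf_threshold: float = 0.9) -> Route:
    """Place the system inside a human-owned workflow (SITL):
    sufficiently sensitive steps stay human-led no matter how
    confident the model is, by design rather than as a fallback."""
    if task.risk >= risk_threshold:
        return Route.HUMAN_LED
    if task.ai_confidence >= conf_threshold:
        return Route.AUTOMATED
    return Route.HUMAN_REVIEW
```

Note the ordering of the checks: the risk test comes first, so human ownership of sensitive steps is a deliberate design decision that automation quality cannot override, which is the distinction the SITL framing emphasizes.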

Another factor is the quality of the underlying process. Applying AI to a poorly designed workflow will not produce optimal results. In those cases, higher levels of human involvement may be necessary until the process itself is improved.

Ultimately, the broader pattern is clear. Every future process will include both a system component and a human component. The proportion of their respective contributions will change, but the combination will remain.

Summary

HITL and SITL are not competing ideas. They are different ways of describing the same requirement: AI systems must be designed with a clear understanding of how humans and technology interact.

Organizations that treat this balance as an afterthought will introduce unnecessary risk. Those that define it deliberately will build systems that are more reliable, more trusted, and more effective in production.

About Lazarus AI

Lazarus AI develops enterprise-grade AI systems for the insurance industry, public sector, and beyond. Our Applied Intelligence Engine (AIE) enables organizations to eliminate their processing bottlenecks and provides rapid time to value, allowing our customers to compete more effectively with reduced cost, lower risk, and greater speed.