AI-II: Insurance AI Must Be Explainable

Insight 03: Why explainability in AI is not only a reasonable expectation, but a necessary one

May 13, 2026
5 MIN READ

Foreword

To further AI understanding and adoption in the insurance industry, Lazarus is producing a series of articles titled Artificial Intelligence – Insights for Insurance, or “AI-II.” The viewpoints expressed in this insight come from a combination of Lazarus’ expertise in large language models (LLMs) and our hands-on experience.

Introduction

For insurance industry veterans, the premise of this article may seem unremarkable, even expected. However, there remains a persistent narrative that AI must be treated as a “black box,” where outcomes cannot be meaningfully explained. This perception may be one reason some insurers have not yet pursued AI solutions. In this Insight, we will establish why explainability is a reasonable expectation and how the right technology partner can support both understanding of AI and compliance with its regulation.

Explainability Explained

In the past few years, AI has moved to the forefront of insurance industry conversations. The discourse has been generally positive, with some insurers already ideating, testing, and implementing AI tools with good results. This Insight addresses how CEOs, board members, and other business leaders should think about the expectation of explainability in AI.

Positioning “explainability” as the end goal places responsibility on those building and deploying AI systems (e.g., technologists, data scientists, and solution providers) to ensure outputs can be understood and validated. By contrast, positioning “transparency” as the primary goal can imply that end users must interpret complex underlying mechanics themselves.

Explainability allows users to play their roles and data scientists to play theirs.


Leadership Expectations for Explainability 

A common anti-pattern in AI adoption is the assertion that “AI is a black box,” meaning that outcomes from AI models cannot be explained and that expecting explanation is unreasonable.

In recent years, some solution providers have defaulted to this framing, suggesting that AI outputs are too complex to be understood outside of technical domains. In practice, this erodes trust and creates unnecessary barriers to adoption.

A more useful framing can be drawn from how human decisions are evaluated.

The human brain is the most complex system we know. It contains billions of neurons, and the full mechanics of human decision-making are still not completely understood. Yet in insurance, and in any regulated environment for that matter, this complexity does not exempt decisions from explanation.

When a decision is made, it must be explained in a way that stakeholders can understand. It would be unreasonable to argue that a decision cannot be explained because of neurological complexity. It would be equally unreasonable to require stakeholders to become neuroscientists in order to interpret that decision. Further, regulators, executives, and stakeholders do not require visibility into every internal computation. They require accountability and a clear, defensible explanation of outcomes and decisions.

The same standard can be applied to AI: AI outputs must be explainable to non-technical stakeholders.

Meeting this standard requires clear role definition for both insurers and technology partners. Insurers are responsible for evaluating and acting on outcomes, while technology partners are responsible for ensuring those outcomes can be explained, validated, and governed. This replaces two unproductive extremes—complete opacity and unrealistic expectations of user-driven interpretation—with a pragmatic model grounded in accountability.
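
To make the division of responsibility concrete, consider a minimal sketch in Python. This is purely illustrative; the class and field names are hypothetical and do not describe Lazarus’ AIE or any specific product. The idea is an output that is explainable by construction: the decision never travels without a plain-language rationale and the evidence behind it.

    from dataclasses import dataclass, field

    @dataclass
    class ExplainedDecision:
        """An AI outcome packaged with what a non-technical reviewer needs."""
        outcome: str          # the decision itself, e.g. "claim approved"
        rationale: str        # plain-language reasoning a stakeholder can read
        evidence: list[str] = field(default_factory=list)  # source passages the rationale cites
        model_version: str = "unknown"                      # supports governance and auditability

    decision = ExplainedDecision(
        outcome="claim approved",
        rationale="The policy covers water damage, and the adjuster's report confirms a burst pipe.",
        evidence=["Policy section 4.2: water damage from plumbing failures is covered."],
        model_version="2026-05-01",
    )

The point is structural: the insurer reviews the outcome, rationale, and evidence; the technology partner is accountable for ensuring those fields are always populated and accurate.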

To be fair to technology solution providers, not all have taken the view that AI outcomes must be opaque. Legitimate AI product firms recognize the importance of explainability and actively design for it, including alignment with regulatory expectations. The fact that explainability is difficult is no reason to hold AI to a lower standard. Insurers should explicitly evaluate how technology partners approach explainability, including how explanations are generated, validated, and maintained over time. If that alignment is not present, it is a signal to reassess the partnership.
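
An insurer could then test such a guarantee mechanically. The check below is again a hypothetical sketch, assuming the ExplainedDecision structure shown earlier: it verifies that no decision ships without a rationale and that every piece of cited evidence actually appears in the source document.

    def validate_explanation(decision: ExplainedDecision, source_text: str) -> list[str]:
        """Return a list of problems; an empty list means the explanation passes."""
        problems = []
        if not decision.rationale.strip():
            problems.append("decision has no rationale")
        if not decision.evidence:
            problems.append("decision cites no evidence")
        for snippet in decision.evidence:
            if snippet not in source_text:
                problems.append(f"cited evidence not found in source: {snippet!r}")
        return problems

Checks of this kind are one way “validated and maintained over time” becomes an operational practice rather than a slogan.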

Summary

Explainability is a foundational requirement for AI adoption in the insurance industry.

As regulatory frameworks continue to evolve, insurers should establish clear expectations for explainability in AI systems, hold technology partners accountable for delivering interpretable, defensible outputs, and ensure that AI-driven decisions can be understood and validated by non-technical stakeholders.

AI does not need to be a black box. Treating it as one is a choice, not a constraint.

About Lazarus AI

Lazarus AI develops enterprise-grade AI systems for the insurance industry, public sector, and beyond. Our Applied Intelligence Engine (AIE) enables organizations to eliminate their processing bottlenecks and provides rapid time to value, allowing our customers to compete more effectively with reduced cost, lower risk, and greater speed.