Ben Bomhoff

AI in Insurance: Ethics Drives Outcomes, Not Just Compliance

How ethical design shapes trust, consistency, and long-term adoption of AI.

April 20, 2026
7 MIN READ
A NOTE FROM THE AUTHOR

I was fortunate to participate in a panel at ITI Europe 2026 discussing a topic that is as important as it is often overlooked: Ethics in AI.

This article is intended to document some of the main points we covered in our discussion. 

I want to thank the impressive panel participants.

The group's diverse backgrounds made for a compelling and insightful discussion of the entire Ethics in AI topic.

Our moderator, Paolo Cuomo, helped us cover a remarkable amount of important ground within our allotted 40 minutes.

It was a pleasure to work with this group!

Ben Bomhoff, Head of Insurance Solutions at Lazarus AI

The Conversation Is Shifting

Over 70% of insurers are already deploying AI in at least one business function, yet more than 60% cite governance, risk, and trust as their primary barriers to scaling it (McKinsey, 2024; Deloitte, 2024). At the same time, the global AI in insurance market is projected to exceed $150B by 2034, signaling rapid adoption without equally mature oversight (Fortune Business Insights, 2026).  

Early conversations around AI were dominated by capability: what it could automate, accelerate, and optimize. The focus was on performance, often with less attention paid to how those outcomes were produced.

However, as AI systems move into core insurance workflows, the risks are becoming more visible. Inconsistent decisions, lack of transparency, and unintended bias are no longer edge cases, but operational realities that directly impact customers and business outcomes—hence the importance of ethical AI governance.

Ethical AI governance refers to the structures, processes, and standards used to ensure AI systems operate responsibly in practice. This includes defining acceptable outcomes, monitoring for bias and performance drift, ensuring transparency in decision-making, and establishing clear accountability across teams. It is not a single control, but an ongoing system of oversight that spans how AI is designed, deployed, and continuously evaluated.

Without clear governance, even small inconsistencies compound into financial, regulatory, reputational, and security risk. According to McKinsey's State of AI report, nearly half of organizations using generative AI report experiencing at least one negative consequence, including inaccurate outputs, compliance issues, and data exposure events. These risks are no longer theoretical, driving home the need for investment in robust and ethical governance programs to avoid such lapses, speed remediation, and ensure business continuity. When deploying these technologies, ethics cannot be treated solely as a compliance requirement (i.e., a set of rules applied after a system is already built); it must serve as the framework that determines whether AI systems can be trusted, scaled, and sustained.

This shift—from focusing on what AI can do to how it should behave—is what will ultimately determine whether these systems create lasting value or introduce risk at scale.

Ethics Extends Beyond Compliance

While compliance sets the baseline, ethics defines the standard.

AI systems can meet regulatory and compliance requirements and still produce outcomes that erode trust. For example, a pricing model may comply with regulations by excluding explicitly protected variables like race or gender, yet still generate discriminatory outcomes through proxy variables such as zip code, education level, or purchasing behavior. This has been observed in lending and insurance contexts, where compliant models still produced disparate impacts across demographic groups (Consumer Financial Protection Bureau; NAIC).
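The proxy-variable problem is why outcome-level testing matters: a model can pass an input audit and still fail an outcome audit. As a minimal sketch, the snippet below computes approval rates per group and applies the "four-fifths rule" commonly used as a first-pass disparate impact screen. The group labels and data are illustrative, and the 0.8 threshold is a screening heuristic, not a legal test.

```python
from collections import defaultdict

def approval_rates(decisions):
    """Compute the approval rate per group from (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Ratio of the lowest group approval rate to the highest.
    Values below ~0.8 (the 'four-fifths rule') flag potential disparate impact."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative data: the model never saw group labels as inputs,
# yet outcomes can still diverge across groups via proxy variables.
decisions = ([("A", 1)] * 80 + [("A", 0)] * 20 +
             [("B", 1)] * 55 + [("B", 0)] * 45)

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.55 / 0.80 ≈ 0.69 → flag for review
```

The point of the sketch is the shift in what gets measured: the check runs on decisions, not on which variables the model was allowed to use.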

Treating compliance as the goal results in systems that are technically acceptable but operationally fragile. Treating ethics as the standard leads to different decisions:

  • Models are tested for outcomes, not just inputs
  • Teams evaluate disparate impact, not just regulatory alignment
  • Systems are designed for consistency and explainability, not just approval

This shift produces more stable systems, ones that perform not only within regulatory bounds, but also within the expectations of customers and stakeholders.

It is important to note that as AI is introduced at scale, patterns that were once diffuse may become consistent, repeatable, and harder to detect without intentional oversight. For this reason, ethics cannot be an afterthought. It must be embedded into how systems are designed and evaluated.

Mitigation By Design

AI risk in insurance extends beyond ethics alone. It includes:

  • Operational risk (model drift, data instability, system failures)
  • Regulatory risk (non-compliance, audit failures, evolving standards)
  • Reputational risk (loss of customer trust, public scrutiny)
  • Financial risk (mispriced policies, incorrect claims decisions)
  • Strategic risk (over-reliance on flawed or opaque systems)

These risks are interconnected, and mitigation must be designed accordingly. NIST's AI Risk Management Framework emphasizes that AI systems require ongoing monitoring because their performance can shift over time. As such, this process is not a single control or checkpoint, but an ongoing practice that spans people, processes, and systems.
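To make "ongoing monitoring" concrete, one widely used drift check compares a model's live score distribution against a baseline using the Population Stability Index (PSI). The sketch below is a minimal, self-contained version with illustrative data and rule-of-thumb thresholds; production monitoring would track many such metrics per feature and per output.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def dist(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1  # bin index
        n = len(values)
        return [max(c / n, 1e-6) for c in counts]   # floor avoids log(0)

    e, a = dist(expected), dist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]  # yesterday's model scores (illustrative)
live = [i / 200 for i in range(100)]      # today's scores, shifted toward low values

print(f"PSI vs self:    {psi(baseline, baseline):.3f}")  # 0.000
print(f"PSI vs shifted: {psi(baseline, live):.3f}")      # well above 0.25 → alert
```

Wiring a metric like this into an alerting pipeline turns "monitor for drift" from a policy statement into an operational control.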

One emerging approach is the introduction of an intelligence layer between models and business workflows. Rather than relying on individual models to perform reliably in isolation, this layer structures how models are selected, how data is processed, and how outputs are generated and evaluated in an intentional, centralized, controlled, and auditable manner. 
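The intelligence-layer pattern above can be sketched in a few lines: a mediation object that routes each task to a registered model, validates the output, and records every decision for audit. All names below are hypothetical, and this is an illustration of the pattern only, not an implementation of any particular product.

```python
import time
from dataclasses import dataclass, field

@dataclass
class IntelligenceLayer:
    """Hypothetical mediation layer between models and business workflows:
    routes tasks to registered models, validates outputs, and keeps an audit log."""
    models: dict = field(default_factory=dict)      # task -> model callable
    validators: dict = field(default_factory=dict)  # task -> output check
    audit_log: list = field(default_factory=list)

    def register(self, task, model, validator):
        self.models[task] = model
        self.validators[task] = validator

    def run(self, task, payload):
        output = self.models[task](payload)
        ok = self.validators[task](output)
        self.audit_log.append({"task": task, "input": payload, "output": output,
                               "passed_validation": ok, "ts": time.time()})
        if not ok:
            raise ValueError(f"{task}: output failed validation; route for human review")
        return output

# Usage: the calling workflow never touches a model directly,
# so models can be swapped without disrupting core systems.
layer = IntelligenceLayer()
layer.register("classify_claim",
               model=lambda claim: "auto" if "vehicle" in claim else "property",
               validator=lambda label: label in {"auto", "property"})
print(layer.run("classify_claim", "vehicle damage on highway"))  # auto
```

Because every call passes through one point, the layer is where model selection, validation rules, and audit trails live, which is what makes the oversight centralized and auditable.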

For example, systems like the Applied Intelligence Engine (AIE) demonstrate how this layer can be operationalized in practice. Designed for enterprise-grade applications, the AIE introduces a configurable layer that guides the selection of models and ingestion tools, governs model behavior, reduces hallucination risk and technical debt, and ensures outputs are explainable and grounded in verifiable data.

This architecture directly addresses several categories of risk:

  • Operational and financial risk are reduced by additional processing layers that structure how unstructured and multimodal data is interpreted (improving consistency of output), and by ensuring continuous visibility through explainability and auditability. Operational and business-continuity risk is further mitigated by an architecture that remains insulated from core business processing systems, enabling greater flexibility and control. For example, this separation allows rapid, low-risk model changes without disrupting the basic operations of company systems.
  • Reputational and regulatory risk are mitigated by grounding outputs in verifiable sources of truth, allowing decisions to be traced, explained, and validated against internal policies and external requirements.
  • Strategic risk is addressed by maintaining flexibility across models and providers, avoiding over-reliance on any single system and allowing organizations to adapt as technologies evolve.

Regardless of which approach is taken, risk mitigation practices must be holistic. From maintaining diversity in teams to ensuring that data validation, peer review, and model audits run alongside live systems, these governance processes must cover the full lifecycle of AI. Without this alignment, mitigation becomes fragmented, ineffective, and difficult to audit.

Leadership Sets the Standard

Effective AI governance is neither purely top-down nor purely bottom-up.

At the top, boards and executive leadership must set direction. This starts with AI literacy: a working understanding of how these systems function, where they fail, and how those failures translate into business risk. Without this foundation, oversight becomes reactive rather than intentional.

Executives define:

  • Where AI should and should not be used
  • What level of risk is acceptable
  • How fairness, accuracy, and accountability are measured

At the bottom, technical and operational teams translate these principles into practice through model design, evaluation, and deployment decisions.

The connection between the two is critical. The World Economic Forum emphasizes that organizations that successfully scale AI governance embed accountability, transparency, and oversight directly into workflows, system design, and decision-making, rather than treating governance as a separate control layer. This requires both clear executive ownership and operational integration across the organization. A holistic, organization-wide strategy is key to influencing and sustaining a long-term roadmap.

When leadership sets expectations but teams lack guidance, execution drifts. Conversely, when teams act without alignment, risk compounds. Sustainable AI adoption requires both layers working in tandem.

Defining Fairness Early

In insurance, fairness is not theoretical—it directly affects who gets coverage, at what price, and under what conditions. If fairness is not defined early, inconsistency follows. And once scaled, inconsistency becomes systemic risk.

But when defined intentionally, fairness creates opportunity.

For example:

  • Improved risk modeling using broader datasets can expand coverage to previously underserved populations, such as thin-file or historically excluded customers (McKinsey, 2021).
  • Behavior-based underwriting (e.g., telematics in auto insurance) can shift pricing from static proxies to real-world behavior, reducing reliance on biased historical indicators.
  • AI-driven claims processing can reduce subjective decision-making, leading to more consistent outcomes across similar cases.

These approaches have the potential to close long-standing gaps in access to insurance, particularly for populations that were historically mispriced or excluded due to limited or biased data.

Regulators, including the NAIC, have emphasized that insurers must evaluate not just intent, but outcomes, ensuring that AI does not unintentionally reinforce disparities.

Fairness, when built into system design, becomes a driver of both equity and growth—expanding markets while improving trust.

From Capability to Responsibility

AI is often framed as a transformational step forward in operations and growth. While that holds true, in the insurance industry new capabilities without control introduce risk just as quickly as they create value.

The organizations that succeed will not simply be the ones that adopt AI the fastest. They will be the ones that implement it with discipline and recognize the importance of a “human-in-the-loop” approach. This includes setting clear boundaries for use, focusing on well-defined problems, maintaining realistic expectations, and measuring outcomes (including fairness and accuracy) over time.

AI might feel like a shortcut, but it is not. It is a system that requires ongoing oversight and proper management.

The Path Forward

Ethics is sometimes positioned as a limitation on what AI can do. In practice, it is what enables AI to move from experimentation to production, and from isolated use cases to sustained value. Incorporating intelligence layers into AI systems allows for both effective application and proper governance of this rapidly evolving technology.

The question is not whether ethics will shape AI adoption in insurance; it already is. What remains is whether organizations will treat ethics as a constraint, or as the system that makes scaling at the enterprise level possible.

Sources
  • McKinsey & Company (2024). The State of AI in 2024
  • Deloitte (2024). AI Governance in Financial Services
  • Fortune Business Insights (2026). AI in Insurance Market Size
  • National Association of Insurance Commissioners (NAIC). AI and Algorithmic Bias in Insurance
  • Consumer Financial Protection Bureau (CFPB). Algorithmic Bias and Fair Lending
  • NIST (2023). AI Risk Management Framework
  • World Economic Forum (2026). Why Effective AI Governance is Becoming a Growth Strategy, not a Constraint.
  • McKinsey (2021). Insurance 2030: The Impact of AI on Risk and Coverage