How ethical design shapes trust, consistency, and long-term adoption of AI.
I was fortunate to participate in a panel at ITI Europe 2026 on a topic that is as important as it is often overlooked: Ethics in AI.
This article documents some of the main points from our discussion.
I want to thank the impressive panel participants:
The diverse backgrounds of this group made for a compelling and insightful discussion of the entire Ethics in AI topic.
Our moderator, Paolo Cuomo, helped us cover a huge amount of important ground within our allotted 40 minutes.
It was a pleasure to work with this group!
— Ben Bomhoff, Head of Insurance Solutions at Lazarus AI
Over 70% of insurers are already deploying AI in at least one business function, yet more than 60% cite governance, risk, and trust as their primary barriers to scaling it (McKinsey, 2024; Deloitte, 2024). At the same time, the global AI in insurance market is projected to exceed $150B by 2034, signaling rapid adoption without equally mature oversight (Fortune Business Insights, 2026).
Early conversations around AI were dominated by capability: what it could automate, accelerate, and optimize. The focus was on performance, often with less attention paid to how those outcomes were produced.
However, as AI systems move into core insurance workflows, the risks are becoming more visible. Inconsistent decisions, lack of transparency, and unintended bias are no longer edge cases but operational realities that directly affect customers and business outcomes. This is why ethical AI governance matters.
Ethical AI governance refers to the structures, processes, and standards used to ensure AI systems operate responsibly in practice. This includes defining acceptable outcomes, monitoring for bias and performance drift, ensuring transparency in decision-making, and establishing clear accountability across teams. It is not a single control, but an ongoing system of oversight that spans how AI is designed, deployed, and continuously evaluated.
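To make that concrete, here is a minimal sketch of what one piece of such oversight could look like in code: an auditable record attached to every AI-assisted decision, capturing which model produced it, what it saw, and why. The field names and the `record_decision` helper are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DecisionRecord:
    """Auditable trace of a single AI-assisted decision (illustrative fields)."""
    model_id: str          # which model (and version) produced the output
    input_digest: str      # hash of the inputs, so the case can be reproduced
    output: dict           # the decision the system returned
    rationale: str         # human-readable explanation attached to the output
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(model_id: str, inputs: dict, output: dict, rationale: str) -> DecisionRecord:
    # Hash inputs rather than storing raw data, so the audit trail
    # does not itself become a data-exposure risk.
    digest = hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest()
    return DecisionRecord(model_id, digest, output, rationale)

# Example: logging one underwriting recommendation for later review.
record = record_decision(
    model_id="underwriting-assist-v2",
    inputs={"applicant_id": "A-1042", "property_age": 31},
    output={"recommendation": "refer"},
    rationale="Flood-zone data incomplete; referred for human review.",
)
```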
Without clear governance, even small inconsistencies compound into financial, regulatory, reputational, and security risk. According to McKinsey’s State of AI report, nearly half of organizations using generative AI report experiencing at least one negative consequence, including inaccurate outputs, compliance issues, and data exposure events. These risks are no longer theoretical, which drives home the need to invest in robust, ethical governance programs that prevent such lapses, speed remediation, and ensure business continuity. When deploying these technologies, ethics cannot be treated merely as a compliance requirement (i.e., a set of rules applied after a system is already built); it must serve as the framework that determines whether AI systems can be trusted, scaled, and sustained.
This shift, from what AI can do to how it should behave, is what will ultimately determine whether these systems create lasting value or introduce risk at scale.
While compliance sets the baseline, ethics defines the standard.
AI systems can meet regulatory and compliance requirements and still produce outcomes that erode trust. For example, a pricing model may comply with regulations by excluding explicitly protected variables like race or gender, yet still generate discriminatory outcomes through proxy variables such as zip code, education level, or purchasing behavior. This has been observed in lending and insurance contexts, where compliant models still produced disparate impacts across demographic groups (Consumer Financial Protection Bureau; NAIC).
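A simple outcome audit can surface this kind of proxy effect even when every input is compliant. The sketch below computes a disparate impact ratio across groups; the data, group labels, and the informal "four-fifths" threshold are illustrative, not drawn from any specific regulatory test.

```python
def disparate_impact_ratio(approvals_by_group: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Approval-rate ratio of each group vs. the most-approved group.

    approvals_by_group maps group label -> (approved, total).
    A ratio well below 1.0 (often < 0.8, the informal "four-fifths rule")
    is a signal to investigate, even if no protected variable was used.
    """
    rates = {g: approved / total for g, (approved, total) in approvals_by_group.items()}
    reference = max(rates.values())
    return {g: rate / reference for g, rate in rates.items()}

# Hypothetical audit: the model never saw race or gender, but grouping
# outcomes by zip-code cluster still reveals a disparity worth reviewing.
print(disparate_impact_ratio({
    "zip_cluster_A": (840, 1000),   # 84% approval -> ratio 1.00
    "zip_cluster_B": (590, 1000),   # 59% approval -> ratio ~0.70
}))
```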
Treating compliance as the goal results in systems that are technically acceptable but operationally fragile. Treating ethics as the standard leads to different decisions:
This shift produces more stable systems, ones that perform not only within regulatory bounds, but also within the expectations of customers and stakeholders.
As AI is introduced at scale, patterns that were once diffuse may become consistent, repeatable, and harder to detect without intentional oversight. For this reason, ethics cannot be an afterthought. It must be embedded into how systems are designed and evaluated.
AI risk in insurance extends beyond ethics alone. It includes:
These risks are interconnected, and mitigation must be designed accordingly. NIST’s AI Risk Management Framework emphasizes that AI systems require ongoing monitoring because their performance shifts over time. Mitigation is therefore not a single control or checkpoint, but an ongoing practice that spans people, processes, and systems.
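One widely used monitoring technique in insurance and credit modeling is the Population Stability Index (PSI), which flags when the data a model sees in production has drifted from what it saw at deployment. The sketch below is a minimal illustration; the bin shares are invented, and the rule-of-thumb cutoffs are conventions rather than regulatory requirements.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned distributions (bin shares each summing to 1)."""
    eps = 1e-6  # guard against empty bins
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))

baseline = [0.25, 0.35, 0.25, 0.15]    # feature's bin shares at deployment
this_month = [0.15, 0.30, 0.30, 0.25]  # bin shares observed in production
psi = population_stability_index(baseline, this_month)

# Common rule of thumb: < 0.10 stable, 0.10-0.25 drifting, > 0.25 major shift.
status = "stable" if psi < 0.10 else "drifting" if psi < 0.25 else "major shift"
print(f"PSI = {psi:.3f} ({status})")   # ~0.119 -> "drifting": schedule a review
```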
One emerging approach is the introduction of an intelligence layer between models and business workflows. Rather than relying on individual models to perform reliably in isolation, this layer structures how models are selected, how data is processed, and how outputs are generated and evaluated in an intentional, centralized, controlled, and auditable manner.
For example, systems like the Applied Intelligence Engine (AIE) demonstrate how this layer can be operationalized in practice. Designed for enterprise-grade applications, the AIE introduces a configurable layer that guides the selection of models and ingestion tools, governs model behavior, reduces hallucination risk and technical debt, and ensures outputs are explainable and grounded in verifiable data.
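The AIE itself is a proprietary system, so the sketch below is not its API. It is only a generic illustration of the intelligence-layer pattern: model selection is centralized, outputs are checked for grounding before release, and higher-risk tasks are escalated to a human. All names here (`TaskPolicy`, `run_task`, the stub model) are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TaskPolicy:
    """Per-task configuration the layer enforces (illustrative)."""
    allowed_models: list[str]     # which models may serve this task
    require_citation: bool        # outputs must point back to source data
    max_risk_tier: int            # tasks above this tier go to a human

def run_task(task: dict, policy: TaskPolicy, models: dict[str, Callable[[dict], dict]]) -> dict:
    model_name = policy.allowed_models[0]   # selection is centralized, not ad hoc
    output = models[model_name](task)

    # Grounding check: refuse answers that cannot cite a source record.
    if policy.require_citation and not output.get("source_ref"):
        return {"status": "rejected", "reason": "ungrounded output", "model": model_name}
    # Risk check: escalate anything above the task's permitted tier.
    if task.get("risk_tier", 0) > policy.max_risk_tier:
        return {"status": "escalated", "reason": "above risk tier", "model": model_name}
    return {"status": "accepted", "model": model_name, **output}

# Usage with a stub standing in for a real extraction model:
stub = lambda task: {"answer": "policy limit $500k", "source_ref": "doc-17,p4"}
policy = TaskPolicy(allowed_models=["extractor-v1"], require_citation=True, max_risk_tier=2)
print(run_task({"doc": "policy.pdf", "risk_tier": 1}, policy, {"extractor-v1": stub}))
```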
This architecture directly addresses several categories of risk:
Regardless of which approach is taken, risk mitigation must be holistic. From maintaining diverse teams to ensuring that data validation, peer review, and model audits run alongside production systems, governance processes must cover the full lifecycle of AI. Without this alignment, mitigation becomes fragmented, ineffective, and difficult to audit.
Effective AI governance is neither purely top-down nor purely bottom-up.
At the top, boards and executive leadership must set direction. This starts with AI literacy: a working understanding of how these systems function, where they fail, and how those failures translate into business risk. Without this foundation, oversight becomes reactive rather than intentional.
Executives define:
At the bottom, technical and operational teams translate these principles into practice through model design, evaluation, and deployment decisions.
The connection between the two is critical. The World Economic Forum emphasizes that organizations that successfully scale AI governance embed accountability, transparency, and oversight directly into workflows, system design, and decision-making, rather than treating governance as a separate control layer. This requires both clear executive ownership and operational integration across the organization. A holistic, organization-wide strategy is key to influencing and sustaining a long-term roadmap.
When leadership sets expectations but teams lack guidance, execution drifts. Conversely, when teams act without alignment, risk compounds. Sustainable AI adoption requires both layers working in tandem.
In insurance, fairness is not theoretical—it directly affects who gets coverage, at what price, and under what conditions. If fairness is not defined early, inconsistency follows. And once scaled, inconsistency becomes systemic risk.
But when defined intentionally, fairness creates opportunity.
For example:
These approaches have the potential to close long-standing gaps in access to insurance, particularly for populations that were historically mispriced or excluded due to limited or biased data.
Regulators, including the NAIC, have emphasized that insurers must evaluate not just intent, but outcomes, ensuring that AI does not unintentionally reinforce disparities.
Fairness, when built into system design, becomes a driver of both equity and growth—expanding markets while improving trust.
AI is often framed as a transformational step forward in operations and growth. While that holds true for the insurance industry, new capabilities without control introduce risk just as quickly as they create value.
The organizations that succeed will not simply be the ones that adopt AI the fastest. They will be the ones that implement it with discipline and recognize the importance of a “human-in-the-loop” approach. This includes setting clear boundaries for use, focusing on well-defined problems, maintaining realistic expectations, and measuring outcomes (including fairness and accuracy) over time.
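One concrete form of the “human-in-the-loop” approach is a confidence gate: only outputs the model handles reliably are applied automatically, and everything else is routed to a person. The sketch below is a hypothetical illustration; the threshold and queue mechanics are assumptions to be tuned against measured accuracy.

```python
REVIEW_THRESHOLD = 0.90  # illustrative cutoff, calibrated from measured accuracy

def route(prediction: dict, review_queue: list) -> dict:
    """Auto-apply high-confidence outputs; send the rest to human review."""
    if prediction["confidence"] >= REVIEW_THRESHOLD:
        return {"action": "auto", **prediction}
    review_queue.append(prediction)          # humans stay in the loop by design
    return {"action": "human_review", **prediction}

queue: list = []
print(route({"claim_id": "C-88", "label": "approve", "confidence": 0.97}, queue))
print(route({"claim_id": "C-89", "label": "deny", "confidence": 0.62}, queue))
# Over time, compare auto vs. human outcomes to recalibrate the threshold.
```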
AI might feel like a shortcut, but it is not. It is a system that requires ongoing oversight and proper management.
Ethics is sometimes positioned as a limitation on what AI can do. In practice, it is what enables AI to move from experimentation to production, and from isolated use cases to sustained value. Incorporating intelligence layers into AI systems allows for both effective application and proper governance of this rapidly evolving technology.
The question is not whether ethics will shape AI adoption in insurance; it already is. The real question is whether organizations will treat ethics as a constraint or as the system that makes scaling at the enterprise level possible.