AI Risk Disrupts Insurance Coverage

March 12, 2025
5 min read

Summary: AI is now embedded across industries, transforming enterprise operations, products, and services. From HRTech and legal applications to co-pilots and chatbots, AI is reshaping risk profiles in ways traditional insurance hasn’t fully addressed. As compliance and litigation risks grow, brokers and risk managers play a pivotal role in identifying AI exposures, pinpointing coverage gaps, and ensuring the right protection is in place.

AI Adoption: Every Business is a Tech Business

AI is more than a headline—it's becoming integral to business operations. Organizations of all sizes are embedding AI into workflows, with its contribution to the global economy projected to reach $19.9 trillion over the next five years.

  • 98% of small businesses now use AI-powered tools (U.S. Chamber of Commerce).
  • 72% of large enterprises have fully integrated AI into operations (McKinsey).
  • By 2025, enterprises will prioritize AI agents—autonomous systems handling complex decision-making (Gartner).

As AI becomes mission-critical to all businesses, the risk of failures grows. For brokers and risk managers, understanding where AI is embedded and how it could fail is essential to safeguarding enterprise resilience.

Advent of the AI-Powered Enterprise: Understanding Where Risk is Embedded

Today’s organizations run on AI, from Big Tech all the way to McDonald’s. Understanding where and how AI is used, and the type of AI application, is key to assessing risk exposure and ensuring appropriate coverage.

(Figure: Industry-Specific AI Applications and Enterprise-Wide AI Functions. Source: McKinsey State of AI Report 2024)

With classical, generative and agentic AI solutions spanning both industry-specific and enterprise-wide functions, brokers and risk managers need to evaluate not only where AI is being used but also where it might fail—and how that failure could impact operations, liability, and coverage.

When AI Fails: The Problem of Model Errors & Systemic Risk

AI isn’t like other software. It’s probabilistic, not deterministic, and its performance can improve, vary, or degrade depending on conditions. Model errors or hallucinations are not a bug, or a form of negligence, but a feature of how these systems work—one that can affect entire business systems, exposing enterprises to serious financial and legal consequences.
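
For technically minded readers, a minimal sketch can make the distinction concrete. The Python below is purely illustrative (the function names, thresholds, and noise model are assumptions for this example, not any vendor’s actual system): a deterministic rule always returns the same answer for a given input, while a probabilistic, model-like scorer can flag the same transaction differently from run to run.

    import random

    # Deterministic rule: the same input always yields the same decision.
    def rule_based_fraud_check(amount: float) -> bool:
        return amount > 10_000  # flag any transaction above a fixed threshold

    # Stand-in for a probabilistic model: the decision depends on a sampled
    # score, so identical inputs can produce different outputs.
    def model_based_fraud_check(amount: float, noise: float = 0.3) -> bool:
        base_score = min(amount / 20_000, 1.0)       # toy risk score in [0, 1]
        sampled_score = base_score + random.gauss(0, noise)
        return sampled_score > 0.5

    txn = 9_500.00
    print([rule_based_fraud_check(txn) for _ in range(5)])   # always identical
    print([model_based_fraud_check(txn) for _ in range(5)])  # may vary each run

This is why AI losses are better understood as error rates across many decisions than as isolated, one-off bugs.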

Here are examples of real-world AI failures with direct liability implications:

  • Banking → AI wrongly flags transactions as fraudulent, causing account freezes, financial losses, and lawsuits.
  • Insurance → AI errors in claims processing deny valid claims or miscalculate payouts, triggering disputes and regulatory action.
  • Customer Service → AI chatbots hallucinate, misquoting refund policies or contract terms, resulting in customer lawsuits.
  • Lending & Underwriting → AI incorrectly classifies risk, causing mispriced policies or loan denials, leading to compliance issues.
  • Healthcare → AI misclassifies billing codes, delaying reimbursements and causing penalties.
  • HR & Payroll → AI miscalculates wages or taxes, triggering employment disputes.

These failures aren’t anomalies—they’re inherent risks in how AI models function. As AI adoption deepens, systemic failures could have larger, more complex impacts, particularly in industries where accuracy and compliance are critical.

AI Coverage Issues: Gaps & Uncertainty in Today’s Policies

Despite AI’s deep integration into business operations, insurance coverage has not kept pace. Many assume Cyber, Tech E&O, and other liability policies will automatically cover AI failures—but this assumption could be costly. And while some traditional carriers have introduced endorsements for AI-driven or conventional cyber attacks on AI systems, significant coverage gaps and uncertainties remain the elephant in the room.

Here’s what we’re hearing from brokers:

  • Exclusions → Some insurers, like Philadelphia Insurance and Hamilton Select, have already excluded AI-related claims from E&O policies. Industry reports suggest other major carriers have drafted AI exclusions that are "ready to be deployed."
  • Coverage Gaps → Policies vary widely in how they address AI risks, especially for non-breach data privacy and security events, or financial harm caused by model errors. Customizing foundation models, LLMs, and other third-party AI models can void certain Tech E&O policies—a growing issue, as enterprises increasingly build and modify AI solutions internally.
  • Uncertainty → Silent coverage across multiple policies (CGL, D&O) creates grey areas. Insurers may shift liability via referral clauses, leading to delays, disputes, and potential non-coverage for AI-related losses.

Businesses may be unknowingly self-insuring against AI risks. For brokers and risk managers, understanding where these gaps exist is critical to ensuring clients are adequately protected.

AI Risk Manager's Playbook: How to Uncover AI Risk

As AI regulations come into force, AI litigation escalates, and enforcement becomes more common, most enterprise risk and compliance teams have begun taking an inventory of all first- and third-party models currently in production. Here are some preliminary questions to help uncover hidden risks and identify potential coverage gaps (a simple inventory sketch follows the list):

  • How is AI usage being tracked? Is there an inventory of AI-powered tools, including first- and third-party solutions?
  • Where is AI deployed? Which business units or customer-facing services rely on AI, and how are these tools integrated into operations?
  • Are third-party foundation models in use? Are applications built on models from OpenAI, Anthropic, Google, or Microsoft, and how is risk managed for these integrations?
  • Is the company customizing AI models? Are they training proprietary LLMs or modifying off-the-shelf models, and how is that risk being assessed?
  • How is AI risk being evaluated or quantified? Have there been formal risk assessments for critical AI applications?
  • What is the desired coverage approach? Would affirmative AI coverage better protect high-exposure AI deployments?
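
One lightweight way to act on these questions is to keep a structured inventory record per AI system. The sketch below is a hypothetical Python schema (the field names are illustrative assumptions, not an industry standard) covering the dimensions raised above: where the system runs, whose model it relies on, whether it has been customized, and whether it has been formally risk-assessed.

    from dataclasses import dataclass

    @dataclass
    class AISystemRecord:
        """One inventory entry per AI-powered tool or model in production."""
        name: str                # e.g. "claims-triage-assistant"
        business_unit: str       # where the system is deployed
        customer_facing: bool    # does it interact directly with customers?
        provider: str            # "in-house" or the third-party vendor
        foundation_model: str    # underlying model family, or "none"
        customized: bool         # fine-tuned or otherwise modified in-house?
        risk_assessed: bool      # has a formal risk assessment been completed?
        coverage_notes: str = "" # known policy exclusions or open questions

    inventory = [
        AISystemRecord(
            name="support-chatbot",
            business_unit="Customer Service",
            customer_facing=True,
            provider="third-party",
            foundation_model="external LLM",
            customized=True,
            risk_assessed=False,
            coverage_notes="Confirm Tech E&O terms for customized models",
        ),
    ]

Even a simple inventory like this gives brokers and risk managers a shared starting point for mapping exposures to the policies that are supposed to respond to them.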

Conclusion: Navigating AI Risk and Coverage Together

AI risk is already reshaping the business landscape. At Armilla AI, we specialize in affirmative insurance solutions for AI performance and liability, backed by Lloyd’s of London, Swiss Re, and top brokerages. Whether you’re evaluating AI risk, assessing coverage issues, or looking for clear, AI-specific protection, we’re ready to support you in strengthening your approach to AI risk management and coverage.

📩 Contact us to learn how Armilla AI helps enterprises assess, quantify, and insure AI risk in partnership with top brokerages.

Ready to Insure Your AI?

Armilla’s Affirmative AI Coverage is your fail-safe against fast-evolving AI risks. We combine deep technological insight with robust insurance solutions so you can focus on innovation, without interruption.
Get in touch