Summary: AI is now embedded across industries, transforming enterprise operations, products, and services. From HRTech and legal applications to co-pilots and chatbots, AI is reshaping risk profiles in ways traditional insurance hasn’t fully addressed. As compliance and litigation risks grow, brokers and risk managers play a pivotal role in identifying AI exposures, pinpointing coverage gaps, and ensuring the right protection is in place.
AI is more than a headline—it's becoming integral to business operations. Organizations of all sizes are embedding AI into their workflows, with its contribution to the global economy projected to reach $19.9 trillion over the next five years.
As AI becomes mission-critical to all businesses, the risk of failures grows. For brokers and risk managers, understanding where AI is embedded and how it could fail is essential to safeguarding enterprise resilience.
Today’s organizations run on AI, from Big Tech all the way to McDonald’s. Understanding where and how AI is used, and what type of AI application is involved, is key to assessing risk exposure and ensuring appropriate coverage.
(Source: McKinsey State of AI Report 2024)
With classical, generative, and agentic AI solutions spanning both industry-specific and enterprise-wide functions, brokers and risk managers need to evaluate not only where AI is being used but also where it might fail—and how that failure could impact operations, liability, and coverage.
AI isn’t like other software. It’s probabilistic, not deterministic, and its performance can improve, vary, or degrade depending on conditions. Model errors and hallucinations are not a bug, or a form of negligence, but a feature of how these models work—one that can ripple through entire business systems and expose enterprises to serious financial and legal consequences.
Here are examples of real-world AI failures with direct liability implications:
These failures aren’t anomalies—they’re inherent risks in how AI models function. As AI adoption deepens, systemic failures could have larger, more complex impacts, particularly in industries where accuracy and compliance are critical.
Despite AI’s deep integration into business operations, insurance coverage has not kept pace. Many assume Cyber, Tech E&O, and other liability policies will automatically cover AI failures—but this assumption could be costly. And while some traditional carriers have introduced endorsements for AI-driven attacks or conventional cyber attacks on AI systems, significant coverage gaps and uncertainties remain the elephant in the room.
Here’s what we’re hearing from brokers:
Businesses may be unknowingly self-insuring against AI risks. For brokers and risk managers, understanding where these gaps exist is critical to ensuring clients are adequately protected.
As AI regulations come into force, litigation escalates, and enforcement becomes more common, most enterprise risk and compliance teams have begun taking inventory of all first- and third-party models currently in production. Here are some preliminary questions to help uncover hidden risks and identify potential coverage gaps:
AI risk is already reshaping the business landscape. At Armilla AI, we specialize in affirmative insurance solutions for AI performance and liability, backed by Lloyd’s of London, Swiss Re, and top brokerages. Whether you’re evaluating AI risk, assessing coverage issues, or looking for clear, AI-specific protection, we’re ready to support you in strengthening your approach to AI risk management and coverage.
📩 Contact us to learn how Armilla AI helps enterprises assess, quantify, and insure AI risk in partnership with top brokerages.