Your Secure and Responsible Technology Partner

Is your organisation insured for AI risks?

Last week, the MIT AI Risk Initiative (AIRI) released the latest version of its AI Risk Navigator. It maps the AI risk landscape across 7 domains and 24 sub-domains, covering the full scope of AI risks, from discrimination and toxicity to AI system safety and socioeconomic disruption. [1]

If you review the list of incidents, you will note that the majority of them relate to LLMs, Generative AI and derivative Agentic AI deployments. The nature, severity and likelihood of risks associated with Generative AI use are likely to differ from those associated with Agentic AI deployment, even though LLMs are at the heart of both sets of tools.

As organisations continue to adopt these technologies, are their leaders, boards, employees and customers actually aware of the range of associated AI risks? If so, how have they been mitigated and managed?

Insurance

Almost every business transfers its residual risks to insurers. However, insurers will only take on those risks if they can be quantitatively assessed.

A 2024 whitepaper by Munich Re [2] acknowledges the risks associated with Generative AI, covering the following types:

  • Hallucination and false information (generation of false information or misleading content);
  • Bias and fairness (generating unfair or biased output resulting in the discrimination of a protected class);
  • Privacy infringement (leaking of private or sensitive information);
  • Intellectual property violations (generation of content that is trained on IP-protected materials without permission; or generation of content that mimics licensed material);
  • Harmful content (generation of offensive or malicious content, illegal materials); and,
  • Other risks, including environmental risks (increasing the number of parameters increases the need to train/retrain the models, which in turn increases energy consumption).

“However, even the most accurate AI will produce wrong or misleading results from time to time, regardless of how extensive its technical improvements are.” [2]

The whitepaper also outlines the necessary risk management considerations for decision makers evaluating GenAI use cases.

While the whitepaper recognises that risks associated with machine learning models are insurable, especially models developed in-house and subjected to model risk management best practices, it notes that “measuring the performance of GenAI models is less straightforward.”

“Reasons are the differences in training (leading to more complex outcomes), the variety of tasks GenAI models are confronted with, differences in setup (most GenAI models are based on foundation models), and finally the subjectivity of the quality of these outcomes (judging the ground truth). Due to these difficulties, the underperformance of a GenAI application must be defined well in order to capture the essential differences instead of granular variations for a risk transfer to be meaningful.” [2]

If performance thresholds are key criteria for determining whether risks associated with Generative AI can be underwritten, insurers will need to monitor these thresholds continuously and update insurance policies regularly. One can imagine the specific performance conditions that would have to be stipulated in policies as conditions of coverage.
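To make that monitoring burden concrete, here is a minimal sketch in Python of how an insured (or an insurer’s auditor) might track a GenAI application against a policy-stipulated accuracy threshold over a rolling window. The threshold, window size and scoring rubric are hypothetical assumptions for illustration, not terms drawn from any actual policy.

```python
from collections import deque

# Hypothetical policy terms (assumptions for illustration only).
POLICY_THRESHOLD = 0.95   # minimum rolling accuracy stipulated in the policy
WINDOW_SIZE = 1000        # rolling window of scored outputs

class CoverageMonitor:
    """Tracks rolling accuracy of a GenAI application against a policy threshold."""

    def __init__(self, threshold: float = POLICY_THRESHOLD, window: int = WINDOW_SIZE):
        self.threshold = threshold
        self.scores: deque = deque(maxlen=window)

    def record(self, score: float) -> None:
        """Record one evaluated output, scored in [0, 1] by some agreed rubric."""
        self.scores.append(score)

    def in_compliance(self) -> bool:
        """True while rolling accuracy stays at or above the policy threshold."""
        if not self.scores:
            return True  # nothing to judge yet
        return sum(self.scores) / len(self.scores) >= self.threshold

monitor = CoverageMonitor()
for score in (1.0, 1.0, 0.0, 1.0):   # e.g. human-graded output quality
    monitor.record(score)
print(monitor.in_compliance())       # False: 0.75 < 0.95
```

Even this toy version surfaces the practical questions an underwriter would face: who scores the outputs, over what window, and what happens to coverage the moment the rolling average dips below the line.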

The challenge for the insured is the stochastic, non-deterministic nature of the LLMs at the heart of Generative AI tools. Operating within specific performance thresholds, on technology infrastructure the insured does not fully control and whose outputs are non-deterministic and unreliable, is like living in a house built on shifting ground: the insured remains dependent on their technology providers to keep those LLMs performing within the stipulated thresholds.
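This non-determinism is intrinsic to sampled decoding, not an implementation defect. The toy Python below is my own illustration, not any vendor’s implementation: it repeatedly samples the next token from one fixed softmax distribution and shows that an identical input can produce different outputs from run to run.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert model logits into a sampling distribution; higher temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Fixed logits for the next token, standing in for a frozen prompt.
tokens = ["approve", "decline", "escalate"]
logits = [2.1, 1.9, 0.3]

probs = softmax(logits, temperature=1.0)
for run in range(5):
    choice = random.choices(tokens, weights=probs, k=1)[0]
    print(f"run {run}: {choice}")
# Identical input, yet the sampled token varies between runs:
# exactly the behaviour an insurer must somehow bound with performance thresholds.
```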

Fast forward to April 2026 when, at the height of the Agentic AI hype, it was reported that many US states had approved requests from the largest insurers to exclude AI from liability policies. [10]

“While there’s no standard response to customer use of AI in the insurance market, many carriers are now quietly declining to write policies for claims related to AI-generated outputs in cybersecurity and errors and omissions (E&O) coverage, these observers say. Other insurance carriers are jacking up prices to cover AI-related claims.” [3]

For LLM-based Agentic AI systems, the paper ‘AI Agents Under EU Law’ [11] introduces the concept of ‘Runtime Behavioural Drift’ and outlines three mechanisms that cause behavioural drift in AI agents.

“If a provider cannot demonstrate that the system’s behavior remains within the boundaries assessed during conformity assessment, and cannot detect when it deviates, then the essential requirements on human oversight (Article 14), accuracy (Article 15), robustness (Article 15), logging (Article 12), and post-market monitoring (Article 72) are not met as required by Article 43.” [11]
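Detecting such deviation is itself an engineering task: log the agent’s runtime behaviour and compare it against the profile established at conformity assessment. Below is a minimal sketch of one possible approach, assuming a baseline distribution over tool calls was recorded during assessment and using a simple statistical distance to flag deviation; the tool names, baseline figures and threshold are all hypothetical.

```python
import math
from collections import Counter

# Hypothetical baseline: tool-call frequencies observed during conformity assessment.
BASELINE = {"search": 0.60, "send_email": 0.05, "write_file": 0.35}
DRIFT_THRESHOLD = 0.10  # assumed tolerance; a real system would calibrate this

def kl_divergence(observed, baseline, eps=1e-9):
    """KL(observed || baseline) over the union of tool names."""
    tools = set(observed) | set(baseline)
    return sum(
        observed.get(t, eps) * math.log(observed.get(t, eps) / baseline.get(t, eps))
        for t in tools
    )

def drift_detected(runtime_calls):
    """Flag drift when the runtime tool-call mix deviates too far from baseline."""
    counts = Counter(runtime_calls)
    total = sum(counts.values())
    observed = {t: c / total for t, c in counts.items()}
    return kl_divergence(observed, BASELINE) > DRIFT_THRESHOLD

# An agent that suddenly favours an unexpected tool trips the detector.
print(drift_detected(["search"] * 6 + ["write_file"] * 4))   # False: matches baseline
print(drift_detected(["send_email"] * 8 + ["search"] * 2))   # True: behavioural drift
```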

The exclusion of AI-generated outputs from insurance policies should also concern organisations adopting vibe coding tools, where security vulnerabilities have been found, particularly when outputs were deployed without sufficient, robust expert review. In this Assessed Intelligence blog [4], @scarpinojosh dissects the Lovable data leak incident and details the structural design flaws in language model-driven vibe coding tools:

“Lovable is not uniquely negligent. It is representative of a broader category. Veracode’s 2025 GenAI Code Security Report tested more than 100 large language models across 80 coding tasks and found that AI-generated code contained security vulnerabilities in 45 per cent of cases.”
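To picture the kind of flaw behind that statistic, consider the hypothetical snippet below. It is a composite illustration, not code from Lovable or any specific tool: a query assembled by string interpolation, a pattern code assistants frequently emit, alongside the parameterised version that expert review should insist on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def get_user_unsafe(name):
    # The pattern often emitted by code assistants: attacker-controlled input
    # interpolated straight into SQL. name = "x' OR '1'='1" dumps every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def get_user_safe(name):
    # Parameterised query: the driver escapes the value, so injection is inert.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(get_user_unsafe(payload))  # [('alice', 'admin')] -- data leaked
print(get_user_safe(payload))    # [] -- no match, as intended
```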

Dr Scarpino then outlines how Assessed Intelligence’s ARISE Framework [5] can be leveraged to address each of the control failures reported in the Lovable incident across all 7 pillars. He concludes by proposing 3 actions organisations engaging in vibe coding should take this quarter. Your organisation could benefit from undertaking Assessed Intelligence’s Maturity Assessment to baseline its AI and cybersecurity risk posture.

As for risks associated with Agentic AI, I discussed them in my last article [6]. Considering some of the outlined criteria for Generative AI insurability [2], I expect the risk and impact assessment processes associated with the deployment of Agentic AI to be complex and challenging.

In my recent articles, I have been raising awareness of security vulnerabilities [8] related to transformer-based language models (e.g. LLMs), which are at the heart of what the industry calls “AI”. So, once we establish their presence in deployed technology solutions, cybersecurity risk effectively becomes a subset of AI risk, as we see in the AI Risk Navigator. [1]

The CISO Report

This brings us to the 2026 CISO AI Risk Report by Saviynt [7], which reveals interesting, but not surprising, findings from their survey of more than 200 CISOs and security leaders.

Given the nature and security vulnerabilities of LLMs [8], security considerations and controls must be on the critical path of any decision to adopt Generative AI and Agentic AI tools.

When technology providers position Generative AI and Agentic AI tools as enablers of operational efficiencies and cost savings, and organisation leaders rush to adopt and deploy them without sufficient due diligence, the findings below are not surprising:

[Figure: survey findings. Source: 2026 CISO AI Risk Report by Saviynt [7]]

LLMs, Generative AI and Agentic AI tools have been primarily introduced into organisations through three main avenues:

  1. Board and Top management mandates
  2. Incumbent technology providers (SaaS)
  3. Shadow AI

Given the ease of access and convenience of these tools, their accelerated adoption, ahead of sufficient controls being introduced and implemented to mitigate the associated risks, has resulted in the gaps and challenges we see in organisations today.

Where organisations have rushed to adopt and deploy Agentic AI capabilities made available by their technology providers (in which AI agents are delegated access and authority to act), without understanding the limitations and constraints of their architecture, or allowing the CISO function and their security teams to design and implement the appropriate controls at the outset, the findings below are to be expected:

[Figure: survey findings. Source: 2026 CISO AI Risk Report by Saviynt [7]]

In Summary

We have seen signals from US insurers, and sentiments expressed by CISOs and security leaders, based on their assessments of AI risks. Meanwhile, AI and security incidents relating to LLM-based Generative AI and Agentic AI continue to increase, despite AI Governance programmes being rolled out.

However, organisation leaders are continuing to champion the adoption and deployment of these emerging technologies. What does that tell us?

Insurance is one instrument businesses use to transfer their residual risks. The decision by major US insurers to exclude AI-generated output from their policies is a clear indicator of their current assessment of the potential impacts, and of the (in)ability of organisations adopting and/or deploying these emerging technologies to manage and mitigate those risks to a standard acceptable for underwriting.

Fundamentally, I sense that the major US insurers understand the unreliability of these emerging technologies in high-risk and high-trust environments. [8][13]

Will other insurers in other markets follow suit, or are they willing to take on those risks only if organisations can demonstrate that they can adopt and deploy technologies that consistently produce outcomes that are reliable, ethical, safe, secure, and trusted?

Actions to consider

Organisation leaders should review their existing (traditional) insurance policies to establish whether partial coverage exists (Silent AI cover, potentially through Professional Indemnity, Business Interruption, D&O, E&O, Crime, Product Liability and Employers Liability policies), along with any associated policy limits and exclusions. Relying on Silent AI cover is risky: specific AI incidents may not be covered, and the insured will not find out until an incident occurs. [12]

“The risk appetite is changing among the carriers, and it’s always constantly evolving… With regard to AI, there are carriers that are just removing it from their risk appetite and declining to quote altogether.” – Jason Bishara, financial practice leader at global carrier NSI Insurance Group. [3]

In any case, organisation leaders adopting these emerging technologies should return to the fundamentals [9], invest in and establish sound foundational capabilities, and adopt responsible innovation practices to enable them to:

  • Identify, assess, manage, continuously monitor and mitigate risks through operationalised controls that can be continuously assured to be effective;
  • Comply with all relevant regulatory obligations;
  • Establish effective governance, oversight and accountability for all outcomes from the deployment of these technologies; and
  • Be prepared to submit all components of their (organisational) system for an independent audit.

Insurers will need to be satisfied that the residual risks related to the use of LLM-based Generative AI and Agentic AI that they are taking on are sufficiently managed within the organisations they are insuring. Otherwise, those risks in uninsured organisations are left for their leadership and their value chain to bear.

References:

[1] https://www.airi-navigator.com/

[2] https://www.munichre.com/content/dam/munichre/contentlounge/website-pieces/documents/MR_AI-Whitepaper-Insuring-Generative-AI.pdf/_jcr_content/renditions/original./MR_AI-Whitepaper-Insuring-Generative-AI.pdf

[3] https://www.cio.com/article/4159297/insurance-carriers-quietly-back-away-from-covering-ai-outputs-2.html

[4] https://assessedintelligence.com/when-by-design-is-the-casue/

[5] https://ariseframework.com/

[6] https://www.linkedin.com/pulse/our-systems-ready-agentic-ai-chris-leong-fhca-f9s1e

[7] https://www.cybersecurity-insiders.com/wp-content/uploads/2026-AI-Identity-Risk-Report-Saviynt-by-CSI-1.6.pdf

[8] https://arxiv.org/abs/2511.05102

[9] https://www.linkedin.com/pulse/return-fundamentals-chris-leong-fhca-kjjie

[10] https://www.theinformation.com/articles/berkshire-hathaway-chubb-win-approval-drop-ai-insurance-coverage

[11] https://arxiv.org/pdf/2604.04604

[12] https://www.corporatecomplianceinsights.com/ai-insurance-getting-hard-part/

[13] https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html

Also published on LinkedIn: https://www.linkedin.com/pulse/your-organisation-insured-ai-risks-chris-leong-fhca-d1vye/?trackingId=NjIvSRRVTuCcElA%2FqKyPDA%3D%3D