Artificial Intelligence (AI), including generative AI and now agentic AI, is transforming professional services and financial markets. In insurance and reinsurance, AI is already embedded in risk assessment, fraud detection, underwriting, and portfolio optimisation. Reinsurers are increasingly deploying AI to evaluate ceded risks with greater efficiency and precision.
But alongside opportunity comes risk. The technology’s drawbacks are well documented: generative AI can produce authoritative-sounding but wholly inaccurate content, and in the legal sector there are now confirmed instances of AI hallucinating case law and misinterpreting authorities.
These are not merely academic curiosities; they underline the urgent need for vigilance in monitoring both AI tools and their outputs. For insurance and reinsurance markets – where accuracy, integrity, and trust underpin the system – the lessons are particularly relevant.
A recent judgment of the High Court of England and Wales, delivered in June 2025, brought the risks into sharp focus:
R (Frederick Ayinde) v London Borough of Haringey and Hamad Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin).
Two referrals to the Divisional Court revealed stark failings:
- The first case: a barrister, F, cited five cases by name and citation. None existed. She also misstated legislation and, when pressed, could not produce the cases. Though denying direct AI use, she admitted sourcing material from websites and Google summaries – responses of the kind now known to be AI-generated. The Court rejected her explanations.
- The second case: a witness statement in support of an application included 18 case references. Many were fictitious; others were genuine but misquoted. The litigant accepted responsibility, having relied on generative AI and online sources. The solicitors had not verified the authorities, instead relying on their client’s research. The Court was clear: a lawyer cannot outsource professional responsibility for accuracy to a client. No sanctions followed, but the warning was emphatic.
Dame Victoria Sharp P, who presided, underlined that those using AI tools for legal research have a professional duty to verify accuracy before relying on them in practice. In an appendix to her judgment, she identified 17 further cases worldwide – from Australia, New Zealand, Canada, and the US – where similar misuse of generative AI was suspected. The issue is no longer isolated; it is global and growing.
The Court’s conclusion was unequivocal: freely available generative AI tools, built on large language models, cannot be relied upon for legal research. These systems generate fluent and plausible text, but their confident assertions may be wholly false. They may fabricate sources, misquote genuine ones, and misstate established law.
For the insurance and reinsurance sector, the message is clear. AI is powerful – but unchecked, it is also perilous. Rigorous oversight, professional scepticism, and verification of outputs are not optional. They are essential to safeguarding trust and integrity in markets where accuracy is everything.