April 2025

GenAI in Insurance Update: Q2 2025

In Brief

Rapid advances in generative AI and quantum computing, along with new guidance on AI copyright, are poised to revolutionize underwriting, fraud detection, and policy administration, challenging insurers to balance innovation with regulatory compliance.

Key takeaways

  • Advanced large language models are transforming risk assessment and customer engagement, enabling sophisticated underwriting and personalized policy recommendations.
  • Breakthroughs in quantum computing promise to enhance risk assessment and fraud detection, potentially uncovering patterns invisible to classical computers.
  • US Copyright Office findings on AI-generated content underscore the need for careful implementation of AI tools to maintain legal compliance.

This quarter’s AI update explores key developments in these areas. Additionally, it examines the implications of open vs. closed AI models and how they influence the future of underwriting and customer engagement.

Foundation model update

Even the largest insurance companies will not train an entire LLM from scratch for the foreseeable future. Rather, they will rely on foundation models developed by large tech companies. LLM developers have been busy this past quarter, releasing advances marked by significant improvements in dynamic reasoning and task-specific adaptability.

  • Anthropic’s Claude 3.7 now offers a “hybrid reasoning” mode that dynamically shifts between rapid, instinctive replies and a more deliberate, extended thinking process. It even includes an integrated scratchpad that exposes its reasoning steps. This innovation enhances performance in creative and conversational tasks while also improving coding and logical problem-solving abilities.
  • Elon Musk’s xAI has expanded its offerings with Grok 3, which leverages an order-of-magnitude increase in computing power to excel in benchmarks for math, science, and coding tasks. This underscores the industry’s shift toward models that can scale reasoning depth on demand.

Chinese LLM innovations continue to push the boundaries of efficiency and multimodal functionality.

  • DeepSeek’s recent releases have demonstrated competitive performance in mathematical reasoning and coding, delivering high-quality results while significantly reducing training costs.
  • Moonshot AI’s Kimi k1.5 has emerged as a strong contender, with robust performance in mathematics, coding, and multimedia comprehension, while maintaining resource efficiency.
  • Alibaba’s Qwen 2.5-VL extends traditional language models by integrating visual processing with extended context windows, enabling seamless handling of both text and visual inputs for complex, real-world applications.

Open vs. closed models: The DeepSeek and Llama perspective

One of the ongoing debates in AI development is the tradeoff between open and closed models.

  • Open-source models, such as Meta’s Llama 3 and DeepSeek, provide transparency, flexibility, and cost advantages, allowing insurers to fine-tune models for proprietary underwriting and risk assessment use cases. These models enable greater adaptability in integrating external datasets, which is crucial for insurers looking to optimize AI-driven risk predictions. However, they also pose security, consistency, and regulatory compliance challenges.
  • Closed models, such as OpenAI’s GPT-4 or Anthropic’s Claude 3.7, offer robust safety guardrails, enterprise-grade support, and higher reliability in maintaining compliance with evolving insurance regulations. These models are optimized for accuracy and require less in-house expertise to manage.

Insurers must carefully weigh these considerations when selecting AI solutions, balancing the need for customization with the operational risks associated with model governance.

Copyright and AI: Defining authorship 

A recent report by the US Copyright Office explores the evolving intersection of AI and copyright law, focusing on the copyrightability of AI-generated content. As the second part of an ongoing series, the report builds upon prior discussions on digital replicas and sets the stage for future considerations around AI training data and liability.

The study reaffirms that US copyright law requires human authorship, rejecting claims for AI-generated works without substantial human involvement. It acknowledges, however, that AI can be used as a tool in the creative process, provided that a human contributes sufficiently to the expression and arrangement of the final work. Notably, the report highlights international approaches to similar legal questions, emphasizing the global relevance of these discussions.

The findings indicate broad consensus among stakeholders that material created entirely by AI does not qualify for copyright protection, while human involvement – such as detailed prompting, expressive input, or post-generation modifications – may warrant case-by-case evaluation.

The document also raises policy considerations, debating whether new legal frameworks are necessary to encourage innovation while protecting human creators.

While the report concludes that current law is flexible enough to address these challenges without legislative change, ongoing monitoring of technological advancements remains a priority. This framework may serve as a reference for insurers looking to navigate the legal landscape of AI-generated content in policy documentation and claims processing.

Quantum computing breakthroughs: Microsoft and Google push the frontier

Recent breakthroughs in quantum computing by Microsoft and Google mark significant milestones in the journey toward practical quantum advantage.

Microsoft’s Majorana 1 introduces topological qubits, which promise greater stability and lower error rates, potentially paving the way for scalable quantum systems. Meanwhile, Google’s Willow processor demonstrates quantum error correction capabilities that could exponentially accelerate computations beyond classical limits.

These advances push quantum computing closer to real-world applications and point to a future in which complex problem-solving, from materials science to artificial intelligence, is fundamentally transformed.

For the life insurance industry, quantum computing presents game-changing potential in risk assessment, portfolio optimization, and fraud detection. Quantum algorithms could vastly improve the accuracy of actuarial models by processing vast amounts of medical and demographic data in real time, uncovering patterns that classical computers struggle to detect.

Additionally, these breakthroughs intersect with generative AI, where quantum-enhanced machine learning could lead to more precise underwriting models and personalized policy recommendations.

As AI-driven automation continues to evolve in insurance, quantum computing may become the next frontier in accelerating AI model training and improving decision-making for risk management. While these technologies are still emerging, their convergence suggests a future where insurers can provide more adaptive, data-driven solutions with unparalleled efficiency.

Precompiled tokens and AI context management

Recent advancements in models such as Claude have introduced the capability to precompile tokens and maintain a persistent context, allowing them to store large volumes of information in a compressed format. This means that instead of relying solely on retrieval-augmented generation (RAG), which dynamically fetches and processes a company’s relevant proprietary information at query time, the model can have entire documents or even extensive collections of written text loaded directly into its context buffer.

By doing so, the model can answer queries based on a vast amount of preloaded information without incurring the repeated costs of external retrieval or re-embedding the data every time. This approach can streamline the inference process, reducing latency and computational overhead while preserving the richness of the underlying knowledge base.
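For contrast, a retrieval-augmented flow fetches only the relevant slice of the knowledge base at query time. The toy retriever below scores chunks by simple keyword overlap purely to illustrate the flow; production RAG systems use embedding similarity, and the manual snippets and question are invented examples.

```python
# Minimal RAG-style retrieval sketch (illustrative only): at query time, score
# document chunks against the question and prepend the best match to the prompt.
# Real systems replace the keyword-overlap score with embedding similarity.

def retrieve(chunks: list[str], query: str, k: int = 1) -> list[str]:
    """Return the k chunks sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

# Hypothetical underwriting-manual chunks.
manual_chunks = [
    "Maximum issue age for term products is 70.",
    "Smoker rates apply after any tobacco use within 12 months.",
]

question = "What is the maximum issue age for term products?"
context = retrieve(manual_chunks, question)

# The retrieved chunk is injected into the prompt at query time.
prompt = f"Context: {context[0]}\nQuestion: {question}"
print(context[0])
```

Because only the retrieved chunk enters the context window, the per-query token cost stays small, but the retrieval step itself must run on every request.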

Precompiled tokens and RAG will both be important tools for any insurance application where large documents, such as underwriting manuals, need to be incorporated.
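As a concrete illustration of the preloaded-context approach, the sketch below assembles an Anthropic-style Messages API payload in which a large underwriting manual is marked for caching, so subsequent queries reuse the already-processed tokens. No network call is made; the model id, manual text, and question are placeholders, and while the `cache_control` field follows Anthropic's documented prompt-caching marker, treat this as a sketch rather than production code.

```python
# Sketch: precompiling a large document into the model's context via prompt
# caching. Builds an Anthropic-style Messages API payload (no network call);
# the large, stable system block is marked with "cache_control" so only the
# first request pays the full cost of processing the manual's tokens.

def build_cached_request(manual_text: str, question: str) -> dict:
    return {
        "model": "claude-3-7-sonnet-latest",  # placeholder model id
        "max_tokens": 512,
        "system": [
            {"type": "text",
             "text": "Answer strictly from the underwriting manual provided."},
            # Large, unchanging document: flagged for caching and reuse
            # across subsequent requests.
            {"type": "text",
             "text": manual_text,
             "cache_control": {"type": "ephemeral"}},
        ],
        "messages": [{"role": "user", "content": question}],
    }

payload = build_cached_request(
    "<full underwriting manual text>",
    "What is the maximum issue age for term products?",
)
print(payload["system"][1]["cache_control"]["type"])
```

The design tradeoff versus RAG is cost structure: caching front-loads the token-processing expense once and amortizes it over many queries against the same document, whereas retrieval keeps each request small but adds a lookup step every time.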


 

Meet the Authors & Experts

Jeff Heaton
Vice President, AI Innovation