Geopolitical and technological implications
At least two significant geopolitical factors have driven China’s accelerated AI innovation:
- Restricted access to Western AI models: As of January 2025, OpenAI’s GPT models and Meta’s Llama models are either blocked or officially unavailable in China due to government regulations and provider decisions. This restriction has spurred Chinese companies to develop their own cutting-edge models.
- Hardware constraints: US export restrictions limit the availability of high-performance Nvidia graphics processing units (GPUs) in China. GPUs are the primary computing hardware used to train models. Despite these constraints, DeepSeek reportedly trained its V3 model, which is comparable in performance to GPT-4, on only 5% of the GPUs OpenAI used. This demonstrates the efficiency of Chinese AI training methodologies, which could enable more cost-effective model training and fine-tuning with fewer computational resources.
Open vs. closed models: A crucial distinction for insurance professionals
While these developments have raised considerable concern among Western tech companies, Yann LeCun, Meta’s Chief AI Scientist, recently tweeted: "To people who think China is surpassing the US in AI, the correct thought is open-source models are surpassing closed ones." This distinction is particularly important for the insurance industry, where transparency, interpretability, and cost-efficiency are key factors in AI adoption.
Closed models
Closed LLMs – such as OpenAI’s GPT models, Google’s Gemini, Anthropic’s Claude, and AWS’s Titan – keep their training data, architecture, and weights proprietary. They are only accessible via APIs, meaning insurance companies cannot audit the inner workings of these models, raising concerns about data privacy, compliance, and explainability.
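To illustrate what API-only access looks like in practice, here is a minimal sketch using the openai Python client. The model name and prompt are placeholders, and any data sent this way is processed entirely on the provider’s infrastructure, subject to the provider’s terms discussed later in this article.

```python
# Minimal sketch of API-only access to a closed model (illustrative model name).
# All processing happens on the provider's infrastructure; the insurer sees
# only the request and the response, never the model weights.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hosted, proprietary model; weights are not accessible
    messages=[
        {"role": "system", "content": "You summarize de-identified medical notes."},
        {"role": "user", "content": "Summarize: Applicant reports controlled type 2 diabetes, on metformin."},
    ],
)
print(response.choices[0].message.content)
```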
Open models
Open-source models, including Meta’s Llama, Mistral’s models, and DeepSeek’s offerings, provide access to their trained weights, allowing companies to deploy and fine-tune models on their own infrastructure. This transparency enhances compliance, particularly in regulated industries such as insurance.
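As a concrete contrast, the following is a minimal sketch of loading an open-weight model on an insurer’s own hardware with the Hugging Face transformers library. The model identifier is illustrative, and any openly licensed model with published weights could be substituted.

```python
# Minimal sketch of running an open-weight model on in-house infrastructure.
# The weights are downloaded once and can then be served inside the insurer's
# own environment, with no inference traffic leaving that environment.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative open-weight model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "In one sentence, what does a term life insurance policy cover?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights sit on infrastructure the insurer controls, the same loaded model can later be fine-tuned or audited without involving an external provider.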
Applications in life insurance: Fine-tuning for cost efficiency
For life insurers, extracting insights from unstructured medical data is a crucial challenge. Underwriting processes often rely on vast amounts of medical history, physician notes, and diagnostic reports. While general-purpose LLMs can perform well on these tasks, fine-tuning a domain-specific model can be significantly more cost-effective than relying on expensive API-based closed models.
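One way to keep such fine-tuning affordable is parameter-efficient tuning, such as LoRA. The sketch below illustrates the idea with the Hugging Face peft library; the base model, the underwriting_notes.jsonl file, and the hyperparameters are assumptions for illustration, not a tested recipe.

```python
# Hedged sketch of parameter-efficient (LoRA) fine-tuning on in-house data.
# Only small adapter matrices are trained, so far less GPU memory and time
# are needed than for full fine-tuning of all model weights.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with small trainable LoRA adapters.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical JSONL file of de-identified underwriting notes with a "text" field.
data = load_dataset("json", data_files="underwriting_notes.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llm-underwriting-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("llm-underwriting-lora")  # saves only the small adapter weights
```

Because only the adapters are trained and saved, one base model can support several domain-specific variants at a fraction of the cost of full retraining.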
Example use cases
- Medical document analysis: Fine-tuning an open model on historical underwriting data can enhance accuracy in extracting conditions, treatments, and risk factors from unstructured medical records (see the extraction sketch after this list).
- Fraud detection: Training a model on anonymized claims data can help detect patterns indicative of fraudulent activity, reducing costs associated with false claims.
- Customer service automation: Deploying a customized chatbot to assist policyholders with policy inquiries and claims processes can improve response times and customer satisfaction. Such a bot would require considerable fine-tuning to match the insurer’s tone and to meet the high bar for accuracy.
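To make the first use case concrete, the hedged sketch below asks a locally hosted open model to return the conditions and treatments found in a de-identified note as JSON. The model identifier and prompt format are assumptions, and in practice a fine-tuned variant would replace the base model.

```python
# Hedged sketch: structured extraction from a de-identified medical note
# using a locally hosted open model (illustrative model identifier).
import json
from transformers import pipeline

extractor = pipeline("text-generation",
                     model="meta-llama/Llama-3.1-8B-Instruct",
                     device_map="auto")

note = "Applicant treated for hypertension since 2019; currently on lisinopril."
prompt = (
    "Extract the medical conditions and treatments from the note below.\n"
    "Respond with JSON only, using keys 'conditions' and 'treatments'.\n\n"
    f"Note: {note}\nJSON:"
)

raw = extractor(prompt, max_new_tokens=128, return_full_text=False)[0]["generated_text"]
try:
    record = json.loads(raw.strip())
except json.JSONDecodeError:
    record = {"conditions": [], "treatments": []}  # fall back if the model drifts off format
print(record)
```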
With open models, life insurers can fine-tune AI to their specific underwriting needs while ensuring compliance with industry regulations such as HIPAA and GDPR. The efficiency advances DeepSeek reported in training V3 with a fraction of the GPUs typically required should make fine-tuning and training more affordable for all companies. Running models in a secure environment allows for full data control, unlike using third-party APIs, where data handling is dictated by external providers.
How will your data be used?
One of the most critical considerations for insurance companies adopting AI models is data security. Running an open model on-premises or with a trusted cloud provider (AWS, GCP, Azure) ensures complete control over sensitive policyholder data. In contrast, using an external API requires scrutiny of the provider’s terms of service.
Best practices for ensuring data privacy
- Deploy models in-house or on trusted cloud infrastructure: Avoid sending sensitive underwriting or claims data to third-party APIs whenever possible.
- Review legal agreements for API usage: Regardless of whether your model is open or closed, many API providers collect usage data for retraining and optimization. Ensure contracts specify that proprietary data will not be used for future model improvements.
- Implement access controls and auditing: Restrict access to AI-generated outputs to authorized personnel and maintain audit logs for compliance purposes, as sketched below.
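As an illustration of that last practice, the sketch below wraps a generation function so that every call is tied to a user identity and recorded in an audit log. The role list, function names, and log destination are hypothetical and would be adapted to an insurer’s existing identity and logging stack.

```python
# Hedged sketch of an audit-logged inference wrapper (hypothetical names).
# Every call records who asked, when, and a hash of the prompt, without
# storing the raw text of sensitive inputs in the log itself.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="llm_audit.log", level=logging.INFO)

AUTHORIZED_ROLES = {"underwriter", "claims_analyst"}  # placeholder role list


def audited_generate(generate_fn, prompt: str, user_id: str, role: str) -> str:
    """Call the model only for authorized roles and write an audit record."""
    if role not in AUTHORIZED_ROLES:
        raise PermissionError(f"Role '{role}' is not authorized for LLM access")

    output = generate_fn(prompt)  # any local or API-based generation function

    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "role": role,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_chars": len(output),
    }))
    return output


# Example usage with a stand-in generation function.
print(audited_generate(lambda p: "stub model output", "u1234", "underwriter"))
```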
Conclusion: The future of open AI in insurance
The rise of open AI models presents a transformative opportunity for life insurers. By embracing open-source LLMs, insurers can reduce costs, enhance accuracy, and maintain control over sensitive data. DeepSeek’s advances illustrate that cutting-edge AI might no longer be confined to closed ecosystems and that open innovation is rapidly leveling the playing field.
As insurers navigate the evolving AI landscape, the strategic adoption of open models will empower them to harness the full potential of generative AI while safeguarding data privacy, improving operational efficiency, and driving business growth. The future of insurance AI is open, and forward-thinking companies will be the ones to capitalize on this shift.
Learn more: Partner with RGA to responsibly combine advanced technology like artificial intelligence, data, and expertise to deliver value to your customers.