December 2024

Defining Artificial Intelligence in Insurance: The challenge and the opportunity

In Brief
RGA’s Jeff Heaton, recently returned from a US Department of the Treasury roundtable on “AI in the Insurance Sector,” explains why the future of this technology in insurance starts with defining what “artificial intelligence” means within our industry.

Key takeaways

  • AI in insurance is rapidly evolving, and a workable definition is necessary to guide further regulation.
  • It is important for insurers to not only be mindful of the global regulatory environment but also develop their own AI definition to determine which processes should fall under the additional governance AI requires.
  • By collaborating and considering all aspects of this technological revolution, insurers can take positive strides in defining not only AI but also the future of insurance.

 

On September 24, 2024, the Federal Insurance Office of the US Department of the Treasury held a roundtable in Washington, DC on “AI in the Insurance Sector” and invited insurance industry leaders, regulators, and groups to participate. I represented RGA at this important discussion and came away even more cognizant of the challenges facing industry regulators on this issue but also hopeful that, by working together, we can arrive at a definition of AI and associated rules that will serve the industry’s best interests.


Technical definitions of AI

AI emerged from mathematical and technological roots in academia, and early definitions reflect this foundation. For example, John McCarthy, who coined the term AI in 1955, defined it as “the science and engineering of making intelligent machines.” Merriam-Webster elaborates on this, offering “the capability of computer systems or algorithms to imitate intelligent human behavior.”

These early definitions emphasized human intelligence because most computer programs are inherently non-intelligent. A program is just a set of steps that a computer rigidly, and non-intelligently, follows, such as an underwriting system that quickly identifies cases to accept or decline based on predefined rules.

However, when seemingly unintelligent programs become complex and can learn from and adapt to data, AI-like systems result. This ability to learn is one of the core pillars for defining AI.

A broad definition of AI may therefore include many actuarial models that have existed for decades. Meanwhile, systems that learn to underwrite by training on many thousands of applications underwritten by humans might more accurately be classified as AI.
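
To make the distinction concrete, here is a minimal sketch in Python, assuming a hypothetical applications file and illustrative column names (age, bmi, smoker, accepted); it contrasts a rigid, predefined rule with a model that learns a similar decision from thousands of human-underwritten applications. It illustrates the learning distinction only and does not describe any actual underwriting system.

# Illustrative sketch: a fixed, hand-written rule vs. a system that learns
# its logic from historical data. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def rule_based_accept(age: int, bmi: float) -> bool:
    """A predefined rule the program follows rigidly; no learning involved."""
    return age < 60 and bmi < 35

# A learning system instead infers its decision logic from thousands of
# applications already underwritten by humans.
applications = pd.read_csv("underwritten_applications.csv")  # hypothetical file
features = applications[["age", "bmi", "smoker"]]
labels = applications["accepted"]  # 1 = accepted by a human underwriter

model = LogisticRegression()
model.fit(features, labels)  # the learning from data that pushes it toward AI

# The fitted model generalizes to applicants it has never seen.
new_applicant = pd.DataFrame([{"age": 42, "bmi": 27.5, "smoker": 0}])
print(rule_based_accept(42, 27.5), model.predict(new_applicant)[0])

The hand-written rule never changes unless a person edits it; the fitted model's behavior depends entirely on the data it observed, which is the property the governance discussion below turns on.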

Modern technical definitions go much further. For example, the National Institute of Standards and Technology (NIST) defines AI as:

  1. A branch of computer science devoted to developing data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement.
  2. The capability of a device to perform functions that are normally associated with human intelligence, such as reasoning, learning, and self-improvement.

These definitions incorporate additional capabilities traditionally considered exclusively human, namely reasoning and self-improvement, that allow machines to move beyond following hard-coded rules and learn from data much as humans learn from their environment.

Defining “reasoning” in this context will likely be a source of debate for years to come. Human-like reasoning was introduced with OpenAI’s o1 model (code-named “Strawberry”) and is a hot research topic in the field of AI. Unfortunately, without concrete definitions of how humans experience concepts such as reasoning, free will, and consciousness, it is even more difficult to define when a machine reaches one of these thresholds. In fact, not all philosophers and scientists agree that humans themselves have achieved these states of being.

How should insurers define AI?

When regulators take on the task of defining AI, they approach it from a different perspective than their technical counterparts:

  • Focus: Technical definitions emphasize the capabilities and technologies behind AI, such as machine learning and neural networks. In contrast, regulatory definitions focus on the societal impact and governance of AI systems.
  • Scope: Technical definitions often include specific technical traits and methodologies used in AI development. Regulatory definitions are broader, encompassing the potential risks and ethical considerations associated with AI deployment.
  • Purpose: For technical experts, the purpose of defining AI is to advance understanding and development of the technology. For regulators, the purpose is to create frameworks that ensure safe and ethical use of AI technologies.

Insurers cannot rely on a one-size-fits-all regulatory definition. Even within a single country, varying definitions of AI make it impractical for an insurer to simply adopt one. If the insurer is a global entity, the number of regulatory definitions only grows.

It is important for insurers to not only be mindful of the global regulatory environment but also develop their own AI definition to determine which processes should fall under the additional governance AI requires.

Fortunately, most insurers have a long track record of governing systems and models that influence business decisions, including actuarial models for pricing, valuation, asset-liability management, and other forms of risk assessment. As with these models, insurers should consider the following aspects of governance when defining AI (a brief illustrative sketch follows the list):

  • Materiality – Would failure of the model result in significant financial or reputational loss to the insurer?
  • Human in the loop – Are the model’s decisions subject to human review and approval, and to what degree is the model autonomous?
  • Legal ramifications – Do the model’s decisions carry legal consequences?
  • Learning from data – Was the model’s logic designed by an actuary, or was it learned by the computer from large amounts of data?
  • Human impact of decisions – Will the model’s decisions affect outcomes that are important to human customers?
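
As a minimal sketch only, with invented field names and an invented combination rule (drawn neither from the roundtable nor from any regulatory standard), the questions above could be recorded per model and used to flag which systems warrant heightened governance:

from dataclasses import dataclass

@dataclass
class ModelProfile:
    """Answers to the governance questions above for a single model (illustrative)."""
    material: bool             # failure could cause significant financial or reputational loss
    human_in_the_loop: bool    # decisions are subject to human review and approval
    legal_ramifications: bool  # decisions carry legal consequences
    learned_from_data: bool    # logic was learned from data rather than designed by an actuary
    affects_customers: bool    # decisions matter to human customers

def needs_heightened_governance(m: ModelProfile) -> bool:
    """Illustrative combination rule: data-driven logic plus any high-impact trait."""
    high_impact = m.material or m.legal_ramifications or m.affects_customers
    return m.learned_from_data and (high_impact or not m.human_in_the_loop)

# Example: a long-standing, actuary-designed pricing model
pricing_model = ModelProfile(material=True, human_in_the_loop=True,
                             legal_ramifications=True, learned_from_data=False,
                             affects_customers=True)
print(needs_heightened_governance(pricing_model))  # False under this illustrative rule

A checklist like this captures only the governance side; as the next paragraph notes, the technical definitions must be applied as well.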

It is also important to include the technical definitions. If we only consider the governance criteria above, actuarial models used for decades could be deemed AI systems, subjecting them to additional – and unnecessary – oversight.

For example, consider a simple program that enforces the US National Minimum Drinking Age Act of 1984:

IF age >= 21 THEN serve alcoholic beverages ELSE refuse to serve alcoholic beverages

This program makes a material decision that is important to a human client and carries definite legal ramifications. Additionally, this rule could be learned from observing data from bars and nightclubs. However, under most technical definitions, this program is not AI; it is a simple rule, such as those employed by many automated underwriting systems.
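
As a toy sketch, using synthetic observations and scikit-learn's DecisionTreeClassifier (an assumption added for illustration, not part of the original example), the same threshold can indeed be recovered from data:

# Toy sketch: "learning" the drinking-age rule from synthetic observations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
ages = rng.integers(16, 40, size=1_000).reshape(-1, 1)  # synthetic patrons aged 16-39
served = (ages >= 21).astype(int).ravel()                # 1 = served, 0 = refused

tree = DecisionTreeClassifier(max_depth=1)  # a single split
tree.fit(ages, served)

print(tree.tree_.threshold[0])  # roughly 20.5, i.e., the learned rule is age >= 21

Even when the threshold is learned from data, the resulting artifact is a single transparent split, which is why the technical definitions still matter in deciding what to treat as AI.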

AI in insurance is rapidly evolving, and a workable definition is necessary to guide further regulation. I was proud to join industry colleagues from around the country to help regulators take on this challenge, and I look forward to continuing this effort. By collaborating and considering all aspects of this technological revolution, we can take positive strides in defining not only AI but also the future of insurance.



Meet the Authors & Experts

Jeff Heaton
Vice President, Data Science, Data Strategy and Infrastructure