We find ourselves in a brave new world. Artificial intelligence is here and is permeating our lives more and more. Beyond privacy, the use of AI raises other issues, many of which impact insurance. One recent example of the challenges of AI is Tom Hanks’s report that ads for dental plans appearing to feature him were deepfakes created by AI.[1] This article focuses on some of those impacts.

Definition of AI
No perfect definition of AI exists. At its core, AI is computers and software replicating the decision-making processes of humans. In a 2004 paper, John McCarthy provided a good working definition:

“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”[2]

Computers are not humans, but like humans, they get better and better at the work through trial, error, and practice.

The goal of AI is for the system to learn, so that it can better address situations and questions that might arise. The benefits of AI include fast crunching and analysis of data, and accuracy once the AI program is “taught” how to process data and given appropriate rules.

History of AI
AI is not new. Early programs running on the first computers built in the 1950s could mimic some human behavior, such as playing games of chess. Popular culture took notice: the movie “2001: A Space Odyssey”[3] featured HAL 9000, a fictional supercomputer that played chess against an astronaut.

Back in the late 1980s, Lotus 1-2-3, the precursor to Excel, started to be used extensively by accountants. Some of those who had worked with enormous paper spreadsheets questioned the ability of computer macros to arrive at the right numbers, and some asked younger workers to check the computer’s work.

Alan Turing, the subject of a movie about his code-breaking work during World War II, wrote a paper in 1950, “Computing Machinery and Intelligence.”[4] In it, Turing posited that machines might be able to use available information and reasoning or logic to solve problems in a way similar to humans. Turing concluded, “Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” He then addressed numerous objections to his view, which was not that computers can think, but that programming would enable them to draw conclusions and reach results equal to or better than humans’.

One problem in the 1950s was that many computers did not store commands and results, so they could not learn or compare. That has changed in recent years.

In 1997, world chess champion Garry Kasparov was beaten in a match by IBM’s Deep Blue. Science fiction no longer controlled the narrative.

In 2011, IBM’s Watson won Jeopardy! But we should not be surprised: consider the development of the smartphone and the availability of information, data, and processing power. Processing power, once a limit, has increased substantially, with Moore’s Law holding for many years.

AI has been around and used in various circumstances for years; what has changed is the sophistication of AI programs and the proliferation of uses, including in insurance.

AI and Insurance
Insurers continue to be among the biggest users of big data in everyday practice, and they have long used AI, algorithms, and predictive models. In the early 2000s, for example, when I was at an insurer that was entering the direct auto insurance market, we discussed extensively the pricing points that one insurer’s black box used to price its policies, with something like 20,000 possible pricing points.

In addition, insurers have used insurance scores, credit scores, and other pricing predictors employing big data for many years. Today, many insurance departments discuss the filing of complex algorithms for various lines of insurance. 

AI Insurance Regulation
To date, numerous states have introduced a wide variety of bills that address AI (including on credit and insurance scoring), but Colorado is the only state that has enacted specific legislation. Colo. Rev. Stat. § 10-3-1104.9[5] regulates insurers’ use of external consumer data and information sources, as well as algorithms, to protect consumers from unfair discrimination in insurance rate-setting mechanisms.

In 2021, Colorado passed SB21-169, Protecting Consumers from Unfair Discrimination in Insurance Practices, which requires insurers to test their big data systems, including external consumer data and information sources, algorithms, and predictive models, to ensure they are not unfairly discriminating against consumers on the basis of a protected class. SB21-169 also requires insurers to take corrective action to address any consumer harms that are discovered. The new Colorado legislation requires insurers to:

i) outline what external customer data and information sources are being used by the insurer’s algorithms and predictive models;
ii) provide an explanation of how the external consumer data and information sources, and algorithms and predictive models are used;
iii) establish and maintain a risk management framework designed to determine whether the data or models unfairly discriminate;
iv) provide an assessment of the results of the risk management framework and ongoing monitoring; and
v) provide an attestation by one or more officers that the insurer has implemented the required risk management framework.[6]

On September 27, 2023, the Colorado Division of Insurance published a draft proposed regulation to implement the 2021 law. Entitled “Concerning Quantitative Testing Of External Consumer Data And Information Sources, Algorithms, And Predictive Models Used For Life Insurance Underwriting For Unfairly Discriminatory Outcomes,”[7] the proposed regulation would apply only to life insurers selling life products. It would address and quantify whether underwriting decisions are unfairly discriminatory based on applicants’ race or ethnicity, utilizing “BIFSG and the insureds’ or proposed insureds’ name and geolocation information.”[8] BIFSG stands for “Bayesian Improved First Name Surname Geocoding” and is “the statistical methodology developed by the RAND corporation for estimating race and ethnicity.”[9]
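For readers unfamiliar with the mechanics, BIFSG-style methods combine name and geolocation evidence through Bayes’ rule. The following is only a toy sketch of that style of updating; all categories and probability values are invented placeholders for illustration, not RAND’s actual tables, which are built from census surname, first-name, and geographic demographic data.

```python
# Toy sketch of BIFSG-style Bayesian updating. All numbers below are
# invented placeholders, not RAND's actual data.

def bifsg_posterior(surname_prior, firstname_lik, geo_lik):
    """Combine a surname-based prior with first-name and geolocation
    likelihoods via Bayes' rule, then normalize to a distribution."""
    unnorm = [s * f * g for s, f, g in zip(surname_prior, firstname_lik, geo_lik)]
    total = sum(unnorm)
    if total == 0:
        # Uninformative evidence: fall back to the surname prior.
        return list(surname_prior)
    return [u / total for u in unnorm]

# Hypothetical inputs over three placeholder categories.
prior = [0.6, 0.3, 0.1]   # P(category | surname), from a surname table
fname = [0.5, 0.4, 0.1]   # relative likelihood given first name
geo   = [0.2, 0.7, 0.1]   # relative likelihood given geolocation

posterior = bifsg_posterior(prior, fname, geo)
```

The point regulators care about is visible even in this sketch: geolocation evidence can overturn the surname-based prior, so the quality and provenance of each input table directly drive the estimate used to test for unfairly discriminatory outcomes.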

Colorado will not be the only state to enact legislation affecting insurers’ use of AI. In September 2023, the Pennsylvania legislature introduced House Bill 1663, which would mandate that health insurers disclose to health care providers, covered individuals, and the general public whether AI algorithms are used, are not used, or will be used in the insurer’s utilization review process.

The New York Department of Financial Services issued a circular letter, “Insurance Circular Letter No. 1 (2019),”[10] that addresses usage of data and information in the life insurance sector, notifying life insurers that the NYDFS had “the right to audit and examine an insurer’s underwriting criteria, programs, algorithms, and models, including within the scope of regular market conduct examinations, and to take disciplinary action, including fines, revocation and suspension of license, and the withdrawal of product forms.” NYDFS is now following up with various insurers to discuss further investigations regarding the circular letter.

In 2021, the Connecticut Insurance Department issued a circular to insurers.[11] The circular raised concerns about potential discrimination in usage of big data. In the bulletin, “Notice to all entities and persons licensed by the Connecticut Insurance Department concerning the usage of big data and avoidance of discriminatory practices,” the Connecticut Insurance Department wrote in part:

Having recognized the above, the Department would like to reiterate the potential for regulatory concerns with regards to the following general topics:

a. Internal data deployment: Insurers should be sensitive to how Big Data utilized as a precursor to or as a part of algorithms, predictive models, and analytic processes, including but not limited to, the purposes outlined above in items #1 and #2.

b. Internal data governance: How Big Data is governed throughout the precursor to its usage within the insurance industry, where such data resides and is used within the insurance industry, and how such data subsequently moves into industry archives, bureaus, data monetization mechanisms, or additional processes within or beyond the insurance ecosystem. The Department wishes to emphasize the importance of data accuracy, context, completeness, consistency, timeliness, relevancy, and other critical factors of responsible and secure data governance.

c. Risk management and compliance: How Big Data algorithms, predictive models, and various processes are inventoried, risk assessed / ranked, risk managed, validated for technical quality, and governed throughout their life cycle to achieve the mandatory compliance mentioned above.[12]

Connecticut followed up in April 2022 with, “Notice To All Entities And Persons Licensed By The Connecticut Insurance Department Concerning The Usage Of Big Data And Avoidance Of Discriminatory Practices,”[13] which stated in part:

“This Notice of the Connecticut Insurance Department (“Department”) is intended to remind all entities and persons licensed by the Department that the Department continues to expect such entities and persons to use technology and Big Data in full compliance with anti-discrimination laws and have completed the data certification, which shall be due on or before September 1, 2022, and annually thereafter.”[14]

California followed Connecticut with a similar bulletin, Bulletin 2022-5,[15] which warned insurers in that state:

To this end, insurance companies and other licensees must avoid both conscious and unconscious bias or discrimination that can and often does result from the use of artificial intelligence, as well as other forms of “Big Data” (i.e., extremely large data sets analyzed to reveal patterns and trends) when marketing, rating, underwriting, processing claims, or investigating suspected fraud relating to any insurance transaction that impacts California residents, businesses, and policyholders.

Although the responsible use of data by the insurance industry can improve customer service and increase efficiency, technology and algorithmic data are susceptible to misuse that results in bias, unfair discrimination, or other unconscionable impacts among similarly-situated consumers. A growing concern is the use of purportedly neutral individual characteristics as a proxy for prohibited characteristics that results in racial bias, unfair discrimination, or disparate impact. The greater use by the insurance industry of artificial intelligence, algorithms, and other data collection models have resulted in an increase in consumer complaints relating to unfair discrimination in California and elsewhere.[16]

In January 2023, the Louisiana Department of Insurance issued a circular, Bulletin 2023-01,[17] to “all authorized insurers and surplus lines insurers” about the use of crime scores, and how insurers must ensure they are treating individuals fairly.

The likelihood is that additional jurisdictions will continue to monitor algorithms, predictive models, and third-party big data tools that are designed to assist insurers in assessing various risks.

Usage of AI by Insurance Industry
The term “big data” has become a buzzword in the insurance industry. More and more insurers have staffed their teams with big data expertise, given the increasingly complex algorithms being used in underwriting.

Insurers are increasingly using big data and AI in underwriting, claims, and customer service functions. Uses of AI being implemented by insurers include:

  1. Risk assessment;
  2. Premium calculations;
  3. Fraud detection;
  4. Modeling;
  5. Claims processing;
  6. Property inspections.
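Several of the uses above, fraud detection in particular, often reduce in practice to scoring a submission against weighted features and routing high scores to human review. The sketch below is purely hypothetical; the feature names, weights, and threshold are invented for illustration, and real systems use far richer models.

```python
# Hypothetical rules-based fraud scoring: each flag present on a claim
# contributes a weight, and claims at or above a threshold are routed
# for human review. All names and values are invented for illustration.

FRAUD_WEIGHTS = {
    "claim_soon_after_policy_start": 0.4,
    "prior_claims_last_12_months": 0.3,
    "inconsistent_loss_description": 0.5,
}
REVIEW_THRESHOLD = 0.6

def fraud_score(claim_flags):
    """Sum the weights of the recognized flags present on a claim."""
    return sum(FRAUD_WEIGHTS[f] for f in claim_flags if f in FRAUD_WEIGHTS)

def route_claim(claim_flags):
    """Route a claim to automated processing or human review."""
    if fraud_score(claim_flags) >= REVIEW_THRESHOLD:
        return "human_review"
    return "auto_process"
```

Even this simple structure shows where regulators’ proxy-discrimination concern arises: if any feature correlates with a protected characteristic, the routing decision inherits that correlation.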

With increased usage comes increased concern about whether AI might discriminate or make assessments too readily. Litigation and regulatory frameworks are becoming more common in the big data and AI spaces.

Litigation Emerging
In July 2023, Cigna received notice of a lawsuit filed in California alleging that it had used an algorithm, rather than actual analysis, to deny health claims.[18] According to ProPublica, Cigna “has built a system that allows its doctors to instantly reject a claim on medical grounds without opening the patient file.”[19] Cigna created a system, PXDX, that applies an algorithm to submitted claims.

The matter at issue in the present national class action is whether Cigna complied with the California Insurance Code, which requires that each insurer “conduct and diligently pursue a thorough, fair and objective investigation” of claims.[20] Many other states have similar rules on claims reviews. A spokesperson for Cigna responded to the suit, stating, “PXDX is a simple tool to accelerate physician payments that has been grossly mischaracterized in the press. The facts speak for themselves, and we will continue to set the record straight.”[21]

In 2022, a class action was filed against State Farm alleging that Black homeowners have a harder time getting claims paid than white homeowners.[22] The suit arose after a hail storm in Illinois, when a Black woman had trouble getting the company to investigate and pay her claim. A study conducted by a third party showed that, due to fraud flags and the process in general, Black policyholders had to wait longer and clear additional hurdles to get paid on claims. Examples of artificial intelligence making similar assessments and discriminating are emerging, and the industry will need to take steps to ensure that such results do not flow from the use of AI.

Insurance Claims
In the legal industry, news emerged of a New York law firm submitting a brief to a court that had been prepared using ChatGPT. The filing cited several cases, and the underlying “opinions,” that turned out to be made up. The lawyer did not verify the work being submitted, which included what are known as “hallucinations.” What ramifications situations like these will have for professional liability policies remains to be seen, but claims will begin to be filed.

Another area emerging from the use of AI and the internet is media liability and copyright claims. Who owns the materials generated by ChatGPT and other generative AI platforms and programs? If those platforms scraped the internet to build their models, who must be credited? The questions are complex, and insurers will have to determine how best to address these emerging claims. In the Tom Hanks situation, what advertising injury and other claims might be asserted? Is there coverage? Time will tell.

One insurer, Munich Re, has offered an insurance product, aiSure,[23] that protects users of AI solutions if those solutions do not perform as planned.

The NAIC Steps In
The National Association of Insurance Commissioners (NAIC) established the Big Data and Artificial Intelligence (H) Working Group to address issues in this area.[24] One of its main charges is:

A. Research the use of big data and AI including ML in the business of insurance, and evaluate existing regulatory frameworks for overseeing and monitoring their use. Present findings and recommendations to the Innovation, Cybersecurity, and Technology (H) Committee including potential recommendations for development of model governance for the use of big data and AI including ML for the insurance industry.[25]

The working group meets frequently and has conducted several surveys on the information insurers are using in various lines of business and on their implementation of AI technologies, including machine learning.

With the proliferation of connected devices providing insurers with more and more information, and reducing actual human interaction on either side of the transaction, regulators are increasingly focused on the potential of AI, but also on the risks it presents.

AI will have a major impact on the insurance industry while presenting complex questions to consider. Regulators and the NAIC are increasingly focused on big data and AI, and insurers using such technology should expect deeper scrutiny and regulation in coming years. Practitioners will want to keep abreast of developments in AI and ML so they can advise their insurance industry clients more effectively on how to navigate the area.






[5] Colo. Rev. Stat. § 10-3-1104.9.


[7] file:///C:/Users/cotter/Downloads/DRAFT%20Proposed%20Algorithm%20and%20Predictive%20Model%20Quan.pdf.

[8] Id.

[9] Id.



[12] Id.


[14] Id.


[16] Id.





[21] Id.




[25] Id.