BIG DATA IS A BIG DEAL (FOR INSURERS, REGULATORS AND MAYBE PLAINTIFFS’ ATTORNEYS)

Admit it: you have a nagging feeling that you should learn more about the insurance industry’s use of big data and algorithms, but you don’t know where to begin.  Relax.  Here are six questions (and answers) to help you get started.

  1. What’s the big deal with big data and algorithms?  

    Insurers are collecting vast amounts of data and then using mathematical models called algorithms to make important decisions based on the data.  The data may come from a variety of sources, such as social media, telematics devices in cars, public records, motor vehicle reports, credit information and wearable devices (like Fitbits).  Some insurers develop their own algorithms, while others purchase algorithms from third-party vendors.

    The allure of big data and algorithms is that they enable insurers to discover previously unknown connections, patterns and trends, which can facilitate better (and faster) marketing, underwriting, rating and claims decisions and root out potential fraud.  Read that sentence again, because it explains why the industry is pouring millions of dollars into insurtech.

    All of this is pretty cool, but there are significant regulatory, litigation and reputational risks to keep in mind.  In particular, using big data and algorithms can reinforce and perpetuate discrimination in ways neither intended nor foreseen by your (or your client’s) data team.

  2. What do regulators think?

    Insurance regulators want to be supportive of innovation, but they also want to make sure that innovation doesn’t harm consumers or violate existing laws.  As Wisconsin Insurance Commissioner Ted Nickel explained at the massive InsureTech Connect conference held last fall, “From a regulator’s perspective, you need to understand how things work.  Your mind goes to dark places when you don’t understand.”

  3. What could possibly go wrong?

    Several things.

    Data can be incomplete, inaccurate or outdated.  It also can contain embedded bias in ways that are not always obvious.  For example, let’s say that we want to develop a computer algorithm that would identify potential presidential candidates scientifically and objectively, free of the biases and prejudices that can creep into human decision making.  A reasonable place to start would be to agree on some definition of success (like growing the economy, backing landmark legislation, keeping us safe, whatever).  We would then look at the data on past presidents, determine which presidents met our definition of success, identify characteristics that all or most of the successful presidents had in common, and use these predictors of success to identify potential candidates worthy of consideration.  Guess what?  No matter how we define success, the computer will almost certainly tell us to look for a white Protestant man, because of the historical biases embedded in our data.    
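
    Here is a minimal sketch of that example in Python.  The records and the crude “rule learner” are entirely hypothetical, invented for illustration; the point is simply that any attribute shared by every historical “success” survives as a predictor, no matter how success is defined.

```python
# A minimal, hypothetical sketch of the presidential-candidate example:
# every rule learned from history inherits the demographics of history.

# Toy historical records: demographics, one policy trait, and whether the
# president met our chosen definition of "success" (any definition will do).
past_presidents = [
    {"race": "white", "religion": "Protestant", "sex": "M", "governor": True,  "success": True},
    {"race": "white", "religion": "Protestant", "sex": "M", "governor": False, "success": True},
    {"race": "white", "religion": "Protestant", "sex": "M", "governor": True,  "success": False},
    # ... every row shares the same demographics, however "success" is defined
]

def learn_predictors(records):
    """Keep any attribute shared by ALL successful records -- a crude rule learner."""
    successes = [r for r in records if r["success"]]
    predictors = {}
    for key in successes[0]:
        if key == "success":
            continue
        values = {r[key] for r in successes}
        if len(values) == 1:                # attribute is constant among successes,
            predictors[key] = values.pop()  # so the "objective" rule demands it
    return predictors

print(learn_predictors(past_presidents))
# -> {'race': 'white', 'religion': 'Protestant', 'sex': 'M'}
# The demographic attributes survive because the historical data contains
# no counterexamples -- the bias is embedded in the data, not the math.
```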

    Sometimes the problem lies with the algorithm, rather than the data.  Algorithms may fail to produce results that are reliably accurate.  (Even an algorithm that is right most of the time will still be wrong for some consumers.)  Other algorithms are so complex that the developers may not be able to tell regulators or consumers how a particular decision was reached.  In addition, algorithms may base their decisions on race or other factors that cannot lawfully be taken into account.  (The challenge is that certain data that feeds into the algorithm could be highly correlated with race or another prohibited factor, but the correlation wouldn’t necessarily be evident without testing; the sketch below shows one simple form of such testing.)
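
    Here is one simple, hypothetical form that testing could take: measure how strongly a facially neutral rating input tracks a prohibited factor.  The numbers and the 0.7 threshold below are invented for illustration and are not a legal standard.

```python
# A minimal sketch (synthetic numbers) of proxy testing: measure how strongly
# a rating input is correlated with a prohibited factor.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation between two equal-length numeric series."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical per-applicant values: a facially neutral input (say, a
# territory-based score) alongside a protected-class indicator (0/1)
# available only in a controlled testing environment.
territory_score = [0.9, 0.8, 0.85, 0.3, 0.2, 0.25, 0.88, 0.15]
protected_flag  = [1,   1,   1,    0,   0,   0,    1,    0]

r = pearson(territory_score, protected_flag)
print(f"correlation = {r:.2f}")   # a value near +/-1 flags a potential proxy
if abs(r) > 0.7:                  # threshold is illustrative, not a legal standard
    print("Input is highly correlated with the protected factor -- investigate.")
```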

    These problems with data and algorithms could harm consumers, and the harm could be widespread.

 
  4. How has the National Association of Insurance Commissioners (NAIC) responded?

    The concerns articulated by Commissioner Nickel last year are precisely why the NAIC formed a Big Data Working Group.  The mission of the Working Group is “to assist state insurance regulators in obtaining a clear understanding of what data is collected, how it is collected, and how it is used by insurers and third parties in the context of marketing, rating, underwriting, and claims.”  The initial focus is on auto and homeowner’s insurance, but regulators have made it clear that they will soon look at other lines.  

    One of the first things the Working Group did was to research the existing laws addressing insurers’ use of consumer and non-insurance data, particularly as it relates to rating and claims handling.  Key findings included the following:
    • Insurers cannot refuse to insure or limit the amount of coverage available to an individual because of the sex, marital status, race, religion or national origin of the individual.
    • Rates can’t be excessive, inadequate or unfairly discriminatory.  (A rate is unfairly discriminatory if differences in price do not fairly reflect differences in expected losses and expenses; a toy illustration follows this list.)
    • Risk classifications cannot be based on the race, creed, national origin or the religion of the insured.
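
    To see what that rate standard means in practice, here is a toy Python illustration with made-up premiums and costs.  The figures and the 5% tolerance are ours, purely for illustration: if two classes have the same expected losses and expenses but different prices, the price differences do not reflect cost differences.

```python
# A toy sketch (made-up figures) of the "unfairly discriminatory" test:
# price differences between risk classes should track differences in
# expected losses and expenses.

risk_classes = {
    # class: (annual premium charged, expected losses + expenses)
    "A": (1200, 1000),
    "B": ( 600,  500),
    "C": ( 900,  500),  # priced like a riskier class despite identical costs
}

base_premium, base_cost = risk_classes["B"]
for name, (premium, cost) in risk_classes.items():
    price_rel = premium / base_premium
    cost_rel = cost / base_cost
    flag = "" if abs(price_rel - cost_rel) < 0.05 else "  <-- price does not reflect cost"
    print(f"class {name}: price relativity {price_rel:.2f}, cost relativity {cost_rel:.2f}{flag}")
```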

    The Big Data Working Group will consider whether additional consumer protections are warranted, but the key takeaway is that existing law provides regulators with the basic tools they need to address insurers whose data or algorithms run amok.

  5. What does the future hold?

    Regulators’ questions and concerns about big data and algorithms are not going away.  Sooner or later, the NAIC will come up with a way to protect consumers without overly stifling innovation.  And, if the regulators are slow to act, there’s a good chance that the plaintiffs’ lawyers will make some noise of their own.

    It’s possible that states will enact data privacy legislation that, while not specifically aimed at insurers, nevertheless could significantly impact their ability to collect and use consumer data.  The California Consumer Privacy Act of 2018 (signed into law on June 28) is a good example, as many are calling it the strictest online privacy law in the country.  Depending on how the mid-term elections shake out, it’s even possible that Congress could take up legislation (perhaps inspired by Europe’s GDPR or the latest Facebook revelation) that could be broad enough to impact insurers.  

  6. What should insurers do?

    First, know what the law requires and keep up with developments at the NAIC and elsewhere.  

    Second, take a hard look at your company’s use of data and algorithms.  Deficiencies in data and algorithms present regulatory, litigation and reputational risk, but can be difficult to detect.  We think there’s enough risk here that insurers should consider independent testing and validation of their data and algorithms to identify any problems before they come home to roost.

    Here are some of the questions that insurers should consider asking themselves:
    • Are we permitted to use the information that we’re collecting? 
    • Is our data accurate, complete, up-to-date and free of embedded bias?  
    • Does our algorithm produce reliably accurate results?
    • Do the results make sense?  Can we explain them in a way that regulators and consumers will understand?
    • Are we monitoring the performance of our algorithm to make sure that it continues to operate as intended?  (See the sketch after this list.)
    • Does our algorithm comply with the law?  In particular, does it include proxies for race or other factors that cannot lawfully be taken into account?
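
    On the monitoring question, here is one minimal sketch of what ongoing performance monitoring could look like.  The baseline accuracy, tolerance and outcome data are all invented for illustration; a real monitoring program would be far more extensive.

```python
# A minimal monitoring sketch (invented numbers): compare the algorithm's
# recent decision accuracy against the accuracy observed at deployment and
# alert when it drifts beyond a tolerance.

BASELINE_ACCURACY = 0.92  # assumed accuracy measured when the model was approved
TOLERANCE = 0.03          # illustrative drift threshold, not a standard

def check_drift(recent_outcomes):
    """recent_outcomes: list of (model_decision, actual_result) pairs."""
    correct = sum(1 for decision, actual in recent_outcomes if decision == actual)
    accuracy = correct / len(recent_outcomes)
    if accuracy < BASELINE_ACCURACY - TOLERANCE:
        print(f"ALERT: accuracy {accuracy:.2f} has drifted below baseline {BASELINE_ACCURACY:.2f}")
    else:
        print(f"OK: accuracy {accuracy:.2f} within tolerance")

# e.g., last month's claims-triage decisions versus adjuster-confirmed outcomes
check_drift([(1, 1), (0, 0), (1, 0), (1, 1), (0, 0), (1, 1), (0, 1), (1, 1)])
```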

    That’s probably enough for now, but stay tuned.  There’s surely more to come.