Abstract
Technological innovations in insurance rating allow increasingly large sets of data about insureds to be collected, analyzed, and turned into rates. Consumers’ concerns about artificial intelligence can translate into mistrust that their insurance rates accurately reflect the risk they present. In response, regulatory frameworks are being developed for testing these new rating algorithms. When people encounter the word “discrimination,” they often understand it to mean “unfair discrimination.” In the insurance context, however, “discrimination” is a neutral term.
This Article attempts to unpack the multiple, complex facets of the definition of unfair discrimination, and in particular proxy discrimination, as applied to insurance, even as the regulatory framework for insurers’ use of machine learning to set rates is being constructed. It draws comparisons across U.S. and international sources to frame the issue and its concepts. There may never be agreement on the definition of rate fairness in the context of personal insurance, but rates should be grounded in the insured’s likelihood of incurring losses. Before regulators and policymakers engage in an expensive and time-consuming effort to split rating factors into fair and unfair categories in the context of big data, artificial intelligence, and machine learning, the focus should be on ensuring that these new tools produce accurate rates.
Recommended Citation
Laura L. Arp, Unfair Discrimination Standards, Actuarial Fairness, and Insurers’ Use of Big Data, 102 Neb. L. Rev. 821 (2023).
Available at: https://digitalcommons.unl.edu/nlr/vol102/iss4/4