Might AI make auto insurance rates less fair?

What goes into the calculation of your car insurance premium? The underwriting and rate calculations auto insurers use are more discriminatory than you might expect. Studies have shown that across the country, low-income individuals and people of color are disproportionately affected by high insurance prices—even when they are safe drivers.

Maia Hinesley-Saunders

As a government-mandated expense, buying car insurance can be a necessary yet unaffordable burden for millions of working Americans. And prices are going up: According to the U.S. Bureau of Labor Statistics, the average urban consumer has spent almost 12 percent more on car insurance since last January. With costs skyrocketing, insurers need to be scrutinized for their price-raising algorithms. A typical driver should not pay a higher rate than a neighbor in a wealthier suburb, all other factors being equal. Individual drivers should be paying rates based on their own driving merit, not on their location, income, or race.

The methods insurers use to evaluate policyholders are often murky and discriminatory. With access to nearly limitless demographic data and statistics from myriad sources, insurance companies can draw on your personal information at will in their pricing determinations. Coupled with the growing prevalence of artificial intelligence, this leaves consumers increasingly at risk of exploitation. The addition of AI to underwriting methods only exacerbates a decades-long problem of opaque rate calculations in the industry. Auto insurance reform—especially regarding ZIP code risk calculations—is a crucial social justice issue that demands immediate attention from lawmakers.

Insurance companies use historical data to predict how likely an individual is to make a claim on their policy. The calculation of your insurance premium is done with an algorithm, or system of calculations, that considers factors like credit score, ZIP code, age, and even whether you rent or own your home. These risk assessment algorithms assign weight and value to each factor in the calculation of your premium. For instance, age is weighted more heavily for drivers under 25, whose rates are higher because they are less experienced and, statistically, more likely to be involved in motor vehicle collisions. A fair rate might consider limited driving telematics like acceleration and braking; relevant driving history; and certain data on car accidents in the area. Importantly, location tracking and assessing drivers based solely on their neighborhood verges on discrimination.
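
To make the weighting idea concrete, here is a minimal sketch in Python of how such a scoring algorithm could combine rating factors into a single premium. The factor names, weights, and base rate are illustrative assumptions, not any insurer’s actual model.

```python
# A minimal, hypothetical sketch of a weighted risk-scoring algorithm.
# Factor names, weights, and the base rate are illustrative assumptions,
# not any real insurer's model.

BASE_PREMIUM = 1200.00  # assumed annual base rate in dollars

# Hypothetical weights: how strongly each normalized factor loads the premium.
WEIGHTS = {
    "age_under_25": 0.30,       # youth/inexperience, weighted heavily
    "prior_accidents": 0.25,    # relevant driving history
    "hard_braking_rate": 0.15,  # telematics: braking events per 100 miles
    "zip_code_risk": 0.20,      # the contested location factor
    "credit_score_risk": 0.10,
}

def premium(factors: dict[str, float]) -> float:
    """Combine factor scores (0.0 = low risk, 1.0 = high risk) into a premium."""
    loading = sum(WEIGHTS[name] * score for name, score in factors.items())
    return BASE_PREMIUM * (1.0 + loading)

# Two drivers with identical records except for their ZIP code score:
driver_a = {"age_under_25": 0.0, "prior_accidents": 0.0,
            "hard_braking_rate": 0.1, "zip_code_risk": 0.1,
            "credit_score_risk": 0.2}
driver_b = dict(driver_a, zip_code_risk=0.9)  # lives in a "high risk" ZIP

print(f"Driver A pays ${premium(driver_a):.2f}")  # lower
print(f"Driver B pays ${premium(driver_b):.2f}")  # higher, solely due to ZIP
```

In this toy model, two otherwise-identical drivers diverge in price purely because of the ZIP code factor, which is exactly the mechanism at issue in the rest of this piece.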

This algorithmic risk assessment becomes problematic in its consideration of your place of residence, often captured by your ZIP code: the demographics, income level, and crime data of your area (among other things) are considered by your insurance company. Your ZIP code can effectively predict your rate. According to a Consumer Federation of America (CFA) report, in communities where at least three quarters of residents were African American, premiums averaged 71 percent higher than in comparable predominantly white communities. What’s more, in metropolitan areas with long histories of redlining and segregation, premiums charged to residents in predominantly non-white ZIP codes were nearly double those of largely white ZIP codes. Across New York State, drivers who live in predominantly non-white ZIP codes pay $1,728 more in average annual premiums.

The Rochester area’s history of redlining has resulted in significant ZIP code segregation; thus, Rochesterians of color and low-income community members are disproportionately at risk of insurance price-gouging.

When insurers designate a ZIP code as “high risk,” a negative feedback loop takes hold in which low-income community members end up paying ever-higher premiums. The community becomes systematically undesirable, prompting wealthier residents to leave for areas with lower rates. Because factors like credit score and demographics contribute to a “high risk” designation, this system disproportionately targets communities of color for price hikes.
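
As a rough illustration, consider the toy simulation below; the starting premium, the risk rating, and the 5 percent “elasticity” are invented numbers chosen only to show how repricing and out-migration can reinforce each other.

```python
# Toy simulation of the ZIP code feedback loop described above.
# All parameters are invented for illustration; no real insurer data is used.

BASE_PREMIUM = 1200.0  # assumed baseline annual premium in dollars
zip_risk = 0.5         # assumed starting risk rating for the area (0 to 1)

for year in range(1, 6):
    premium = BASE_PREMIUM * (1.0 + zip_risk)  # insurer reprices on area risk
    # As premiums climb, residents with means relocate, nudging the remaining
    # population's aggregate risk rating upward (assumed 5% elasticity on
    # premium growth above the baseline).
    zip_risk = min(1.0, zip_risk + 0.05 * (premium / BASE_PREMIUM - 1.0))
    print(f"Year {year}: premium ${premium:,.2f}, risk rating {zip_risk:.2f}")
```

Each pass through the loop nudges the area’s rating a little higher, and the premium with it: once the designation is made, the data offers the neighborhood no way back down.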

The actual algorithms insurers use are either completely opaque or kept largely concealed from the public and oversight agencies as proprietary secrets. A 2020 Consumer Reports study noted that Allstate explicitly kept the details of its “price optimization” algorithm out of many of its filings with state regulators. The ‘black box’ of ZIP code risk calculation is intentionally hidden to allow insurers great leeway in their pricing schemes. Without transparency into how a company evaluates ZIP code risk, lawmakers have no way to determine whether an algorithm violates fundamental antidiscrimination statutes.

Two potential aspects of risk calculation are cited as alternatives to the use of ZIP codes: AI telematics and crime rates. The introduction of AI-based driver metrics poses a potential risk to low-income drivers and people of color. In some cases, auto manufacturers have shared consumer telematics with insurers without the driver’s knowledge. These telematics devices monitor driving habits—like destination, acceleration, speed, and braking. With what we know about AI in other industries so far, it’s hard to say how the collected information is being used. Still, AI’s use of historical data to predict risk may well perpetuate racial bias and classism. Destination tracking, for example, can lead risk algorithms to increase premiums for drivers perceived to be frequenting “high risk” areas.

To counter this issue, the CFA released a report detailing key preventative measures that state legislatures could enact. Lawmakers should incorporate these suggestions—such as requirements that “insurers [must] demonstrate and explain the actuarial basis for the data to be collected and used”—as an initial framework to mitigate the dangers of AI and provide consumer transparency.

Further, telematics data has great potential to act as a rate equalizer: collecting data on individual braking and acceleration can be incredibly useful to insurers in building driver records. If insurers use telematics to determine rates, driving habits should be favored over perceived risks based on ZIP codes or credit scores. It is imperative, however, that specifics like driving destination and local income be excluded or regulated to prevent discrimination based on wealth as a proxy for race.
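
A minimal sketch, assuming hypothetical field names, of what a behavior-only telematics score could look like when location-linked fields are deliberately dropped before rating:

```python
# Hypothetical sketch of a telematics rating that scores driving behavior
# alone and deliberately excludes location and wealth proxies.
# All field names and normalization constants are assumptions.

EXCLUDED_FIELDS = {"destination", "zip_code", "local_income", "credit_score"}

def behavior_score(trip_record: dict) -> float:
    """Score a trip on driving behavior alone (0.0 = safest, 1.0 = riskiest)."""
    # Drop any location or wealth proxy before scoring.
    record = {k: v for k, v in trip_record.items() if k not in EXCLUDED_FIELDS}
    hard_brakes = record.get("hard_brakes_per_100mi", 0.0)
    rapid_accels = record.get("rapid_accels_per_100mi", 0.0)
    speeding_pct = record.get("pct_time_over_limit", 0.0)
    # Assumed normalization: cap each component at 1.0 and weight them evenly.
    return min(1.0, (min(hard_brakes / 20, 1.0)
                     + min(rapid_accels / 20, 1.0)
                     + min(speeding_pct, 1.0)) / 3)

trip = {
    "hard_brakes_per_100mi": 4.0,
    "rapid_accels_per_100mi": 2.0,
    "pct_time_over_limit": 0.05,
    "destination": "14611",  # collected by the device but excluded from rating
}
print(f"Behavior-only risk score: {behavior_score(trip):.2f}")
```

The point of the design is the exclusion list: whatever the device collects, the rating function can only see behavior, so neither destination nor any wealth proxy can move the price.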

While ZIP codes are known proxies for class and race, insurance companies attest that they depend on crime data, which is tied to ZIP codes, in their underwriting calculations. Theoretically, crime data is reasonable to use in rate calculation. However, as Cathy O’Neil decisively demonstrated in her 2016 book, “Weapons of Math Destruction,” crime statistics are often themselves the product of problematic algorithms. The feedback loop of heavily policed, poor areas generates data that draws police back to those same areas and cements their high-risk status. Thus, the use of either AI or crime data necessitates a greater degree of regulation by lawmakers and consumer vigilance.

Fortunately, New York is already taking steps to mitigate the bias and systemic discrimination arguably inherent in AI’s incorporation. A 2024 New York State Department of Financial Services report stated that insurers should monitor their external consumer data and information sources and artificial intelligence systems for “unfair and unlawful discrimination.” Similarly, the National Association of Insurance Commissioners (a non-governmental standard-setting organization) recommended an increase in “oversight of unregulated big data and vendors of algorithms currently used to establish pricing.” Additionally, rather than simply raising prices on high-risk policyholders, insurers could mandate defensive driving courses for those whose rates have become unaffordable.

As consumers, we can pressure our elected officials to pursue oversight of the insurance industry at all levels of government. Additionally, appropriating more funds to consumer protection agencies is a crucial step toward curbing illegal discriminatory practices. Consumers should have the right to understand how their ZIP code is being evaluated so that they can make informed decisions. Insurers must be transparent about their underwriting methods, especially their use of AI and ZIP codes. Stopping the unfair and opaque use of ZIP code data in risk calculations should be an immediate priority on the way to equitable pricing.

Further reading: The Opportunity Atlas; Colorado’s AI mitigation law

Maia Hinesley-Saunders is a Rochester native currently enrolled at Smith College.

The Beacon welcomes comments and letters from readers who adhere to our comment policy, including use of their full, real name. See “Leave a Reply” below to discuss this post. Comments of a general nature may be submitted to the Letters page by emailing [email protected]

6 thoughts on “Might AI make auto insurance rates less fair?”

  1. Having been plagued by these algorithms myself, this article spoke to me, and having it stated so cogently by Ms. Hinesley-Saunders as she is just preparing to launch her career gives me hope in our future. Keep investigating and asking questions. Keep writing. Keep growing. I can’t wait to see where this is going to take you!

  2. “A typical driver should not pay a higher rate than a neighbor in a wealthier suburb, all other factors being equal.”

    Are we talking comprehensive, liability, or both types of insurance?

    A well-stated argument for liability insurance, yet all other factors are not equal for comprehensive insurance. If I live in an area with high crime, the likelihood I will file a claim for damage to my vehicle, be it as a result of theft, vandalism, or a break-in, is greater than where there is less crime. Is it fair for someone living in a less risky area to pay for the risks someone faces in another?

  3. A very good question, and one that – with AI vs. algorithms – is harder to answer. An algorithm can be inherently and precisely analyzed and forms of bias identified. Not so much with LLMs, which just predict the most likely outcome based on training data. And that relies a LOT on what that data is and how the trainers apply it to the model – scoring any specific type of information higher or lower in priority and providing it with the information in the first place. AI is the RESULT of both an algorithm (the methodology used to train the model) and the core data that the algorithm acted on. So without transparency into both of these pieces – you have no idea what sorts of results are going to come out – less idea than with an algorithm alone. And that’s a core problem no matter what sort of AI you’re using.

    That said, the current government’s reluctance to impose transparency requirements on the algorithmic part of computing and risk analysis will almost certainly continue (especially with the current administration) into the future as it applies to AI. Our government has so far been reluctant to even protect our privacy from invasive corporate systems, much less regulate how algorithms might be applied to costs – because they know that any such transparency will reveal just how unfair those systems are to some. They hide it under “proprietary” rights and “improving competition,” but these systems affect all of us and our costs – auto insurance is a small part of that; health insurance pricing is also subject to this, energy costs, etc. AI has some very interesting and potentially beneficial uses – but without transparency it can also be used to hide unsavory activity.

  4. I am so impressed with this reporting work by Maia Hinesley-Saunders! She received the 2023 Hart-Thomas Scholarship that I established at Brighton High School, and this shows that she deserves every penny. I look forward to seeing more of her writing here and elsewhere.

  5. So great to see this article Maia! Your selection as the BSAA 2023 Hart-Thomas Scholarship recipient for promise as a writer was well founded. Keep up the excellent work.
    Steve Gaudioso, President
    Brighton Schools Alumni Association
