The Regulatory Clock on AI Claims Is Already Running

April 27, 2026 · Industry Insights

A storm is forming at the intersection of automated claims decisions, state insurance regulation, and consumer litigation. The F&I industry needs to be ready before it arrives, not after.

The lawsuits that shook the health insurance industry in 2023 and 2024 were easy to dismiss if you’re in automotive F&I. Different product. Different regulatory framework. Different patient stakes.

That’s a comfortable position to hold. It’s also wrong.

The legal and regulatory machinery that came down on health insurers for opaque AI-driven claims decisions is not staying in its lane. It’s moving, state by state, regulator by regulator, toward every corner of the insurance industry that uses algorithms to influence coverage outcomes. That includes vehicle service contracts. That includes GAP. That includes every ancillary product your dealers are selling in the F&I office today.

If you’re using AI in your claims process, or planning to, and you haven’t built the governance infrastructure to support it, you are accumulating regulatory and litigation risk right now. Quietly. Without a warning label.

What Happened in Health Insurance Is a Preview, Not an Outlier

In late 2023, class action lawsuits were filed against two of the largest health insurers in the country. The allegation in both cases was essentially the same: AI systems were being used to deny claims at scale, without meaningful human review, and without any transparency to the consumers being denied.

UnitedHealth’s nH Predict algorithm was alleged to have an error rate approaching 90% on the claims it was processing — and a federal court allowed the case to proceed in early 2025. Cigna’s PXDX system, according to a ProPublica investigation, enabled physicians to reject over 300,000 claims in two months, spending an average of 1.2 seconds per case. Sources: CBS News, November 2023; ZwillGen Legal, December 2025; Bloomberg Law, April 2025

These cases didn’t just expose bad actors. They exposed an assumption the entire industry had been operating under: that efficiency gains from AI automation were worth the governance shortcuts. Regulators disagreed. And they moved fast.

AI-related class action litigation more than doubled in 2024 versus the prior year. Source: Risk & Insurance, October 2025

The Regulatory Landscape Has Shifted — Permanently

In December 2023, the National Association of Insurance Commissioners adopted its AI Model Bulletin, the clearest signal yet that state regulators intend to hold insurers accountable for every AI-influenced claims decision. By March 2025, 24 states had adopted it. The direction of travel is unmistakable. Source: Quarles Law, March 2025

The bulletin isn’t a set of guidelines. It’s a governance framework with teeth. It requires documented oversight of every AI system used in claims, underwriting, or fraud detection. It requires explainability: the ability to show, for any given decision, what data drove the output and why. It requires bias testing. And it puts insurers on notice that they are responsible for third-party vendor AI systems, not just the ones they build in-house.

Individual states are moving further and faster than the national framework. New York’s DFS Circular Letter 2024-7 requires insurers to demonstrate that AI systems don’t proxy for protected classes and gives regulators the right to review vendor tools directly. Colorado expanded its AI discrimination statute to include private passenger auto insurance, effective October 2025. California’s SB 1120, in effect since January 2025, prohibits coverage denials based solely on automated tools without licensed human review. Sources: Buchanan Ingersoll & Rooney, October 2025; Fenwick, December 2025

At least 17 states introduced advanced AI-specific insurance bills in 2025 alone. The Senate voted 99-1 to preserve state authority to regulate AI, meaning this patchwork of state requirements isn’t going away. It’s growing. Source: Baker Tilly, August 2025

Waiting for enforcement is not a strategy. It’s an exposure.

Why F&I Administrators Are More Exposed Than They Realize

Here’s the part of this conversation that doesn’t get enough attention in the automotive space.

The regulatory scrutiny building around AI in insurance isn’t limited to health care because health care has the most visible human stakes. It’s building everywhere that algorithms influence whether a consumer gets paid on a claim they believe they’re entitled to. VSC claims. GAP claims. Claims on products sold by dealers to consumers who often don’t fully understand what they purchased and who, when denied, feel it acutely.

That’s a consumer protection profile that regulators are paying attention to. And F&I administrators who are running AI-assisted adjudication without documented governance frameworks, explainable decision logic, human review protocols for adverse outcomes, and bias testing are sitting on the same exposure that brought the health insurers into court.

The fact that VSC and GAP aren’t regulated identically to health insurance doesn’t mean they’re unregulated. Unfair Claims Settlement Practices Acts apply in every state. Consumer protection statutes apply. And class action attorneys are not waiting for a perfect regulatory hook before filing. They’re looking for patterns: high denial rates, opaque reasoning, consumers who couldn’t understand why they were denied. Those patterns exist in F&I just as they existed in health insurance.

92% of auto insurers report current or planned AI usage — but nearly one-third of insurers still do not regularly test their models for bias or discrimination. Source: NAIC Big Data and AI Working Group Survey, 2025

Governance Isn’t a Compliance Tax. It’s What Makes the AI Work Long-Term.

I want to push back on the framing that positions compliance as something that slows down AI adoption. That framing is backwards, and it leads to decisions that create real organizational risk.

The governance requirements that regulators are pushing (explainability, audit trails, human review for adverse decisions, bias testing) are also the requirements that make an AI claims system trustworthy enough to scale. You can’t run a tiered autonomy model on a black-box system. Your own adjusters won’t trust it. Your dealers won’t accept it. And now, your regulators won’t allow it.

The administrators who build explainable AI into their claims process from the start aren’t just managing regulatory risk. They’re building the infrastructure that makes high-volume, high-confidence automated adjudication possible. Every decision trace, every bias audit, every human review protocol: those aren’t compliance costs. They’re the architecture of an operation that can defend every decision it makes and scale without exposure.

Three Things to Do Before the Regulator Calls

You don’t need to overhaul everything at once. But there are three things that every F&I administrator using AI in claims should have in place right now.

A documented AI governance framework

Every AI system touching claims decisions should be catalogued: what it does, what data it uses, what decisions it informs or makes, who owns it, and how it’s monitored. This is the minimum the NAIC Model Bulletin expects and the minimum that protects you when a state examiner asks to review your AI practices.
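One way to make that catalog concrete is to treat each entry as a structured record. This is a minimal sketch only; the field names are illustrative and are not a schema prescribed by the NAIC Model Bulletin.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One catalog entry for an AI system that touches claims decisions.

    Field names are illustrative, not taken from any regulatory text.
    """
    name: str
    purpose: str                # what it does
    data_inputs: list           # what data it uses
    decision_role: str          # "informs" or "makes" decisions
    owner: str                  # accountable person or team
    monitoring: str             # how accuracy and bias are monitored
    third_party: bool = False   # vendor systems are in scope too

# A hypothetical inventory with one entry
inventory = [
    AISystemRecord(
        name="vsc-adjudication-assist",
        purpose="Recommends approve/deny on routine VSC claims",
        data_inputs=["contract terms", "repair order", "claim history"],
        decision_role="informs",
        owner="Claims Operations",
        monitoring="Quarterly bias audit, monthly accuracy review",
        third_party=True,
    ),
]
```

Keeping the inventory as data rather than a document means an examiner request can be answered with an export instead of a scramble.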

An explainable decision record for adverse outcomes

Any claim that’s denied or modified by an AI-assisted process needs a decision trace that can be shown to a regulator or reviewed in litigation. Not just a denial code — an actual record of what the system saw, what rules it applied, and what the reasoning was. If you can’t produce that, you don’t have a defensible AI claims process. You have liability.
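As a sketch of what such a trace might contain, the record below captures inputs, rules, outcome, and reasoning in a serializable form. The shape and field names are assumptions for illustration, not a regulatory schema.

```python
import json
from datetime import datetime, timezone

def build_decision_trace(claim_id, inputs, rules_fired, outcome, reasoning):
    """Assemble an auditable record of an AI-assisted adverse decision.

    Illustrative shape only: enough to show what the system saw,
    what rules it applied, and why it reached its outcome.
    """
    return {
        "claim_id": claim_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_seen": inputs,          # what the system saw
        "rules_applied": rules_fired,   # which rules fired, in order
        "outcome": outcome,             # e.g. "denied", "modified"
        "reasoning": reasoning,         # human-readable explanation
    }

# Hypothetical denied VSC claim
trace = build_decision_trace(
    claim_id="CLM-1042",
    inputs={"component": "transmission", "mileage": 92450,
            "coverage_limit_miles": 90000},
    rules_fired=["mileage_exceeds_coverage_limit"],
    outcome="denied",
    reasoning="Odometer reading of 92,450 exceeds the 90,000-mile coverage limit.",
)
print(json.dumps(trace, indent=2))  # producible on demand for an examiner
```

The point is not the exact fields but that the record is complete and machine-readable, so it can be produced years later without reconstructing anything.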

A human review protocol that’s real, not performative

Regulators are specifically looking for whether human oversight is meaningful or theater. The Cigna case was partly about physicians approving AI denials in 1.2 seconds on average; technically human review, practically none. Design your review workflows so that the human is evaluating the AI’s reasoning, not rubber-stamping it.
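One way to make review meaningful by construction is to gate adverse outcomes behind checks the workflow enforces. The threshold and record shape below are illustrative assumptions, not a standard.

```python
MIN_REVIEW_SECONDS = 60  # illustrative floor; 1.2-second approvals would fail it

def finalize_adverse_decision(ai_recommendation, review):
    """Gate an AI denial behind human review that the system can verify.

    `review` holds the time the reviewer spent and their written rationale.
    Both checks are sketches of the idea, not a compliance standard.
    """
    if ai_recommendation != "deny":
        return ai_recommendation  # only adverse outcomes need the gate
    if review.get("seconds_spent", 0) < MIN_REVIEW_SECONDS:
        raise ValueError("Review too brief to be meaningful; escalate")
    if not review.get("rationale"):
        raise ValueError("Reviewer must record independent reasoning")
    return "deny"

# A denial finalizes only with documented, non-trivial review
decision = finalize_adverse_decision(
    "deny",
    {"seconds_spent": 180, "rationale": "Verified the wear-item exclusion applies"},
)
```

A gate like this also produces its own evidence: every finalized denial carries proof that a human spent real time and wrote real reasoning.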

The regulatory environment around AI in claims is not coming. It’s here.

How PCMI’s Claims Intelligence Was Built for This Moment


PCMI didn’t build Claims Intelligence as a bolt-on feature. It was designed from the ground up around the exact governance requirements that regulators are now demanding, because we knew this moment was coming before most of the industry started paying attention.

Claims Intelligence is built on three pillars that work together to deliver speed, control, and full auditability across every claim your operation touches.

1 Automation IQ

Automation IQ is the rules engine at the center of it. Every authorization, payment, and denial decision is driven by configurable, product-specific logic, built and managed centrally by your team without code changes or redeployment. You control the guardrails. The system executes within them. And every decision it makes is logged with the reasoning behind it, reviewable at any point by your team or an external examiner.
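The pattern described here (rules as configurable data, every evaluation logged) can be sketched in a few lines. This is a generic illustration of the rules-engine idea, not PCMI’s actual implementation; the rule names and claim fields are invented.

```python
def evaluate_claim(claim, rules):
    """Run a claim through configurable rules, logging each rule checked.

    Each rule is (name, predicate, outcome_if_matched). Because rules are
    data, the business team can change them without touching engine code.
    """
    log = []
    for name, predicate, outcome in rules:
        matched = predicate(claim)
        log.append({"rule": name, "matched": matched})
        if matched:
            return outcome, log  # first matching rule decides; log retained
    return "refer_to_adjuster", log  # nothing matched: default to a human

# Hypothetical product-specific rules for a VSC claim
rules = [
    ("over_coverage_limit", lambda c: c["mileage"] > c["limit_miles"], "deny"),
    ("low_dollar_auto_approve", lambda c: c["amount"] <= 500, "approve"),
]

decision, log = evaluate_claim(
    {"mileage": 45000, "limit_miles": 90000, "amount": 320}, rules
)
```

The log is the important part: even an approved claim carries a record of every rule that was checked, which is exactly what makes the decision reviewable later.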

That’s not just compliance posture. It’s architecture that lets you scale automation without losing control. When a regulator asks why a claim was denied, your team has an answer. When an adjuster questions a decision, they can see exactly what the system evaluated. Transparency isn’t an afterthought; it’s how the platform works.

The impact numbers are real. Across payment automation alone, Claims Intelligence delivers $90K in annual cost savings and 5,000 hours reclaimed per 100,000 claims, with $0.90 saved per claim and a 3x productivity boost. On authorization, the numbers go further: $210K in annual savings, 11,000 hours reclaimed, and an 85% reduction in authorization processing time per 100,000 claims.

$300K+ combined annual savings per 100K claims across payment and authorization automation

16K hrs reclaimed annually per 100K claims — returned to complex, high-value work

85% reduction in authorization processing time

3x productivity boost on routine claims processing

Source: PCMI Claims Intelligence product data, pcmicorp.com

2 Insights IQ

Insights IQ — coming soon — takes Claims Intelligence further. Built on what will be the industry’s richest F&I dataset, it converts historical claims outcomes into predictive intelligence: real-time trend detection, risk surfacing, and decision support that gets smarter with every claim processed on the platform. This is the layer that turns compliance into competitive advantage.

3 Workforce IQ

Workforce IQ — also coming — closes the loop by routing claims intelligently based on complexity, adjuster expertise, and real-time workload. The right claim reaches the right person. SLAs are protected. And your best adjusters spend their time on the work that actually needs them.
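The routing idea (match complexity to expertise, then balance load) can be sketched simply. This is a generic illustration under assumed criteria; Workforce IQ’s actual routing logic is not described in this article.

```python
def route_claim(claim, adjusters):
    """Send a claim to a qualified adjuster with the lightest queue.

    Illustrative criteria only: complexity vs. expertise, then workload.
    """
    qualified = [a for a in adjusters
                 if claim["complexity"] <= a["max_complexity"]]
    if not qualified:
        raise ValueError("No qualified adjuster available; escalate")
    return min(qualified, key=lambda a: a["queue_depth"])["name"]

# Hypothetical adjuster pool
adjusters = [
    {"name": "senior", "max_complexity": 5, "queue_depth": 7},
    {"name": "junior", "max_complexity": 2, "queue_depth": 3},
]
```

A routine claim lands with the lightly loaded junior adjuster; a complex one goes to the senior adjuster even though their queue is deeper, which is the “right claim to the right person” behavior described above.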

See It in Action

If you want to see what a governance-first, audit-ready AI claims platform actually looks like in practice, Claims Intelligence is purpose-built to show you. We’ll walk through your current workflow and show you exactly where automation can accelerate your operation without creating the exposure your competitors are quietly accumulating.

The best compliance posture is the one built before the examination. Claims Intelligence was designed to be exactly that.