
Are Your Settlements Being “AI-Washed”?


In the wake of rampant, bot-scale fraud on class action settlements, a single buzzword has come to dominate the conversation: AI. Promises of "AI-driven fraud detection" to protect settlement funds are now everywhere. But under the hood, are these just “AI-washed” services that deploy Large Language Models (LLMs) in a way that creates legal and ethical liability for the attorneys who hire them? Or are they the same old legacy, manual fraud reviews that serve little to no purpose in the age of big data?

The "Black Box" and the Deposition Trap

Large Language Models are impressive at drafting emails, but they were never designed for the precision of legal administration. When an LLM is used to generate a "fraud confidence score" or determine class eligibility, it is operating as a non-deterministic tool. LLMs are probabilistic; they don't follow a fixed, repeatable logic path, so the same claim run through the same model twice can produce two different answers.
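To make that distinction concrete, here is a minimal Python sketch. The `llm_fraud_score` function is a hypothetical stand-in for a sampled model call, not any vendor's real API; unseeded randomness mimics token sampling:

```python
import random

# Hypothetical stand-in for a sampled LLM call, not a real API.
# Unseeded randomness mimics sampling from a token distribution.
def llm_fraud_score(claim_text: str) -> float:
    return round(random.uniform(0.0, 1.0), 2)

# A deterministic rule always returns the same answer for the same input,
# and can state exactly which condition drove the decision.
def rule_based_check(claim: dict) -> tuple[bool, str]:
    if claim["postmark_date"] > claim["deadline"]:  # ISO dates compare lexically
        return False, "R-017: claim postmarked after the filing deadline"
    return True, "all filing rules satisfied"

claim = {"postmark_date": "2025-07-02", "deadline": "2025-07-01"}
print(llm_fraud_score("claim text"), llm_fraud_score("claim text"))  # likely differs between calls
print(rule_based_check(claim))  # identical output on every run
```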

This creates a "Black Box" problem in the courtroom: if a legitimate plaintiff is denied their settlement benefits, how does lead counsel defend that decision? You cannot depose an LLM. You cannot ask a "black box" algorithm to explain its logic to a judge or a special master.

In modern litigation, every denial must have a "receipt": a deterministic audit trail. If your solution can’t provide the "Why" behind a "No," the settlement's defensibility is at risk.
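What that audit trail can look like in practice: in the sketch below (rule IDs and claim fields are invented purely for illustration), every rule a claim fails is recorded on the decision itself, so the "Why" travels with the "No":

```python
from dataclasses import dataclass, field

# Illustrative only: rule IDs and claim fields are hypothetical.
@dataclass
class Receipt:
    claim_id: str
    decision: str = "APPROVED"
    rules_fired: list[str] = field(default_factory=list)

def adjudicate(claim: dict) -> Receipt:
    receipt = Receipt(claim_id=claim["id"])
    # Each check is a fixed, documented rule; the receipt records
    # every rule that failed, so the denial explains itself.
    if claim["purchases"] == 0:
        receipt.rules_fired.append("R-101: no qualifying purchase on record")
    if claim["address_verified"] is False:
        receipt.rules_fired.append("R-204: mailing address failed verification")
    if receipt.rules_fired:
        receipt.decision = "DENIED"
    return receipt

print(adjudicate({"id": "C-0001", "purchases": 0, "address_verified": True}))
# Same input, same decision, same receipt, every single time.
```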

Violating State Laws: The ADMT Red Line

The move toward automated "black box" denials isn't just a technical flaw; it's increasingly a legal violation. Recent Automated Decision-Making Technology (ADMT) rules in states like California and Colorado mean that if an automated system is used to make "consequential decisions," such as the denial of a financial claim, the operator must provide transparency: a plain-language explanation of the logic and a meaningful path for human appeal.

AI-washed systems that hide behind LLM-generated algorithms to reject claimants are likely in violation of these consumer protection laws, potentially exposing lead counsel to professional liability and fee cuts.

Why Data Science Beats LLMs for Fraud

The irony of the LLM hype is that LLMs aren't even the best tool for stopping fraud. At Covalynt, we’ve found that the only way to beat bot-scale fraud is with Deterministic Data Science, not probabilistic language models.

  • Identity Resolution vs. Inference: Instead of "guessing" whether a claim is valid based on how a person types, we use persistent historical data and digital identity markers to tie a claim to a real person (see the sketch after this list).
  • White Box Logic: Our systems provide a "receipt" for every decision. Every approval or denial is based on repeatable, documented logic that is 100% defensible in a courtroom.
  • The "Expert Hour": We use automation to handle the administrative tax of sorting data and verifying identities so our human experts can focus on the complex edge cases that require empathy and judgment.

The Bottom Line

Absent class members deserve a process that is fair, and attorneys deserve a process that is defensible. AI-washing is a strategy of desperation—an attempt to mask 1990s workflows with 2026 jargon.

At Covalynt, we don't believe in or use "Black Box" tools. We use white box logic, fueled by substantial data sets that we build and maintain, ensuring that every decision is grounded in data science, not linguistic "inferences."