Knowledge.
Experience. Results.

AI review of ERISA claims is already a problem

On Behalf of | Dec 4, 2023 | ERISA

We recently covered the use of AI in reviewing ERISA claims. (Spoiler alert: we were decidedly skeptical.) Stories are already popping up about insurers using AI to save money rather than help paying customers with their disability claims. Well, add another story to the pile.

AI doesn’t know what it doesn’t know

For some reason, it seems extra disappointing when even doctors have trouble locking down their long-term disability benefits. In Patrick v. Reliance Standard Life Ins. Co., the plaintiff, a former gastroenterologist, received benefits for ten years without issue. Then, one day, Reliance decided the plaintiff could return to her job. More accurately, Reliance's AI case-review tool decided she could return to her job.

Job reclassification, in which insurers redefine a claimant's occupation to get them off benefits and back to work, is a notoriously dubious facet of ERISA case reviews. It often involves a level of creativity reserved for speculative fiction writers. An off-the-cuff example would be sending an injured football offensive lineman back to work as the team's kicker. It's the same industry, but no sane person would argue that the jobs are comparable.

The medical equivalent happened to the plaintiff when the AI determined that "gastroenterologist" and "internal medicine specialist" were the same job. Anyone who has seen the procedures a gastroenterologist performs each day would recognize the error immediately. If your general practitioner decided to wing it and perform your colonoscopy, you'd be rightly apprehensive. Unfortunately, AI logic determined that since the digestive system is internal, the gastroenterologist could pivot and perform the duties of an internal medicine doctor with no additional training.

When the (human) district court judge read the case, common sense prevailed, and the judge ordered Reliance to resume paying benefits. The appeals court upheld the district court's ruling so quickly and decisively that it didn't even bother to publish the opinion.

Any claim touched by AI should be suspect

It's still early days for AI technology, but thus far the implementation of these tools in sophisticated and nuanced fields like medicine has not gone well. Moreover, humans direct these AI tools, and if there's one thing you can bet on in life, it's that insurers' primary goal is to make shareholders happy, not to help their customers.

Anyone who has had a claim denied should investigate whether AI was involved at any stage of their case review. If it was, they may have a strong basis for a lawsuit.