Barely an hour passes these days without a company giddily announcing a new AI partnership to improve its performance. But does AI, given its current state and limitations, belong in long-term disability benefits, a realm where crucial financial security and even people’s lives may be at stake?
Prudential’s announcement
Prudential’s press release is all fairies and rainbows, as one would expect. It goes to great lengths to explain how using AI to process insurance claims applications will improve the experience for everyone involved, chiefly by streamlining the most manual and tedious stages of the work.
Much like large language models that write stories, tell jokes and offer ideas for pie recipes, if AI is fed enough long-term disability data, it could theoretically be an effective first line of defense for the massive number of applications that insurance companies must manage. But that’s a monstrous “if,” one with potentially life-altering or even life-threatening ramifications for applicants in need of critical treatments.
The press release stresses how “blending human touch with advanced technology” will revolutionize the industry for the better. Unfortunately, we’ve already written about the shameful downsides of incorporating AI into claims reviews, after a ProPublica investigation revealed that insurance provider Cigna was using the technology to bulk-reject claims at a rate of 50 every ten seconds.
What should applicants expect?
While there is no supporting data available yet on the effectiveness or feasibility of machines evaluating someone’s capacity to work, people who have seen the ugly side of the ERISA application and appeals process have cause for concern. The repeated references in Prudential’s press release to facilitating applicants’ recovery and return to work certainly sound appealing, but they presume that insurance providers have a role in the recovery of a sick or injured person in the first place. That’s work their primary care and specialty doctors should be doing.
We see countless cases where insurance providers deny everything from long-term benefits to asthma inhalers on the strength of a review by an in-house doctor who has never examined or even spoken to the applicant. In some instances, these doctors have no training in the medical field they’re tasked with reviewing.
Now imagine AI performing the same tasks. Prudential’s proposed model seems primed for misdiagnoses and invalid denials: an application could be rejected simply because the applicant didn’t include a handful of important keywords the AI needed to approve it. Initial reviews may go faster, but the resulting flood of appeals could outweigh any time savings, to say nothing of the delay to desperately needed benefits. AI technology may, in the distant future, be able to reliably review ERISA claims and genuinely streamline the process for insurance providers, who could then devote more time to applications that require human judgment.
For the moment, however, all evidence suggests that AI technology lacks the sophistication to do the work Prudential says it can do. What’s more likely is that profit-minded humans will direct the technology to save the company money, not to help those in need.