UnitedHealthcare CEO Allegedly Deployed Flawed AI to Deny Customers Coverage

Written by Matt Milano

In the wake of UnitedHealthcare CEO Brian Thompson’s murder, additional details are coming to light, including the company’s deployment of an allegedly flawed AI tool that wrongly denied customers coverage.

Thompson’s murder on December 4 sent shockwaves throughout the industry and the nation as a whole. The killing, carried out as an assassination-style hit, was widely believed to be a response to the health insurance industry’s longstanding practice of denying customers care, with horror stories of people dying because they couldn’t get the care they had paid for through years, even decades, of insurance premiums.

Journalist Ken Klippenstein pointed to screenshots of responses to Thompson’s last LinkedIn post as examples of the alleged hypocrisy of the insurance industry, and UnitedHealth in particular.

Interestingly, as first spotted by Futurism, a class-action lawsuit against UnitedHealth shows just how culpable the company may be for customers who were denied coverage. The lawsuit accuses the company of deploying an AI model “to wrongfully deny elderly patients care owed to them under Medicare Advantage Plans by overriding their treating physicians’ determinations as to medically necessary care based on an AI model that Defendants know has a 90% error rate.”

Despite the high error rate, Defendants continue to systemically deny claims using their flawed AI model because they know that only a tiny minority of policyholders (roughly 0.2%) will appeal denied claims, and the vast majority will either pay out-of-pocket costs or forgo the remainder of their prescribed post-acute care. Defendants bank on the patients’ impaired conditions, lack of knowledge, and lack of resources to appeal the erroneous AI-powered decisions.

The fraudulent scheme affords Defendants a clear financial windfall in the form of policy premiums without having to pay for promised care, while the elderly are prematurely kicked out of care facilities nationwide or forced to deplete family savings to continue receiving necessary medical care, all because an AI model ‘disagrees’ with their real live doctors’ determinations.

The lawsuit then goes on to point out the alleged hypocrisy between what UnitedHealth touts as its mission and how it actually treats customers.

Defendants state that their “mission” is “to help people live healthier lives and make the health system work better for everyone.” In reality, Defendants systematically deploy an AI algorithm to prematurely and in bad faith discontinue payment for healthcare services for elderly individuals with serious diseases and injuries. These healthcare services are known as post-acute care.

Defendants’ AI Model, known as “nH Predict,” determines Medicare Advantage patients’ coverage criteria in post-acute care settings with rigid and unrealistic predictions for recovery. Relying on the nH Predict AI Model, Defendants purport to predict how much care an elderly patient ‘should’ require, but override real doctors’ determinations as to the amount of care a patient in fact requires to recover. As such, Defendants make coverage determinations not based on individual patients’ needs, but based on the outputs of the nH Predict AI Model, resulting in the inappropriate denial of necessary care prescribed by the patients’ doctors. Defendants’ implementation of the nH Predict AI Model resulted in a significant increase in the number of post-acute care coverage denials.

AI Without Oversight or Context

While the use of AI in the insurance industry is common and can provide valuable insights, UnitedHealth is accused of using it in a way that fails to account for the unique circumstances of individual patients and pushes for outcomes that are unrealistic at best, impossible at worst. As a result, patients begin receiving payment denials much sooner than they should, leaving them to either pay out-of-pocket or forgo needed treatment.

Defendants wrongfully delegate their obligation to evaluate and investigate claims to the nH Predict AI Model. The nH Predict AI Model spits out generic recommendations that fail to adjust for a patient’s individual circumstances and conflict with basic rules on what Medicare Advantage plans must cover.

Upon information and belief, the nH Predict AI Model applies rigid criteria from which Defendants’ employees are instructed not to deviate. The employees who deviate from the nH Predict AI Model prediction are disciplined and terminated, regardless of whether the additional care for a patient is justified.

Under Medicare Advantage Plans, patients who have a three-day hospital stay are typically entitled to up to 100 days in a nursing home. With the use of the nH Predict AI Model, Defendants cut off payment in a fraction of that time. Patients rarely stay in a nursing home more than 14 days before they start receiving payment denials.

Upon information and belief, the outcome reports generated by nH Predict are rarely, if ever, communicated with patients or their doctors. When patients and doctors request their nH Predict reports, Defendants’ employees deny their requests and tell them that the information is proprietary.

Thompson’s Murder a Turning Point

While violence and murder are never the answer, Thompson’s murder appears poised to be a major turning point for the industry, one in which CEOs and executives fear real-world, deeply personal consequences for the seemingly removed, abstract decisions they make that nonetheless impact people’s lives.

Already, some health insurance companies are removing information about their executives from their websites, in a move designed to prevent another Thompson-style murder.

Similarly, Anthem Blue Cross Blue Shield reversed a deeply unpopular decision to cut off payment for surgical anesthesia the moment predefined time limits for a given surgery were reached. In other words, if a surgery was estimated to take three hours but complications forced it to run an hour longer, the insurance company would pay for only the original three hours of anesthesia, leaving the patient to foot the bill for the rest. Fortunately, the backlash from the announcement was severe enough to force Anthem to backtrack, but it’s telling that the company thought it could get away with such a policy in the first place.

The entire UnitedHealth/Thompson debacle illustrates why there needs to be better regulation to force insurance companies to treat their customers as human beings and provide the coverage those customers are paying for. The company’s use of AI also underscores the need for companies to ensure they deploy AI models in an ethical and legal manner.
