AI or A Crime?

Lawsuits are pending as health insurance companies use artificial intelligence to deny patient care
With AI becoming more mainstream, ethical dilemmas arise as healthcare companies use algorithms to make patient care decisions.
Hannah Kilian

With recent controversy surrounding AI’s ability to replace jobs and help students cheat, 12 states have passed laws to increase research into the power and potential ramifications of AI. But clear, ongoing issues surrounding the use of AI in healthcare remain unresolved. Cigna and UnitedHealthcare, both large health insurance providers, are currently being sued for using AI algorithms to deny claims. As patients’ lives hang in the balance, legislators must work quickly to put restrictions on the use of AI in healthcare before more patients suffer.

The lawsuit against Cigna reveals just how blatant some of these violations may be: over a two-month period in 2022, patient claims were reviewed for an average of 1.2 seconds each. With no time to read patients’ charts, doctors aren’t really the ones reviewing these claims; instead, patients’ fates are left in the “hands” of artificial intelligence algorithms. The lawsuit against Cigna, a national company, was filed in the U.S. District Court in Sacramento. Considering that California state law requires healthcare providers to perform a thorough, objective review of each patient’s claim, this 1.2-second figure should have raised eyebrows within the company. However, it seems that questions about the ethics of these algorithms may be overshadowed by companies’ desire to increase profits.

In 2022, UnitedHealth Group reported 28.4 billion dollars in profit. Given that UnitedHealthcare’s mission statement is to “make the health system work better for everyone,” and given its overwhelming profitability, eliminating a tool that may deny patients lifesaving care without human oversight would be a step in the right direction. nH Predict is UnitedHealthcare’s proprietary algorithm, designed to predict patient care needs and inform coverage decisions. The lawsuit against UnitedHealthcare alleges that 90 percent of nH Predict’s claim denials were incorrect, often overriding doctors’ opinions about necessary patient care. Although UnitedHealth maintains that nH Predict is not used to deny care, this alleged error rate is alarming and may point to the use of AI algorithms in a manner similar to Cigna’s, which allowed doctors to deny claims without reviewing patient records.

One patient’s experience reveals how unfairly denied claims impact people who are already suffering. Judith Sullivan, a 78-year-old woman recovering from surgery in a Connecticut nursing home, was denied coverage by her Medicare Advantage plan, provided by UnitedHealthcare. Although she could not walk more than a few steps without help, her insurer determined that she was well enough to return home.

As more patients like Sullivan come forward with accounts of how AI algorithms like nH Predict have led to unethically denied care, legislators will hopefully step in to enact more restrictions on the use of AI. Until then, consumers should ask their healthcare providers whether AI algorithms are used in the claim-decision process, and switch providers if necessary. But since companies may not be entirely forthcoming about their use of AI in claim decisions, and since switching providers (especially when Medicare is involved) can be complex and expensive, many will have to wait patiently for federal legislators to restrict AI use.

Although AI may be a helpful tool for revising essays or drafting emails, using algorithms to determine patient care without physician oversight is incredibly dangerous and must be regulated. While the consequences of misusing AI in an academic setting may be an incorrect assignment or a reprimand for plagiarism, these pale in comparison to the human suffering that occurs when doctors are replaced with AI. As consulting firms continue to push AI as a cost-effective solution for the healthcare industry, legislation must be passed to regulate AI as quickly as it is being misused.