Why Is Shitty, Dangerous Health AI Being Pushed by Multi-Billion-Dollar Conglomerates?
In April 2020, amid the pandemic’s chaos, Cigna passed on my pitch for an AI solution that could’ve aided thousands of COVID patients. Their reason? They’d just onboarded 200 data scientists to build their own, supposedly “great”, solution in-house. By year’s end, they had dubbed this expanded team ‘Evernorth’, focused on “health service solutions”. Fast forward 3.5 years, and the bitter truth: Cigna has yet to deliver a single AI product, let alone anything for COVID patients.
During this time, my team and I were knee-deep in developing the WellAI AI Medical Research Tool, collaborating with genetics researchers worldwide and leveraging the tool to tackle questions beyond human capabilities. Its unique strength, unmatched by any other machine or human, was guiding medical researchers to the focus areas most critical in the fight against COVID: whether you were a genetics scientist, a behavioral psychiatrist, or any other kind of medical researcher, the tool could pinpoint the exact area of your expertise where you could contribute most to combating the pandemic.
Our pitch to Cigna, naturally, was tailored differently. Understanding their need for an AI solution in their wellness platform, we believed our AI’s ability to digest, summarize, and offer preliminary insights on vast amounts of medical data would be invaluable, especially during a health crisis of such magnitude.
Yet, the core question posed to us by Cigna’s data science leaders wasn’t about the app’s specifics, the system, the AI, or how we differed from the likes of WebMD. It was, “Don’t you think our newly hired 200 data scientists could replicate and surpass your AI tool?” That question threw me off then, and it still does. It’s a misconception to think that simply amassing data scientists can solve complex problems. Our team had dedicated over a year to perfecting our solution, with the majority of that time spent on refining the dataset alone. There’s a relevant saying in engineering, which holds true for data scientists as well:
“What one engineer can solve in a week, two engineers will take two weeks to solve.”
This reflects a crucial insight that startup founders understand but corporations often overlook: deploying 200 data scientists to tackle a problem, as opposed to just 2, doesn’t guarantee that the problem will be resolved 100 times faster or more effectively. The dynamics of team size and productivity don’t scale linearly. In fact, often, especially in a bureaucratic setting, they don’t scale at all.
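For a rough sense of why, consider the classic communication-overhead argument (Brooks’s Law in spirit): the number of pairwise communication channels in a team grows quadratically with headcount. The back-of-the-envelope Python sketch below is purely illustrative, my own simplification rather than anything Cigna actually modeled, comparing 2 data scientists with 200:

```python
def pairwise_channels(team_size: int) -> int:
    """Number of pairwise communication channels in a team of n people: n*(n-1)/2."""
    return team_size * (team_size - 1) // 2

for n in (2, 200):
    print(f"{n:>3} data scientists -> {pairwise_channels(n):>6} communication channels")

# Output:
#   2 data scientists ->      1 communication channels
# 200 data scientists ->  19900 communication channels
# Headcount grew 100x; coordination overhead grew roughly 20,000x.
```

Coordination costs like these are one simple reason a 100x bigger team doesn’t deliver 100x the output, before you even account for bureaucracy.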
Sure enough, those 200 Cigna data scientists haven’t produced a single AI solution in 3.5 years!
And as a side note, how come no one has ever heard of the 400 ChenMed data scientists, seen a single publication from them, or met a single one of them at an AI or data science conference? Are they not allowed to leave the ChenMed basement? 😂 (Reference: Chris Chen, ChenMed’s CEO, presentation on June 24, 2022, especially from minute 25:40.)
Joining a bureaucratic structure often means losing sight of one’s impact and motivation, becoming just another cog in the machine, performing tasks as instructed, no more, no less.
Frankly, I’m exhausted discussing tech failures in healthcare – it’s a topic I’ve covered extensively, yet the setbacks keep emerging. Nonetheless, these AI missteps by healthcare giants and the government are crucial to highlight and rectify.
(Quick side note: This article is just a fragment of the comprehensive report I’m preparing, titled “Digital Health 2024: Don’t Miss These 50 Names,” coming up in the next few weeks. Stay tuned!)
Anyway, let’s delve deeper. Here’s the outline of this article:
1. AI Failures by Multi-Billion Dollar Healthcare Corporations and the U.S. Government
1.1. A bug in a 2016 Medicaid AI algorithm cut off aid to 8,000 elderly and severely disabled people.
1.2. A 2019 UnitedHealth AI algorithm prioritized care for healthier white patients over sicker Black patients, impacting over 100 million patients.
1.3. Cigna sells patients’ lab results and claims information to WebMD.
1.4. Cigna’s AI stress tool: shitty and potentially hazardous.
1.5. Cigna’s AI algorithm automatically denies claims. Cigna “doctors” concur without even looking.
1.6. Epic added AI-based voice assistant Suki to its EHR. Bad tech meets worse tech. Mind-boggling.
1.7. UnitedHealth pushed employees to follow Optum’s NaviHealth algorithm to cut off Medicare patients’ rehab care.
2. So, Why Is Shitty, Dangerous Health AI Being Pushed by Multi-Billion-Dollar Conglomerates?
2.1. The Monopolistic Might: Complacency and Neglect in Big Healthcare
2.2. The AI Quagmire: Profit Over Patients
2.3. Too Big to Fail, Too Regulatory-Capture Driven to Care
2.4. Health Costs Are Rising and Health Conglomerates Are Consolidating
2.5. In a Bureaucracy Where Everyone Does Everything, No One Takes Responsibility for Anything
2.6. Healthcare Mega Corporations Don’t Give a Shit About Best Practices for AI and AI Ethics
2.7. The Boardroom’s Frenzy for AI
2.8. The Prior Authorization Mess
2.9. The Collateral Damage: Patients and Medical Providers
2.10. Stifling Innovation: The Shadow Over AI Startups
3. My Take
Brace yourself for a no-holds-barred voyage into the labyrinth of AI debacles, orchestrated by some of the biggest middlemen in healthcare history. We’re about to take a thrilling, eye-opening leap into a world where technology meets Epic (no pun intended 😂) missteps: