15 Health AI Liars Exposed—Including One That Just Raised $70M at a $0.5B Valuation (Part 2 of 2)
Time to call out the 'AI tourism' in healthcare. This isn't a glamor shot or a Silicon Valley lovefest. This is a life-and-death situation.
Welcome to AI Health Uncut, a brutally honest newsletter on AI, innovation, and the state of the healthcare market. If you’d like to sign up to receive issues over email, you can do so here.
This might be my boldest piece yet—a gift to my loyal paid subscribers and Founding Members. Your support means the world to me. I have no sponsors, and after this read, you’ll see why. With every article like this, the haters multiply. Honestly, I’d rather not be the one writing this stuff, but someone has to. And where are the so-called ‘health tech journalists’? I’m not convinced they’re doing their job. Do they actually test the AI products they parrot from press releases, or even understand the basics of AI? It’s hard to tell.
This Part 2 wraps up (though let’s face it, will it ever truly end?) my investigation into 15 health AI companies. Yes, 15. I threw in a bonus for you—a little extra dose of digital dishonesty. 😊 These are companies you probably know, maybe even admire. No judgment. No one’s perfect. 😊 But let’s be clear. Some of them have been straight-up lying about their so-called AI prowess, while others haven’t even developed real AI at all. This isn’t just smoke and mirrors—it’s edging dangerously close to illegality.
Disclaimer: This investigation is entirely my own. I represent no one but myself. I have no personal stake or bias—financial or otherwise—in exposing these companies. Frankly, I’d love nothing more than for every health AI startup to deliver genuine value to the medical community. But that’s not the case here. These 15 companies chose lies over progress. My goal is simple: to discuss their deception, learn from their mistakes, and explore how we can build incentives that reward true AI innovators in healthcare—the ones putting in real work—not the ‘AI tourists’ eager to cash in and vanish into the ether.
In Part 1 of this mini-series on health AI liars, we went deep into the dark underbelly of deceit and denial that propels some of the industry’s worst actors. From “small tweaks” to blatant overreach, these companies either stretch reality to gain a “tiny edge” or are so out of touch with innovation they might as well be marketing to dinosaurs. Either way, they’re not just reckless. They’re skating the line of legality—and sometimes crossing it.
Take the mind-bending case of Ginni Rometty, who, in a masterstroke of mismanagement, single-handedly dismantled IBM’s legendary Watson research team. Practically overnight, she took a century-old American icon, an AI vanguard, and reduced it to an industry has-been, barely making a blip on the AI radar. That’s not just bad leadership. It’s a cautionary tale for anyone who thinks legacy can coast on reputation alone.
I’m incredibly grateful to the voices within those companies—both current and former—who had the courage to speak up.
Some call these startups “copycats,” others “tourist AI engineers.” But I call them what they are: “tech parasites.” In this two-part exposé, I break down how 15 digital health companies have deceived the public and gotten rich quick in the process. And it’s not just scrappy startups—Big Tech has its share of bad actors too. (Surprise, surprise!)
In digital health, “storytelling” (for wooing VC funding) has been glorified, while true innovation has been kicked to the curb. That’s why healthcare is swimming in 💩.
What if I told you some of these “groundbreaking” healthcare AI companies we all know and love are just buying frozen pizza from the supermarket (aka calling the APIs of OpenAI and other foundation model providers), then serving it to you as gourmet pizza—repackaged, with a shiny front end, while never admitting the tech isn’t really theirs?
And what if I told you that venture capitalists see these AI liars for what they are but still pour hundreds of millions into them, hyping them as the best thing since sliced bread? (Apologies for the food analogies—I missed my breakfast. 😉)
This is digital health today, in a nutshell.
Transparency is everything—for customers, investors, and the medical community alike. If you’re using someone else’s voice recognition technology or AI model, own it. Don’t label yourself an “AI-powered voice assistant” if you’re neither an AI nor a voice company.
Plagiarism and “AI tourism” in healthcare must stop.
This investigative two-part series comes with three key services:
1️⃣ I’m going to assist state attorneys general (OAGs), offices of inspector general (OIGs), and other regulators across the country by exposing AI companies that make false claims about the accuracy of their health AI products.
2️⃣ I’ll break down, statistically, why you should never trust claims of 100% or even 90% accuracy from an AI model—at least not without serious skepticism (see the quick sketch right after this list).
3️⃣ I’ll arm you with 6 key questions to ask if someone tries to sell you on “90% AI accuracy.”
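Here’s a quick, illustrative sketch of that second point (my own back-of-the-envelope Python, assuming SciPy is available; this is not any vendor’s methodology): an exact binomial (Clopper-Pearson) confidence interval shows just how little a “100% accuracy” claim means when the test set behind it is small.

```python
# Illustrative only: Clopper-Pearson (exact binomial) confidence interval
# for a claimed accuracy, showing how wide the uncertainty is when the
# evaluation set is small. Assumes SciPy is installed.
from scipy.stats import beta

def accuracy_ci(correct: int, total: int, confidence: float = 0.95):
    """Exact (Clopper-Pearson) confidence interval for accuracy = correct / total."""
    alpha = 1.0 - confidence
    lower = beta.ppf(alpha / 2, correct, total - correct + 1) if correct > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, correct + 1, total - correct) if correct < total else 1.0
    return lower, upper

# A vendor reports "100% accuracy" -- but on how many test cases?
for n in (20, 100, 1000):
    lo, hi = accuracy_ci(correct=n, total=n)
    print(f"{n}/{n} correct -> 95% CI for true accuracy: [{lo:.3f}, {hi:.3f}]")
# 20/20 correct is still consistent with a true accuracy as low as ~83%.
```

Even a flawless 20-for-20 run is statistically consistent with a true accuracy in the low 80s, and that’s before distribution shift or a hand-picked test set even enters the picture.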
Let me remind you of the TL;DR of this two-part article:
1. Pieces Technologies Faced Landmark Lawsuit Over False Claims of Health AI Model Accuracy
2. OpenAI: Shoving AI into Healthcare Before It’s Ready
3. IBM Watson Health: The ‘Lean Startup’ That Went Lean Right Off a Cliff
4. Epic’s AI: Cheating on Accuracy, Cashing In on Hype
5. Babylon Health: The Madoff of Digital Health That Slipped Through the Law’s Clumsy Fingers
6. Suki: (Allegedly) Stole AI From Google, Raised $165 Million—What Could Go Wrong?
7. South Korean Study Claims 100% Accuracy Diagnosing Autism with AI—Overfitting Called, It Wants Its Hype Back
8. Google: “90% Accuracy” My A** – Distribution Shift Is a Bitch, Ain’t It?
9. Infermedica’s Accuracy Claim: When Smoke and Mirrors Replace Peer Review
10. Sniffle’s Aignosis: When Your Diagnosis is Just a ‘Sniff’ Away From Certainty!
11. Isabel: The Accuracy Claims Nobody Else Wanted to Confirm
12. MayaMD: Outperforming ER Docs... According to MayaMD’s Own Study
13. Why Visit a Doctor When Klick Labs Can Misdiagnose You With a Sentence?
14. Did ‘Medical Chat’ Just Cheat Its Way Through the USMLE? Asking for a Friend.
15. OpenAI’s GPT-4o Scores High on USMLE. But Was It Playing With a Stacked Deck?
16. The $9 Per Hour AI Nurse: How Hippocratic AI’s ‘Healthcare Revolution’ Missed the Memo on Quality
17. The ‘Whisper’ That Shouted Nonsense: How OpenAI and Nabla’s AI Went Rogue in Healthcare
18. The Boldest Health AI Lies: How GPT-4, PaLM 2, Claude, and Llama are Selling Misinformation
19. Here’s What You Should Do When Someone Tries to Sell You “99% AI Accuracy”
20. My Take
All right, let’s jump right into the saga of 15 health AI liars.