Why Hippocratic AI and Nabla Can Sleep Easy: Microsoft and Google Are Still Far from Building a Healthcare AI Agent
Microsoft's so-called Medical Superintelligence is a joke. No theory, no real reasoning. Just more brute-force compute duct-taped to a buzzword.
Welcome to AI Health Uncut, a brutally honest newsletter on AI, innovation, and the state of the healthcare market. If you’d like to sign up to receive issues over email, you can do so here.
🎉🎇 I hope everyone had a great 4th of July weekend. 🇺🇸 For the first time in years, I skipped the skirt steak and settled for burgers and hot dogs 🍔🌭—and honestly, no regrets. 😏
First of all, if you cannot afford this article—perhaps you’re a student or currently between jobs—please reach out. That’s precisely why I created the AI Health Uncut Founding Member Club. Thanks to generous donations and support from these wonderful individuals, I’m able to provide access to anyone who needs it. By the way, students reach out frequently, and I’m always glad to help.
If you’d like to become a Founding Member of the AI Health Uncut community, you can join through this link. You’ll be making a real impact, helping me continue to challenge the system and push for better outcomes in healthcare through AI, technology, policy, and beyond.
Last week, Microsoft released a flimsy, non-peer-reviewed paper outlining their work in the healthcare AI agent space. The project carries the completely undeserved name: The Path to Medical Superintelligence.
Let’s be clear. “Superintelligence” is the new shiny buzzword, and Microsoft knows exactly what it’s doing. AGI (Artificial General Intelligence) was the buzzword for the past couple of years. But as the cool kids say, that’s so June 2025 🙄. Now, the trend has moved on to ASI, or Artificial Superintelligence — yet another buzzword the AI nerds came up with. 😊 If AGI was supposed to match human intelligence, ASI is supposed to surpass it.
https://youtube.com/shorts/r1UDwlkL5oE
Don’t worry, Hippocratic AI and Nabla. We’re not even close. Despite what Microsoft and other corporations with a vested interest in inflating the AI hype cycle want you to believe, medical superintelligence is still science fiction. Likely for decades.
And this isn’t superintelligence. It’s definitely not ready for primetime. Hell, it’s not even ready for Saturday morning cartoons. 😂 Microsoft’s own researchers admit: “We do not know whether MAI-DxO’s performance gains on hard cases generalize to common, everyday clinical conditions, and could not measure false positive rates.”
Translation: this is just another model, untested in real-world clinical practice. But hey, let’s throw around “superintelligence” and a few other buzzwords. Why not?
I read the damn paper, and I’m somehow less convinced about the future of AI agents in healthcare than I was before.
Here’s my question: Can someone explain how the hell Microsoft plans to build sequential AI agents for healthcare?
In theory, Hippocratic AI and Nabla should be scared shitless of Microsoft’s Medical Superintelligence project. In practice, we are nowhere near deployable AI agents in clinical settings. It’s laughable.
Microsoft is walking straight into the same trap as Hippocratic AI. Instead of doing real engineering work—forgetting GPUs for a minute and actually building algorithms grounded in math (hello, does anyone use math anymore?)—they’re just remixing large language models and throwing compute at the wall hoping something sticks. You cannot be serious.
Now, to be clear, companies like Hippocratic and Nabla should absolutely be scared of Big Tech. Microsoft, Google, Amazon—they can undercut anyone, anytime, just because they feel like it. Munjal Shah at Hippocratic doesn’t give a shit, of course. He’s already driven companies into the ground. What’s one more? It’s not his money anyway. 😏
Alex Lebrun at Nabla, on the other hand, is different. He’s smart. Academic. He probably sees the threat and feels the pressure. But I bet he’s also intellectually curious. If Microsoft or anyone else actually made a meaningful leap in healthcare AI agents, he’d be the first to read the paper and think, “Damn, we’re getting somewhere.”
So let’s dissect Microsoft’s “Path to Medical Superintelligence” and their laughable attempt at healthcare AI agents. Like no one else has. Buckle up. 😏
TL;DR:
1. Sequential Diagnosis Is Bullshit. For Now, at Least.
2. AI Agents Talking to AI Agents About Humans—What Could Go Wrong, Really? 😏
3. Benchmark Gaming 101
4. Want to Save on Diagnostic Costs? Use a Really Old Model. 😏
5. Conclusion: AI Agents Are the Future—But It’s Unethical to Pretend the Future Is Already Here