Painful AI Adoption in Medicine: 10 Obstacles to Overcome
9 out of 10 hurdles are technical. But the biggest roadblock? It's 100% human.
You’ve probably been reading AI Health Uncut for a while now and occasionally think to yourself, “I like what this guy writes. I should sign up for a paid subscription. I’ll do it next time.” Well, this issue should convince you to sign up now. Not only do I believe the topic of this article is of utmost importance to U.S. digital health, but you’d also be supporting my future research on AI and digital health. God only knows it doesn’t come easy to me. It takes time and effort. But I’ll keep going because I’m passionate about fixing American healthcare. Please support my effort. Please consider becoming a paid subscriber.
I don’t mean to distract anyone from the Paris Olympics, where suddenly, everyone—from Snoop Dogg and Martha Stewart to Flavor Flav and Colin Jost—becomes an expert in pommel horse, scrum, drag flick, and equestrian dressage. 🏉🏑🐴😊 But let's shift our focus to something equally important: the future of healthcare.
Let’s get serious. In this article, I explore ways to enhance AI models in healthcare, drawing from my own experiences. But here’s the key:
No amount of technical wizardry can surpass the critical factor of AI acceptance by the medical community.
I’m freaking tired of hearing, “Let’s not jump to conclusions yet. Lives are on the line.” Enough of that nonsense. Lives are on the line either way. But now, increasingly, we are losing patients in America because we’re not leveraging AI in healthcare as we should.
When I posted an article back in December 2023, right after the congressional hearing on AI in healthcare, I asked, “What if a Physician Doesn’t Use AI and Something Bad Happens?” I was ridiculed and confronted with, “What if a Physician Does Use AI and Something Bad Happens?”
Fast forward 8 months, and I’m more certain than ever—I was asking the right question.
If you’re a physician, especially a PCP, and you’re not using AI or considering AI for diagnostic and treatment “second opinions,” you’re risking your patients’ health.
There, I said it. Hate me or not, that’s the truth. I’m not here to make enemies, but I’m also not here to coddle anyone.
And please don’t tell me, “Oh, but I’ve been using Nuance Dragon (or DeepScribe, or whatever) for years now.”
Sorry, AI scribing doesn’t count. That’s like saying, “I’ve been using a stethoscope since med school.”
AI ambient transcribing is about automation. I’m talking about decision-making.
If you’re a doctor and you’re not leveraging AI, you’re putting your patients at risk. Period.
Private hospitals have been claiming they’re “ready” for AI for ages. (Source: CB Insights.) So what the hell are they waiting for? The Mayo Clinic and HCA have been “testing” Google’s Med-PaLM 2 for over a year now. Where are the results? Seriously. We need real clinical evidence, not some academic claims of “99% accuracy” on irrelevant benchmarks. Give me a break.
Well, these are rhetorical questions. Why? Because there’s a massive disconnect between what hospital execs claim and what physicians actually want. The suits are eager to act because they’ve inked a lucrative deal with Google. But the doctors? They feel like these AI models are being shoved down their throats. Their reputations are on the line, and they’re not at all comfortable with this tech. In this article, I diligently explain the reasons why.
When it comes to AI products in healthcare, I’m not shilling for any particular model here. My Substack isn’t a billboard or a PR agency. But let’s be real, there are some impressive ones out there, all pushing toward something transformative. (I did, however, cover the good, the bad, and the ugly in my recent digital health overview.)
I want to make another important point. I talk to medical professionals every day, and they’re probably more open to AI than most. (Those negative toward AI likely wouldn’t talk to me. 😂) But here’s the thing: I’ve yet to meet a physician or a medical practice owner considering replacing front office staff, let alone physicians and nurses. Not even remotely. The current battle in America is to provide much-needed relief to overburdened and burned-out medical personnel.
And I truly believe that much-needed relief will come, and is already coming, from AI.
Alright, let’s dive into how we can improve AI models in healthcare and ensure that every doctor in America uses AI as a “second opinion” provider.
Here are the 10 key areas we need to address to improve AI in healthcare:
[I’m putting the rest of the article behind the paywall to honor my paid subscribers and Founding Members, and to encourage others to become paid subscribers. As I mentioned before, if you’re a student or can’t afford a subscription, please reach out. I have a few free 12-month subscriptions available thanks to my amazing Founding Members. If you want to provide free subscriptions to 5 students and support my work (God knows how much time I put into this), I encourage you to become a Founder Member.]