Top 50 AI Scribes: The Brutal Truth [Part 2 of 4]
Why the hell do we need 126 medical AI scribes? This episode features the founder of an AI scribe company, the Healthcare API Guy, and two health AI researchers.
Welcome to AI Health Uncut, a brutally honest newsletter on AI, innovation, and the state of the healthcare market. If you’d like to sign up to receive issues over email, you can do so here.
I continue my deep dive into the 50 most popular AI scribes. I’m also (just slightly) reshuffling the order of the four parts in my series. What was originally supposed to be Part 4—the Digital Health Inside Out podcast summary titled “Why the Hell Do We Need 126 Medical AI Scribes?”—is actually this installment, Part 2. I guess I misestimated how quickly the podcast would drop. (Story of my life. 😉) The other two parts now become Parts 3 and 4, respectively.
In Part 1 last week, I explored the broader AI scribe landscape, analyzed adoption rates, and investigated why some clinicians enthusiastically embrace AI scribes while others remain deeply skeptical.
In this Part 2, I unpack the fiery discussion from today’s Digital Health Inside Out episode titled “Why the Hell Do We Need 126 Medical AI Scribes?” featuring Brendan Keeler, Alex Lebrun, Bobby Guelich, my co-host Alex Koshykov, and yours truly. Notably, Brendan Keeler and I engage in a vigorous yet respectful debate regarding the commoditization of AI scribes and the role—good, bad, and ugly—of venture capital.
In Part 3, I tackle critical questions such as whether AI scribes genuinely represent value-based care innovations or if they’re merely Revenue Cycle Management (RCM) tools disguised as clinical support. Additionally, I assess the real-world integration success of AI scribes within EHR systems and scrutinize their claims of saving significant time and money.
In Part 4, I finally dive deeply into a head-to-head comparison of the 50 most popular AI scribe tools. This evaluation is strictly from a user’s practical standpoint—no marketing bullshit allowed. Specifically, I compare:
Pricing
Note Accuracy & Quality
Editing Burden (time and effort required for revisions)
Integration & Workflow (ease of practical use within EHR systems)
Specialty Fit & User Trust
Anyway, the podcast episode we’ve all been waiting for is finally out! It was an absolute honor to join an extraordinary group of experts for a brutally honest discussion on AI scribes.
Please subscribe to the podcast’s YouTube channel—Alex and I (but mostly Alex) put a lot of work into these episodes because we genuinely believe conversations like these make a difference.
Here’s my TL;DR of the podcast:
1. Healthcare’s Big Lie: “90% of AI Scribes Are Just Glorified Dictaphones”
2. Healthcare Is “20 Years Behind”: Does Adding 126 AI Scribes Help or Hurt?
3. Commoditization of AI Scribes: Natural Market or Unsustainable Mess?
4. AI Scribes: Betraying Clinicians to Chase Revenue Cycle Dollars?
5. EHR Integration: Genuine Innovation or Healthcare’s Biggest Scam?
6. Liability Roulette: Doctors Go to Jail, Engineers Don’t
7. AI Regulation: Common Sense or Bureaucratic Madness?
8. FDA Approval: Lifesaver or Innovation Killer?
9. Ambient AI Agents: Healthcare’s Future or Next Dangerous Fad?
10. AI Agents Will Arrive Faster Than You’re Ready
11. AI vs. Your Doctor: Convenience Over Caution?
12. Profitability, Shiny Startups, and the Looming AI Scribe Crash
13. “Adding Garbage to Garbage”: Do AI Scribes Really Reduce Waste?
14. VC Bros or Visionaries: Who’s Pumping Money Into the AI Scribe Bubble?
15. VC Funding Fiasco: Did Digital Health Get Played?
1. Healthcare’s Big Lie: “90% of AI Scribes Are Just Glorified Dictaphones”
Alex Lebrun (Nabla): “It’s easy to do like a dictaphone and just text summarization. Maybe there’s a long tail of small tools with a thin wrapper over LLMs. If they don’t evolve, I don’t see how they survive.”
Sergei Polevikov (AI Health Uncut): “Most of these solutions are wrappers and knockoffs from foundational LLMs... under normal circumstances, that wouldn’t be sustainable.”
Brendan Keeler (HTD Health): “Not everyone needs a Ferrari, and some people need a Prius, and some people get deluded into buying the lemon.”
2. Healthcare Is “20 Years Behind”: Does Adding 126 AI Scribes Help or Hurt?

Alex Lebrun: “20 years from now, or 10 years from now, everything may be an LLM wrapper. You could say that today every SaaS product is a wrapper around a database, and maybe in the ’70s people said every software is a wrapper over C, the language. I think it’s a bit naive, because there’s much more than just core machine learning technology.”
Sergei Polevikov: “The mere fact that so many solutions exist is a testament that the healthcare system is f*cked up and fragmented.”
Brendan Keeler: “We fall into this trap that we believe healthcare is uniquely disadvantaged, when actually it’s uniquely advantaged.”
3. Commoditization of AI Scribes: Natural Market or Unsustainable Mess?
Sergei Polevikov: “Most of these solutions are wrappers and knockoffs from foundational LLMs, which is fine because they’re paying for those foundational models, but under normal circumstances that wouldn’t be sustainable. The only reason they are sustainable is because the system is so f*cked up and fragmented.”
Brendan Keeler: “Every B2B software market has capability set spread. Not everyone needs a Ferrari. Some people need a Prius, and some people get deluded into buying the lemon. To imagine everyone needs deep integration ignores realities. B2B markets inherently fragment.”
Sergei Polevikov: “I disagree, because there’s no B2B market shaped exactly like healthcare’s. Under normal market conditions, you wouldn’t have 126 companies—90% using Google Voice and OpenAI under the hood—still surviving.”
Brendan Keeler: “B2B vertical software is rarely winner-take-all. Some companies hyper-focus on niche markets, and that works perfectly fine for them. The fragmentation isn’t unique to healthcare. It’s everywhere.”
4. AI Scribes: Betraying Clinicians to Chase Revenue Cycle Dollars?
Sergei Polevikov: “Clinicians are skeptical. Abridge’s CEO mentioned ‘billing’ 17 times in 38 minutes... doctors feel AI scribes add garbage to garbage, maximizing coding rather than reducing waste.”
Alex Lebrun: “If you want to make the CFO happy, you need hardcore ROI. Short-term ROI comes from revenue cycle management—RCM is so broken there’s opportunity for everyone.”
Bobby Guelich (Elion): “Higher-quality documentation could raise costs. There’s a natural tension there.”
5. EHR Integration: Genuine Innovation or Healthcare’s Biggest Scam?
Alex Lebrun: “An AI scribe not integrated with EHR is not very efficient. It’s structured data—ICD-10, coding for revenue cycle management.”
Brendan Keeler: “Every core system of record wants to own every function... the EHR market reflects onto the scribe market.”
Sergei Polevikov: “Just the fact there’s an army of intermediaries between providers and EHR companies tells me nothing is perfect.”
6. Liability Roulette: Doctors Go to Jail, Engineers Don’t
Sergei Polevikov: “Doctors go to jail. Engineers don’t. Usually, the provider is the final voice in decision-making.”
Alex Lebrun: “AI scribes are like autocomplete on steroids... clinicians ultimately submit documentation and are responsible.”
Brendan Keeler: “What’s the failure mode for co-pilots? A misdocumented note? There’s still human review.”
7. AI Regulation: Common Sense or Bureaucratic Madness?
Brendan Keeler: “We have three choices with new tech—ban it, ignore it, or pilot and regulate smartly. FDA certification is slow.”
Alex Lebrun: “Regulation will impact product development velocity. We know we’ll eventually cross the FDA line—the later the better.”
Sergei Polevikov: “Why wouldn’t Nabla get FDA approval, even for marketing reasons? The providers want a verification.”
8. FDA Approval: Lifesaver or Innovation Killer?
Alex Lebrun: “Because once you have FDA certification, forget updating your product twice a week. It’s a trade-off between agility and the FDA’s seal of approval.”
Brendan Keeler: “FDA certification is slow. Do we want to be slow here?”
Sergei Polevikov: “Clinicians don’t give a sh*t about these legal nuances. If you’re coming to me, you’d better show some verification or accreditation.”
9. Ambient AI Agents: Healthcare’s Future or Next Dangerous Fad?
Alex Lebrun: “It’s ridiculous to call for prior authorization—give me an API or my agent will call your call center.”
Brendan Keeler: “Greedy organizations will adopt clinical agents faster than we’re comfortable with, breaking prescribing and licensure laws.”
Sergei Polevikov: “AI agents in rural healthcare should happen right now. It’s important to talk to somebody rather than nobody.”
10. AI Agents Will Arrive Faster Than You’re Ready
Brendan Keeler: “Organizations will break laws using agents because government websites are down and people don’t read laws.”
Alex Lebrun: “Agents will accelerate healthcare improvement. But I’m not talking about care delivery—clinical decisions require empathy and human judgment.”
Bobby Guelich: “Consumer demand might accelerate AI care delivery faster than people expect—convenience will win out.”
11. AI vs. Your Doctor: Convenience Over Caution?
Bobby Guelich: “If my options are waiting weeks or getting a validated AI response now, I’ll choose AI.”
Brendan Keeler: “People underplay how quickly agents will be adopted in clinical settings. Infinite capacity and lower cost—people will choose it.”
Sergei Polevikov: “Doctors stay within standard-of-care due to liability risk. AI models’ advantages vanish within standard-of-care.”
12. Profitability, Shiny Startups, and the Looming AI Scribe Crash
Sergei Polevikov: “I don’t know any health AI startups actually making a profit. Amazon and Google aren’t heavily in this space because the margins aren’t there and the space is too fragmented.”
Alex Lebrun: “We are not far from being profitable... Google isn’t here not because of profits, but because of complexity and risk.”
Brendan Keeler: “Most healthcare tech isn’t profitable yet because healthcare tech isn’t a $4 trillion industry. It’s a tiny fraction.”
13. “Adding Garbage to Garbage”: Do AI Scribes Really Reduce Waste?
Sergei Polevikov: “One doctor told me AI scribes sometimes feel like adding garbage to garbage—optimized to create codes, not reduce waste.”
Alex Lebrun: “Nobody ever told me we are seeing more patients. CFOs aren’t seeing productivity—only doctors’ burnout going down.”
Brendan Keeler: “When Dr. Bobby gets pajama time back, he’s done. He doesn’t funnel it back into seeing more patients.”
14. VC Bros or Visionaries: Who’s Pumping Money Into the AI Scribe Bubble?
Brendan Keeler: “VCs invest in bullsh*t too. Their job is to be wrong 90% of the time.”
Sergei Polevikov: “Venture capitalists failed to pick winners in digital health. Pump-and-dump is not a business model for healthcare.”
Sergei Polevikov: “Companies like Suki... are not what this industry is about. In many cases, once you’re in certain VC circles, you can always get funded.”
15. VC Funding Fiasco: Did Digital Health Get Played?
Sergei Polevikov: “Asset allocation models in healthcare haven’t worked. You’re serving one group (investors) at the expense of clinicians and patients.”
Sergei Polevikov: “Glen Tullman pushing Livongo for $18B, now valued under $0.5B—great for him, terrible for the industry.”
Brendan Keeler: “VC success in digital health is hard. Actual success means durable public-market business, which hasn’t happened.”
Like what you’re reading in this newsletter? Want more in-depth investigations and research like this?
I’m committed to staying independent and unbiased—no sponsors, no advertisers. But that also means I’m a one-man operation with limited resources, and investigations like this take a tremendous amount of effort.
Consider becoming a Founding Member of AI Health Uncut and join the elite ranks of those already supporting me at the highest level of membership. As a Founding Member, you’re not just backing this work—you’re also helping cover access fees for those who can’t afford it, such as students and the unemployed.
You’ll be making a real impact, helping me continue to challenge the system and push for better outcomes in healthcare through AI, technology, policy, and beyond.
Thank you!
👉👉👉👉👉 Hi! My name is Sergei Polevikov. I’m an AI researcher and a healthcare AI startup founder. In my newsletter ‘AI Health Uncut,’ I combine my knowledge of AI models with my unique skills in analyzing the financial health of digital health companies. Why “Uncut”? Because I never sugarcoat or filter the hard truth. I don’t play games, I don’t work for anyone, and therefore, with your support, I produce the most original, the most unbiased, the most unapologetic research in AI, innovation, and healthcare. Thank you for your support of my work. You’re part of a vibrant community of healthcare AI enthusiasts! Your engagement matters. 🙏🙏🙏🙏🙏
I laughed a lot, except at Alex, who clearly has a deep understanding of the real potential in this space. I hope they do proverbially 'become HBO.'
Maybe a highly fragmented vertical B2B market isn't a fit for healthcare? Can convergence and 'owning the customer' ever be achieved, and how would it happen when the customer has dozens of AI vendors in addition to the behemoth contract with their EHR? We should go ask the bros at Olive AI, if we can find them...
Truly intelligent solutions would be specialty-agnostic through tuning/RAG, but I think they were also highlighting sectors that aren't in the traditional EHR space (home health, dentistry). I don't think those vendors belong in the same bucket.
I also didn't realize we were still questioning the revenue cycle part as a goal, bahaha. Improved claims and higher throughput drive prioritization; 'soft ROI' from physician retention still counts, because in markets that can't attract enough physicians (ex-urban, low-cost-of-living areas), physician retention is a financial metric, especially for specialists.
But also: 'pajama time' accrued back isn't just going to playing with kids or 'seeing more patients'; it's going to fixing denied claims, proactively and reactively. Even if the AI-aided note doesn't directly lead to higher claims as expected, it still pays off if it frees up physicians for time-sensitive, time-intensive claim resubmissions, pre-auths, and the like.
Human patients are not like consumers. If you have found a partner willing to treat them as if you're delivering coffee rather than life-altering care, you have found yourself a major ethics violation. Informed consent maybe doesn't apply to AI scribes, but this attitude toward healthcare innovation is self-defeating: without solid, peer-reviewed evidence gained ethically, you will not scale.