Mayo Clinic + Abridge: Not FDA Approved!
126 AI scribe tools are currently being sold to healthcare providers. Not one of them has FDA approval. Not a single one!
Welcome to AI Health Uncut, a brutally honest newsletter on AI, innovation, and the state of the healthcare market. If you’d like to sign up to receive issues over email, you can do so here.
Abridge announced today that it will expand its Mayo Clinic partnership to “2,000+ clinicians” and “1.3 million patients.”
I’ve praised some AI ambient scribe solutions (many doctors swear by them!) while calling out “AI tourists” like Suki.
But here’s the thing: Whether you’re playing by the rules or not, none of the 126 AI medical scribe solutions currently on the market has FDA approval. (And, in my estimation, 122 of those 126 are knockoffs.)
Check for yourself on the FDA’s list of 1,016 AI/ML-enabled medical devices—it’s all there in black and white.
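If you want to check programmatically, here's a minimal sketch, assuming you've downloaded the FDA's AI/ML-enabled device list as a CSV export and that the device name column is labeled "Device Name" (verify the headers in your own export):

```python
# A minimal sketch: search the FDA's AI/ML-enabled medical device list
# for anything resembling an ambient scribe. Assumes a local CSV export;
# the "Device Name" column label is an assumption -- check your file.
import pandas as pd

devices = pd.read_csv("fda_ai_ml_enabled_devices.csv")
keywords = ["scribe", "ambient", "transcription", "documentation"]
mask = devices["Device Name"].str.lower().str.contains("|".join(keywords), na=False)
print(devices.loc[mask, ["Device Name"]])  # spoiler: don't expect many hits
```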
Right now, these companies are leaning on one of two existing regulatory outs: claiming a 510(k) exemption, or filing a 513(g) request to have the FDA weigh in on classification. Either that, or they're simply “winging it,” banking on nobody noticing.
Dr. Terence Tan has compiled an exhaustive list of these tools—give it a look. Spot anything you use? Spoiler alert: It’s probably not 100% kosher.
Dr. Tan draws an intriguing parallel between the Cambrian Explosion in biology and the hype of AI scribes in healthcare. The Cambrian Explosion, occurring over 500 million years ago, was a pivotal event during which nearly all major animal groups emerged in the fossil record. This evolutionary burst spanned only 13-25 million years—a blink of an eye in geological terms.
The analogy feels apt: just as the Cambrian Explosion gave rise to a diversity of life forms, today’s AI scribe market is experiencing its own burst of rapid growth (and imitation). Most of these apps, however, appear to be little more than derivative works—copycats or shiny extensions of foundational large language models (LLMs)—lacking the differentiation needed to truly evolve.
Anyway, in its announcement, Abridge mentioned, “Since July 2024, Mayo Clinic, Abridge, and Epic have been collaborating on AI solutions tailored to the fast-paced and unique nursing workflow.” Sure, it’s better than replacing nurses outright. Right, Hemant Taneja? (Yes, the guy who bought a hospital by borrowing a sweet billion.) And how about Hippocratic AI’s $9/hour AI nurses—or should I say Hypocritical AI? Stay tuned next week for my investigation into their operations, complete with interviews from ex-employees and vendors. Trust me, it’s going to get spicy.
Oh, and speaking of Mayo Clinic, any updates on their testing of Google’s Med-PaLM 2 model? I’ve been itching to find out for 21 months now. 😂 In all seriousness, here’s the truth: Even before Mayo Clinic started testing the so-called state-of-the-art Med-PaLM 2, a wave of new foundational models hit the scene and completely annihilated it. Now, Google’s Med-Gemini has come and gone, leaving Med-PaLM 2 as century-old news.
Until then, a public service announcement: If you’re buying AI solutions from unapproved street vendors, tread carefully out there. You’ve been warned. 😉
I’ve included the following discussion to address skepticism from some readers about my comments on AI scribes and FDA approval. The pushback often centers on the argument that these tools “aren’t diagnosing anything.”
FDA Approval for AI Scribes Isn’t Required, But It Might Be the Right Thing to Do
I never claimed that AI scribing requires FDA approval, nor did I suggest that these companies are acting illegally. However, I did say, “It’s probably not 100% kosher,” and I stand by that.
That last comment may have been provocative, but there’s truth in it.
Exploiting a loophole just because it’s available—and widely used—doesn’t necessarily make it the right thing to do.
I’m not an FDA attorney, but I’m familiar with the approval process, having navigated it firsthand with my company, WellAI.
Why You Might Need FDA Approval for Class I or Class II Medical Devices—Even If They’re “Not Diagnosing Anything”
AI ambient medical scribes, even when not directly involved in diagnosis, may still require FDA clearance (via either 510(k) or De Novo pathways) due to their potential impact on patient care, medical documentation, and clinical workflows. Here are the possible reasons:
1. Indirect Impact on Clinical Decisions
Even if the scribe does not directly diagnose or treat, its outputs (transcriptions, summaries, or suggestions) may influence clinical decisions. For example:
Potential Risk: If the AI inaccurately summarizes a patient’s history or misrepresents critical information, it could result in errors in diagnosis, treatment plans, or medication prescriptions. And trust me, it will, because every AI scribe company builds on GenAI (Generative AI: typically transformer-based models that generate content like text, images, or music by learning patterns from large datasets), and GenAI is inherently prone to hallucination. (See the consistency-check sketch after this section.)
FDA Justification: Tools that affect clinical decision-making indirectly are subject to regulation because they can pose a risk to patient safety.
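To make the hallucination risk concrete, here's a minimal sketch of the kind of consistency check I'd want every vendor to run: flag any medication the AI summary mentions that never appears in the source transcript. The tiny hardcoded lexicon is a stand-in assumption; a real system would use a proper terminology like RxNorm.

```python
# A minimal sketch of a hallucination check: flag medications that appear
# in the AI-generated summary but never occur in the source transcript.
# The hardcoded lexicon is a stand-in for a real terminology (e.g., RxNorm).
import re

MED_LEXICON = {"metformin", "lisinopril", "atorvastatin", "warfarin", "insulin"}

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def unsupported_medications(transcript: str, summary: str) -> set[str]:
    mentioned = MED_LEXICON & tokens(summary)
    return mentioned - tokens(transcript)

transcript = "Patient reports taking metformin daily for type 2 diabetes."
summary = "Patient on metformin and warfarin; continue current regimen."
print(unsupported_medications(transcript, summary))  # {'warfarin'} -- hallucinated
```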
2. Claims of Reducing Errors or Enhancing Clinical Outcomes
If the AI scribe is marketed with claims such as:
“Improves patient safety by minimizing documentation errors.”
“Reduces risk of incomplete or inaccurate patient records.”
Such claims position the tool as having a medical purpose rather than being a simple administrative tool, which could trigger FDA oversight.
3. Integration with Electronic Health Records (EHRs)
If the scribe integrates with EHR systems and automates tasks like:
Populating structured fields in the EHR.
Flagging discrepancies or missing information in clinical documentation.
These functions may require FDA clearance because the tool is interacting with regulated medical data systems that directly affect patient care.
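For illustration, here's a minimal sketch of what handing a scribe note to an EHR might look like as a FHIR R4 DocumentReference. The patient ID and endpoint are hypothetical, and real Epic-style integrations add OAuth2, scopes, and vendor certification on top of this.

```python
# A minimal sketch of packaging a scribe-generated note as a FHIR R4
# DocumentReference. The patient ID and endpoint are hypothetical; real
# EHR integrations add OAuth2, scopes, and vendor review.
import base64
import json

note_text = "Subjective: ... Objective: ... Assessment: ... Plan: ..."
document_reference = {
    "resourceType": "DocumentReference",
    "status": "current",
    "type": {
        "coding": [{"system": "http://loinc.org", "code": "11506-3",
                    "display": "Progress note"}]
    },
    "subject": {"reference": "Patient/example-patient-id"},  # hypothetical ID
    "content": [{
        "attachment": {
            "contentType": "text/plain",
            "data": base64.b64encode(note_text.encode()).decode(),
        }
    }],
}
print(json.dumps(document_reference, indent=2))
# In practice this would be POSTed to the EHR's FHIR endpoint, e.g.:
# requests.post("https://ehr.example.com/fhir/DocumentReference",
#               json=document_reference, headers=...)  # hypothetical URL
```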
4. Handling Protected Health Information (PHI) and Cybersecurity Risks
AI scribes handle sensitive PHI. Any vulnerabilities in:
Data storage.
Transmission security.
Access controls (unauthorized access).
These could lead to patient harm, triggering FDA cybersecurity requirements for software regulated as a medical device.
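On that note, here's a minimal sketch of one table-stakes control: encrypting a transcript at rest with symmetric encryption via the `cryptography` package. Illustrative only; a real deployment needs managed keys, TLS in transit, audit logging, and role-based access controls.

```python
# A minimal sketch of encrypting a transcript at rest with symmetric
# encryption (Fernet, from the `cryptography` package). Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetched from a key manager, never hardcoded
cipher = Fernet(key)

transcript = b"Patient discussed history of atrial fibrillation..."
encrypted = cipher.encrypt(transcript)   # store this, not the plaintext
decrypted = cipher.decrypt(encrypted)    # only via authorized access paths
assert decrypted == transcript
```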
5. Real-Time Decision Support Features
If the AI scribe includes real-time assistive features, such as:
Suggesting medical terms or coding.
Highlighting inconsistencies or incomplete documentation.
These functionalities could be viewed as offering decision support, even if indirectly, and may require FDA oversight to ensure they function safely and effectively.
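To see why this tips into decision-support territory, here's a minimal sketch of the drug-allergy flag scenario. The allergy-to-drug map is a toy assumption; production systems rely on maintained clinical knowledge bases, not a hardcoded dict.

```python
# A minimal sketch of real-time drug-allergy flagging in a dictated note.
# The allergy-to-drug mapping is a toy assumption.
ALLERGY_CONFLICTS = {
    "penicillin": {"amoxicillin", "ampicillin", "penicillin"},
    "sulfa": {"sulfamethoxazole", "sulfasalazine"},
}

def flag_conflicts(note: str, allergies: list[str]) -> list[str]:
    words = set(note.lower().split())
    alerts = []
    for allergy in allergies:
        for drug in ALLERGY_CONFLICTS.get(allergy.lower(), set()) & words:
            alerts.append(f"ALERT: {drug} conflicts with documented {allergy} allergy")
    return alerts

note = "Plan: start amoxicillin 500 mg three times daily for otitis media."
print(flag_conflicts(note, ["penicillin"]))
# The moment a scribe does this, it is arguably clinical decision support.
```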
6. Adaptive Learning and Self-Modifying Behavior
If the scribe employs machine learning algorithms that adapt based on user interactions, the potential for unintended changes in functionality (e.g., introducing new transcription errors) could pose safety risks. The FDA regulates such “adaptive” software to ensure modifications do not adversely impact performance.
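One way to manage this, in the spirit of the FDA's predetermined change control plans for AI/ML devices, is to gate every model update behind a frozen evaluation set. A minimal sketch, where the word-level accuracy metric and the 1% regression tolerance are my own assumptions:

```python
# A minimal sketch of gating an adaptive model update behind a frozen
# evaluation set. The naive word-accuracy metric and 1% tolerance are
# illustrative assumptions, not regulatory guidance.
def word_accuracy(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    matches = sum(r == h for r, h in zip(ref, hyp))
    return matches / max(len(ref), 1)

def approve_update(eval_set, current_model, candidate_model, tolerance=0.01) -> bool:
    def mean_score(model):
        return sum(word_accuracy(ref, model(audio)) for audio, ref in eval_set) / len(eval_set)
    return mean_score(candidate_model) >= mean_score(current_model) - tolerance

# Toy usage: "models" are callables mapping an audio clip ID to a transcript.
eval_set = [("clip1", "start metformin five hundred milligrams")]
current = lambda audio: "start metformin five hundred milligrams"
candidate = lambda audio: "start metformin five milligrams"  # a dosage regression!
print(approve_update(eval_set, current, candidate))  # False -- block the rollout
```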
7. Risk of Bias in Outputs
If the AI scribe exhibits systemic bias in transcribing or summarizing medical information:
Misinterpreting accents, dialects, or languages.
Underrepresenting certain symptoms or conditions based on demographic data.
These biases could harm specific patient populations, prompting regulatory scrutiny.
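A basic fairness audit isn't hard to sketch: compute error rates per speaker subgroup on a labeled evaluation set and compare. The subgroup labels, numbers, and the 1.5x disparity threshold below are illustrative assumptions.

```python
# A minimal sketch of a subgroup bias audit: compare per-group transcription
# error rates on a labeled evaluation set. All values are illustrative.
from collections import defaultdict

# Each record: (subgroup, number of transcription errors, number of words)
eval_results = [
    ("us_english", 2, 100), ("us_english", 3, 120),
    ("accented_english", 9, 110), ("accented_english", 8, 95),
]

errors, words = defaultdict(int), defaultdict(int)
for group, err, n in eval_results:
    errors[group] += err
    words[group] += n

rates = {g: errors[g] / words[g] for g in errors}
worst, best = max(rates.values()), min(rates.values())
print(rates)
if worst / best > 1.5:  # disparity threshold is an assumption
    print(f"Disparity ratio {worst / best:.1f}x -- investigate before deployment")
```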
8. Post-Market Surveillance Requirements
The FDA may require tools with significant implications for patient care to include a post-market surveillance plan:
Tracking adverse events (e.g., transcription errors leading to patient harm).
Monitoring tool performance in real-world scenarios.
If the scribe tool lacks such provisions, the FDA may regulate it to ensure ongoing safety and effectiveness.
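Nothing stops vendors from building the bones of post-market surveillance on their own. Here's a minimal sketch of structured adverse-event logging; the schema is an assumption, and a real program would align with FDA Medical Device Reporting (MDR) requirements.

```python
# A minimal sketch of structured adverse-event logging for post-market
# surveillance. The event schema is an assumption.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AdverseEvent:
    event_type: str    # e.g., "transcription_error"
    severity: str      # e.g., "near_miss", "patient_harm"
    description: str
    model_version: str
    timestamp: str = ""

    def __post_init__(self):
        self.timestamp = self.timestamp or datetime.now(timezone.utc).isoformat()

def log_event(event: AdverseEvent, path: str = "adverse_events.jsonl") -> None:
    with open(path, "a") as f:
        f.write(json.dumps(asdict(event)) + "\n")

log_event(AdverseEvent("transcription_error", "near_miss",
                       "Dose transcribed as 50 mg instead of 15 mg; caught at review.",
                       model_version="scribe-v2.3.1"))
```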
9. Special Controls for Safety
If the AI scribe operates in a high-stakes environment (e.g., emergency departments or critical care settings), it may require FDA oversight to ensure:
Performance accuracy under stressful or noisy conditions.
Clear labeling about limitations (e.g., “not suitable for use in acute care decision-making”).
10. Regulatory Precedent for Similar Devices
If one day a similar device (a predicate) is cleared through 510(k) or granted a De Novo classification, the FDA may require any new AI scribe to follow the same regulatory pathway.
Scenarios Where FDA Clearance May Be Needed for AI Scribes
Claim: “Our AI scribe ensures error-free documentation for better patient outcomes.”
Why Regulated: This suggests clinical impact.
Functionality: Real-time flagging of drug-allergy conflicts in dictated notes.
Why Regulated: This involves decision support.
Integration: Auto-populating medication lists in an EHR.
Why Regulated: Directly interacts with clinical workflows.
Adaptive Learning: Continuously updating to improve transcription accuracy.
Why Regulated: Potential safety risks from untested updates.
Why Don’t AI Scribe Companies Just Get FDA Approval and Be Done with It?
Here’s why.
For early-stage startups, the FDA approval process is prohibitively expensive in money, people, and time. It’s much easier (and cheaper) to claim an FDA exemption if one is available. And with AI medical scribes, exemptions are clearly on the table.
For mature companies, the path isn’t any smoother. If there’s no equivalent (predicate) legally marketed device, they’re stuck navigating the FDA’s De Novo Classification Process. Alternatively, if there is a predicate device, they face the 510(k) Submission—a process that still demands rigorous evidence and regulatory hurdles.
In both scenarios, assuming the FDA does its job diligently (and there’s no reason to doubt it), the risk of denial is non-trivial. For companies whose core business revolves around a single product, that’s a gamble that could prove catastrophic.
So, what’s the logical choice? AI scribe companies routinely plead the Fifth and opt for the exemption. It’s a loophole, but it’s also the safest and least costly route—a calculated decision in a high-stakes game.
👉👉👉👉👉 Hi! My name is Sergei Polevikov. I’m an AI researcher and a healthcare AI startup founder. In my newsletter ‘AI Health Uncut,’ I combine my knowledge of AI models with my unique skills in analyzing the financial health of digital health companies. Why “Uncut”? Because I never sugarcoat or filter the hard truth. I don’t play games, I don’t work for anyone, and therefore, with your support, I produce the most original, the most unbiased, the most unapologetic research in AI, innovation, and healthcare. Thank you for your support of my work. You’re part of a vibrant community of healthcare AI enthusiasts! Your engagement matters. 🙏🙏🙏🙏🙏