German Doktors Don't Go to Jail. German Engineers Do.
Healthcare AI may be the lone flickering candle in Europe's haunted house of a lost decade, casting a dim glow over an otherwise chilling future. 👻
Welcome to AI Health Uncut, a brutally honest newsletter on AI, innovation, and the state of the healthcare market. If you’d like to sign up to receive issues over email, you can do so here.
🎃 Happy Halloween, American style! 🎃
In the spirit of costume hunting, I asked ChatGPT for the top 10 scariest Halloween costumes. 🧛 Can anyone spot a theme in this list?
All in good fun, right? But here’s what’s actually hair-raising: the way venture capital creeps into healthcare. 💀 With the launch of the “clinician only” Scrub Capital, I’m hopeful this is one VC that will break the mold. Here’s to them putting community outcomes on the same pedestal as financial returns and standing with the medical community over the usual VC and tech bros. 🥂
One of my most viral Substack articles is titled “Doctors Go to Jail. Engineers Don’t.” (Here’s a lesson for you, kids: people love a catchy title more than they love deep content. Just kidding. It was a rigorously researched article, and I appreciate your support!)
In that article, I presented evidence that American doctors bear full responsibility for clinical decisions—even if an AI error causes the harm. I’m not arguing to shift liability entirely to AI tools. Rather, we need defined rules and best practices for AI in healthcare.
Ironically, despite Europe’s “lost decade” of stagnant growth—thanks to overbearing regulations, excessive taxes, and a populist backlash that’s hindered access to affordable immigrant labor—there may be one glimmer of hope the continent can be proud of.
I’m sure most of you saw the latest Der Spiegel cover, featuring an adorable big-eyed AI robot reminiscent of WALL-E. For those who missed it, here’s a quick recap, as I think there’s a lesson or two we can pick up from Europe. (Spooky, I know, but hey, it is Halloween! 👹)
It’s quite a rosy article about how Germans are embracing AI in clinical settings. The main character of the Der Spiegel piece is “Pepper,” the cute robot pictured above, used in settings like the emergency center at Charité Hospital in Berlin. Pepper is designed to assist patients as a friendly, approachable presence, with large eyes, a white plastic exterior, and three wheels to get around on. Standing about 1.2 meters tall and equipped with sensors, it can measure basic health metrics such as blood pressure and body temperature. As a reception robot, Pepper interacts with patients in multiple languages, collects initial information, and performs basic health checks. Its purpose is to relieve medical staff of routine, repetitive tasks, allowing healthcare professionals to focus on critical patient care. Whether a patient speaks German, Turkish, or Arabic, Pepper is designed to understand both the language and the medical details, thanks to artificial intelligence (AI).
Pepper is just one example of how AI is becoming integrated into patient care across Germany. Over 500 AI systems are already in use in clinics and practices, diagnosing broken bones, detecting tumors, assisting in surgeries, and drafting medical reports.
According to Der Spiegel, AI systems in medicine are becoming more sophisticated, taking on increasingly complex tasks. They aim to reduce misdiagnoses, avoid unnecessary procedures, and potentially save thousands of lives each year. By detecting hidden cancerous areas or identifying heart disease early, AI has the potential to revolutionize healthcare. The global market for AI in healthcare is expected to reach nearly $500 billion by 2032.
AI’s integration into medicine is transforming the field. Large language models (LLMs) can analyze data and provide answers that seem human-like, and they’re increasingly capable of handling complex medical tasks. For example, at Essen University Hospital, the BoneView program, developed by the French company Gleamer, improves the accuracy of fracture diagnosis among medical residents. Initial studies show that with AI assistance, inexperienced doctors can identify fractures with 77% accuracy. AI-powered tools are also being used in surgeries. At Waldkliniken Eisenberg in Thuringia, AI-enabled robots assist with spine surgeries, enhancing precision and reducing operating times.
In fields like rheumatology, where the doctor shortage in Germany is especially severe, AI can play a critical role in providing patients with essential information. A study led by Isabell Haase found that ChatGPT’s responses to common questions about lupus were more informative than expert responses posted on the lupus100.org website.
For medical documentation, the University Medical Center Hamburg-Eppendorf (UKE) recently introduced an LLM that drafts patient reports within minutes. This model, Argo, draws on over seven million patient cases and generates summaries for doctors to review and finalize, boosting productivity and freeing up time for direct patient care.
Der Spiegel’s article concludes that future medical AI systems, often referred to as “Generalist Medical Artificial Intelligence,” will assume broader responsibilities—from monitoring intensive care patients to assisting in surgeries and managing medication. Robots like the reception robot at Charité may soon become commonplace in hospitals, enhancing both efficiency and patient care.
Yet beyond this rosy portrait of a seemingly ideal AI paradise, there is much to learn from how Europe is adopting AI in healthcare.
AI adoption in healthcare has shown notable momentum in Germany and across Europe, often attributed to a blend of regulatory frameworks, healthcare infrastructure, and legal protections that differ significantly from those in the U.S. In particular, doctors in Germany benefit from a layer of protection against liability for AI errors that American doctors lack.
European regulatory frameworks indirectly reduce clinicians’ liability risk by placing greater responsibility on AI manufacturers and healthcare institutions rather than on individual doctors. Key elements include:
1️⃣ EU Medical Device Regulation (MDR): At the recent HLTH conference in Las Vegas, FDA Commissioner Robert Califf, MD, implied that the FDA has minimal involvement in AI safety, emphasizing that it’s the responsibility of health systems to “conduct continuous local validation of artificial intelligence tools to ensure they are being used safely and effectively.” In stark contrast, the MDR, adopted by the European Union in April 2017, takes a radically different approach, enforcing rigorous oversight of AI safety and validation standards. The MDR classifies most healthcare AI as a “medical device,” requiring stringent testing, validation, and continuous monitoring before implementation. This shifts much of the responsibility for AI accuracy and reliability onto manufacturers and reduces the burden on individual doctors to validate AI outputs independently. The MDR doesn’t eliminate doctors’ liability entirely, but when errors occur, liability typically falls on manufacturers and developers under its strict regulations, giving doctors a buffer from direct accountability.
2️⃣ Product Liability Directive (PLD): Under the PLD, manufacturers can be held liable for defective products, including AI systems, if the AI causes harm. This means liability for damages caused by AI failure may rest more heavily on developers and vendors. While this directive doesn’t explicitly protect doctors, it reinforces manufacturers’ accountability.
3️⃣ Professional Standards and Institutional Policies: In Germany, doctors are expected to adhere to best-practice protocols and institutional guidelines, including those for using AI systems. By following these protocols, doctors can demonstrate they used AI responsibly and within accepted practices, potentially protecting them from liability in case of AI-related errors.
4️⃣ The European Union’s AI Act (EU AI Act of 2024): This brand-new law imposes risk-based classifications on AI, with stricter regulations for “high-risk” systems, a category that covers much of healthcare AI and aligns closely with the conformity assessment procedures under the MDR. Although it doesn’t provide direct immunity for doctors, the law aims to ensure that healthcare AI tools are developed to rigorous safety standards, indirectly reducing the chances that doctors will face liability when using AI responsibly.
5️⃣ The EU’s AI Liability Directive: This directive aims to modernize liability rules to cover AI systems, ensuring that victims of AI-related harm have the same level of protection as those harmed by other technologies. It introduces a rebuttable presumption of causality, easing the burden of proof for victims. (Source: The European Commission’s AI Liability Directive.)
In sum, while no specific European law or regulation directly absolves doctors of liability for AI errors, the MDR, the PLD, the EU AI Act, the EU’s AI Liability Directive, and adherence to professional standards together create a framework where the liability burden is shared, largely shielding doctors who use AI within established protocols and institutional guidelines.
Additionally, Germany’s healthcare system, with its emphasis on public health and safety, creates an environment where AI applications are carefully vetted and often integrated into standardized protocols. This minimizes the likelihood that individual practitioners will be held solely responsible for AI errors, as the technology’s usage is typically standardized and overseen by institutional policy.
In the U.S., however, doctors face greater liability. U.S. malpractice law does not yet clearly distinguish between errors originating from AI recommendations and those made by physicians, leaving doctors vulnerable to malpractice suits even when an AI tool contributed to the error. While AI-specific legislation is in development, the current regulatory landscape in the U.S. places heavier responsibility on doctors to critically oversee AI outputs, as courts could hold them accountable for errors resulting from “blind trust” in technology.
Health tech companies, hospitals, and even the American Medical Association (AMA), whose primary mission is to advocate for physicians and patients, have shifted much of the burden onto physicians. They argue that “doctors making the final call in care are ultimately responsible for their decisions.” (Source: Politico.) The OECD, WHO, FDA, and AAFP are silently nodding in agreement. (Source: Polevikov S. Advancing AI in healthcare: A comprehensive review of best practices. Clin Chim Acta. 2023 Aug 1;548:117519. doi: 10.1016/j.cca.2023.117519. Epub 2023 Aug 16. PMID: 37595864.) To add insult to injury, the AMA has been playing both sides, flip-flopping on AI liability: it argues that doctors should have limited liability, yet still insists that physicians, as the final decision-makers, are the ones left holding the bag. (Sources: AMA AI Principles, Politico.)
“Physicians [in the U.S.] are the ones that kind of get left holding the bag.” —Michelle Mello, a Stanford health law scholar.
In addition to the uncertainty surrounding AI liability, 81% of U.S. clinicians believe AI could “erode critical thinking,” and 77% report that AI tools have made them less productive while increasing their workload. (Sources: The Deep View Report, The Register, Dr. Jeffrey Funk on LinkedIn.)
Conclusion
Over the past decade, the United States has led the world in innovation and economic growth, but healthcare has been another story. Europe, particularly Western Europe, has experienced a “lost decade” economically, yet it has surged ahead in healthcare AI adoption. Its frameworks for assigning responsibility and liability in deploying and using AI in healthcare are setting standards the U.S. is struggling to match.
Wouldn’t it be nice to have the U.S. economy with German healthcare? 😉
👉👉👉👉👉 Hi! My name is Sergei Polevikov. In my newsletter ‘AI Health Uncut’, I combine my knowledge of AI models with my unique skills in analyzing the financial health of digital health companies. Why “Uncut”? Because I never sugarcoat or filter the hard truth. I don’t play games, I don’t work for anyone, and therefore, with your support, I produce the most original, the most unbiased, the most unapologetic research in AI, innovation, and healthcare. Thank you for your support of my work. You’re part of a vibrant community of healthcare AI enthusiasts! Your engagement matters. 🙏🙏🙏🙏🙏