What if a Physician Doesn’t Use AI and Something Bad Happens?
That’s the question that was asked, twice, at last week’s congressional hearing, “Understanding How AI is Changing Health Care”. The witnesses’ responses were actually pretty good. Their answers to technical AI questions, on the other hand, were an absolute disaster.
So, what have we learned from last week’s House of Representatives Health Subcommittee Hearing: “Understanding How AI is Changing Health Care”?
Honestly, not much. I’m biased (no pun intended), but given that the overwhelming majority of the committee’s questions focused on AI bias, data privacy in healthcare, AI ethics, AI transparency, AI safety, and the responsible use of AI in healthcare, the committee members would have learned much more from my recent paper “Advancing AI in healthcare: A comprehensive review of best practices”. Seriously. Not a single witness was an expert on best practices for AI in healthcare.
I’m not saying the witnesses weren’t experts. They were, except, in my opinion, for the “suit” from Siemens, who seemed like the odd one out. (To clarify, I’m talking about this specific event. I wouldn’t be surprised if he is a very nice man in general.)
An Embarrassing Moment in Congress for Siemens Healthineers.
My concern is that Congress seems unable to grasp that bringing in C-suite executives and asking them technical questions about supervised, unsupervised, reinforcement learning, and generative AI models is not effective. The committee should have known better than to expect specific answers to specific questions from “suits”. Typically, their responses are canned, general, and often fail to address the question directly.
In the committee’s defense, when someone’s title reads “Head of Digital Health”, it implies they should know something about AI applied to health.
At 1:42:51 in the hearing:
Rep. Gus Bilirakis:
“Mr. Shen, can you tell us about the role of generative AI, what it is, and what its potential can be within the health care sector?”
Mr. Peter Shen, Head of Digital Health – North America, Siemens Healthineers:
“With generative AI here, we see the greatest potential in the ability for the AI to consume information about the patient themselves. So, when a patient goes to get an exam for a diagnosis, leveraging generative AI can help identify precisely what diagnosis should be looked for. Another area where generative AI benefits medical imaging is in interpreting the images themselves. It can translate complicated medical language into layman’s terms for the patient, helping them better understand the test results from their exam.”
Based on this statement alone, you should be asking yourself, “Does Siemens actually know what they’re doing in AI?” But then again, this person doesn’t do the actual AI work. He is a “talking head”. I’m sure the people who actually do the AI work at Siemens know what they’re doing.
However, it was a super embarrassing statement from someone representing a respected company.
Mr. Shen didn’t know what generative AI was (oops), never defined it even though the congressman asked him to, and pretended to be someone he was not.
Most importantly, suggesting that generative AI be used for medical diagnostics is dangerous and irresponsible.
Sam Altman of OpenAI once said that ChatGPT hallucinations are part of their “magic.”
“The fact that these AI systems can come up with new ideas and be creative, that’s a lot of the power.”
—Sam Altman
Creativity - yes. Medical diagnosis - are you kidding me? Mr. Shen is saying, “[with generative AI], the patient gets a better understanding of what’s going on in the test results”. Have you used ChatGPT or any other Large Language Model (LLM) chatbot? Generative AI models in their current form lie (aka hallucinate), are highly inconsistent (aka you may get a different answer to the same question), and are highly inaccurate.
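For anyone who hasn’t poked at this mechanism directly, here is a minimal, self-contained Python sketch of why “same question, different answer” happens. It is a toy illustration, not anyone’s production system: the four-word vocabulary and the logits below are invented for this example, but softmax-with-temperature sampling is the standard decoding step behind chatbot output.

```python
import math
import random

# Toy next-token distribution. The four "tokens" and their logits are
# invented for illustration; a real LLM produces logits over tens of
# thousands of tokens, conditioned on the prompt.
VOCAB = ["benign", "malignant", "inconclusive", "artifact"]
LOGITS = [2.0, 1.6, 1.2, 0.4]

def sample_next_token(logits, temperature=1.0):
    """Softmax-with-temperature sampling: the decoding step that makes
    chatbot output non-deterministic at nonzero temperature."""
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    return random.choices(VOCAB, weights=weights, k=1)[0]

# "Ask the same question" five times at a typical chat temperature:
print([sample_next_token(LOGITS, temperature=1.0) for _ in range(5)])
# e.g. ['benign', 'inconclusive', 'benign', 'malignant', 'benign']

# Near temperature 0, sampling approaches greedy decoding and becomes
# repeatable. Repeatable, however, is not the same as correct.
print([sample_next_token(LOGITS, temperature=0.01) for _ in range(5)])
# ['benign', 'benign', 'benign', 'benign', 'benign']
```

A chatbot running at a nonzero temperature is sampling from a probability distribution at every single token, so run-to-run variation isn’t a bug to be patched out; it’s the design. That’s fine for brainstorming. For a diagnosis, it’s disqualifying.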
Mr. Shen’s statement was a major embarrassment, reflecting a lack of understanding of generative AI. It’s crucial for representatives of respected companies to be more informed, especially in high-stakes fields like healthcare.
I hope Mr. Shen corrects his statements.
The incident highlights a broader issue: why not invite actual AI experts to these hearings instead of their higher-ups in suits? Honesty and humility, like admitting a lack of specific knowledge, would be refreshing and respectable in such a context.
Why can’t someone for once be honest and say, “Congressman, I don’t know the answer to this question, even though my title implies I should know. I yield my time to someone who knows.”
Wouldn’t that be one of the coolest moments in Congress history? I would have immense respect for such a person, one who chooses honesty over pretense, instead of providing misleading information to Congress and the American public.
I’m sure Mr. Shen is a nice man. But all I’m asking is to set the record straight.
Despite this embarrassing moment for Siemens Healthineers, the rest of the hearing offered some interesting insights, though nothing particularly new.
I listened to the whole hearing very carefully. Here are my two takeaways:
1. Physicians are the ultimate decision makers, not AI.
Everyone at the hearing agreed that, at least for the foreseeable future, physicians are the ultimate decision makers. They may use AI or anything else to make those decisions more informed. But physicians are responsible for the medical decisions. That was definitely the consensus in the room.
I thought Michael Schlosser, M.D., MBA of HCA Healthcare, said it best (at 2:29:25 in the hearing):
“I think that we need to remind ourselves that healthcare decisions are made by physicians and practitioners that they should be the ultimate decider when it comes to coverage, when it comes to do you need to be admitted to the hospital, what treatment do you need - these need to be made by our trained healthcare physicians. I think it’s important that we understand that as a community, as an industry that we’re not turning over decision-making. These [AI models - S.P.] are tools in their tool belt that and we need to view them as such, not as an authoritative decision that you know that someone should be held accountable to.”
—Michael Schlosser, M.D., MBA
2. Concern among members of Congress that physicians don’t use AI enough.
This one was completely unexpected. Policymakers are concerned about the risks and liabilities facing physicians who don’t use AI to treat patients. Unexpected, but I’m glad it was brought up. When we analyze the use of AI in healthcare, the analysis should be symmetric, in my view: the risks and benefits when medical professionals use AI, and, equally important, the risks and benefits when medical professionals refuse to use AI.
Here is the exact interaction from the committee hearing, for the record:
At 2:28:53 in the hearing:
Rep. Earl L. “Buddy” Carter:
“How is that going to impact the practice of medicine if a physician doesn’t use AI and then something happens and then all of a sudden they’re sued because you didn’t use something that was available that you should have used?”
David Newman-Toker, M.D., Ph.D., Director, Division of Neuro-Visual and Vestibular Disorders, Department of Neurology, Professor of Neurology, Johns Hopkins University School of Medicine:
“If we can prove that AI systems save lives then people should be using them and if we can’t then we should be relying on clinician judgment.”
At 2:45:10 in the hearing:
Rep. Diana Harshbarger:
“Do you see a scenario where litigation might increase if doctors don’t utilize AI?”
Christopher Longhurst, M.D., Chief Medical Officer, Chief Digital Officer and Associate Dean, UC San Diego Health:
“A recent Boston Globe survey of patients asked uh would you uh see a doctor that was not using AI and the predominant answer from patients was I would be concerned if my doctor was not using the latest tools. So as was recently described by Dr. David Newman-Toker if these tools are shown to be best practice, if they can decrease mortality, if they can increase survivorship then they will become a best practice that should be used in every case.”
My Take
AI in medicine is like teenage sex: everyone talks about it, few really know how to do it, everyone thinks everyone else is doing it, so everyone claims they are doing it. The truth? It’s a nascent field with as much potential for harm as for good.
Yes, per Sam Altman, AI can be magical, but in healthcare, we need more than magic. We need precision, accuracy, and reliability. The thought of using generative AI in medical diagnostics is as absurd as using a Magic 8-Ball for brain surgery. It’s not just irresponsible. It’s a gamble with human lives.
And let’s talk about Congress and their dance with AI. It’s like watching your grandparents trying to use TikTok - well-intentioned but painfully misguided. The focus should be on educating themselves and bringing in real experts, not just suits with fancy titles. The truly “cool” moment in Congress won’t come merely from someone admitting their ignorance. It’ll come when the members actually understand what they’re talking about.
As for physicians being the ultimate decision-makers, that’s a no-brainer. AI is a tool, not a replacement for human judgment. The concern about physicians not using AI is like worrying about a carpenter not using a specific brand of hammer. Tools should aid, not dictate.
The House hearing was a mix of face-palming moments and eye-opening insights. It’s high time we stop treating AI like a panacea and start understanding it as a tool - a powerful one, but just a tool. The real intelligence, especially in healthcare, should be human.
👉👉👉👉👉 Hi! My name is Sergei Polevikov. In my newsletter ‘AI Health Uncut’, I combine my knowledge of AI models with my unique skills in analyzing the financial health of digital health companies, no pun intended. Why “Uncut”? Because I never sugarcoat or filter the hard truth. Support my work and join a vibrant community of healthcare AI enthusiasts by subscribing to ‘AI Health Uncut’! Your engagement matters. 🙏🙏🙏🙏🙏
Using generative AI for medical diagnostics is dangerous and irresponsible. AI companies should display a prominent disclaimer everywhere. Because they don't, even the so-called "AI experts" end up confused.
If "AI experts" like Peter Shen, Head of Digital Health – North America at Siemens Healthineers, make such egregiously erroneous statements, what can we expect from everyday users of AI?
In an interview with Bill Gates on January 11, 2024, Sam Altman stated that OpenAI had to put its robotics project on hold. Here is how he explained it: "Creative work, the hallucinations of the GPT models, is a feature, not a bug. It lets you discover some new things. Whereas if you're having a robot move heavy machinery around, you'd better be really precise with that." (https://www.youtube.com/watch?v=PkXELH6Y2lM)
By Sam Altman's own admission, generative AI can be magical. But in healthcare, just as in robotics, we don't need magic. We need precision, accuracy, and reliability.