Interesting post. Yes, the new hires are vulnerable, but one of them worked in the last Trump administration, so we'll see. I wish, though, that you had covered the issue of CHAI and regulatory capture more deeply. There are plenty of experienced healthcare people in this space who disagree with CHAI's approach. Lastly, it's important to remember the scope of ASTP/ONC regulatory authority, which is limited to certified EHRs. They don't regulate AI developed by health systems when that AI is not for sale to others, and I suspect their authority over AI developed by health systems for sale, or over providers who use their own data stores to develop AI outside of a certified EHR, would be questionable under Loper Bright.
And of course, a manufacturer of AI that is a device would certainly be liable under FDA standards.
I think it's good the PSOs are involved; that's the only way to elicit accurate information so improvements can be made. But the staff at the PSOs will have to up their tech knowledge significantly.
Wow! Your insights are incredibly valuable. Thank you!
I understand the regulatory scope of ASTP/ONC is limited. I do know folks there had some influence on AI-related executive orders during the Biden administration. But outside of that and the EHR certification, as you mentioned, I’m not sure.
I may not agree with everything Micky has done, but I always thought his approach was much more academic and diligent than most of his predecessors. I really appreciated that. We’ll see what happens with the next head of that agency.
As I wrote before, despite the FDA proposals - including the latest one just a couple of weeks ago - not a single AI tool in healthcare built on foundation models like LLMs has been approved by the FDA. We can debate whether they need approval or not, but that's the reality.
And thank you for your support of my humble publication! I’m a one-person operation, so the support of folks like you - who care about our common goal of fixing healthcare - is deeply appreciated.