My 12 Questions for Healthcare Regulators about AI Biases and Health Equity
I invite healthcare regulators to use my work on best practices as a foundation for the AI discussion, and perhaps as the basis for the AI regulatory framework in healthcare.
Healthcare is a critical domain where AI can improve lives. But precisely because the stakes are so high, it is also a domain where AI cannot be taken lightly.
Does AI in healthcare need to be regulated? It seems like an easy question. But I want everyone to pause and consider what exactly we are trying to regulate, and what regulation even means.
If regulation is needed, what form should it take? A law? A set of principles? An array of ‘guardrails’? A list of guidelines?
Who will offer the AI regulatory framework in healthcare? The FDA, AMA, AAFP, Congress?
We must be cautious here. As the EU pushes its AI Act through (good for them), can we realistically expect any government to know exactly what AI is capable of and to stay on top of AI advancements that seem to arrive every two weeks? It’s akin to watching a bureaucratic dog chase its tail.
When government agencies show interest and concern, it can signal confidence to the industry. However, regulating every minor detail might backfire, potentially stifling the AI innovation I believe is critical for the U.S. healthcare system.
Instead, in my recent paper, “Advancing AI in healthcare: A comprehensive review of best practices,” I propose formulating industry best practices for AI in healthcare rather than specific laws and detailed protocols. (For my paid subscribers, I provide a link to the full-text PDF at the end of the article.)
I hesitate to use HIPAA as an example, but that’s all I got. 😊 Consider HIPAA, enacted in 1996: it has been “the law of the healthcare land” for 27 years. Not necessarily because it’s an exceptional law, but because, rather than prescribing a stringent technical protocol, it has served more as guidance. For a medical entity, or an entity partnering with one, it emphasizes the need for appropriate technology safeguards for patient data, ensuring the data is secure, standardized, and portable. This is guidance, not a strict protocol.
The FDA, for instance, already provides guidance and oversight for medical devices, including software as a medical device (SaMD). This is commendable: “guidance and oversight”, not strict laws and regulations governing areas the FDA might not be fully versed in.
I invite healthcare regulators to use my work on best practices as a foundation for this discussion, and perhaps as the basis for the AI regulatory framework in healthcare. As I mention in my paper, regulators should not be ashamed to seek assistance and engage in discussions with healthcare providers, patient advocate groups, ethicists, economists, data scientists, engineers, and other stakeholders.
In my paper, I discuss at length (subject to editorial size limits, unfortunately) the research on AI biases and health equity in light of recent advances in artificial intelligence.
I pose 12 questions for healthcare regulators, not expecting answers but aiming to initiate a conversation, specifically about AI biases and health equity:
1. Which regulatory body in U.S. healthcare is willing to take the lead in discussions about applying AI to clinical practices and healthcare overall? One big reason patients and providers hesitate to implement the latest AI innovations in healthcare is the current lack of industry-accepted best practices. The FDA, WAME, AMA, and AAFP have all released statements following the launch of OpenAI’s ChatGPT. Notably, only the AMA and the Artificial Intelligence Industry Innovation Coalition (AI3C) specifically addressed health equity. However, no meaningful action has followed these statements. It’s time for someone to step up!
2. How can we ensure that the racial biases evident in the 2016 Medicaid algorithm and in UnitedHealth’s Optum algorithm, reported in 2019, are not repeated? Are we confident these algorithms have been rectified or replaced?
3. Will healthcare regulators and stakeholders consider ‘algorithmovigilance’ (or ‘policy layers’) as a cost-effective solution to counteract AI biases in health systems? (I sketch one possible form of such monitoring after this list.)
4. Will regulatory strategies in healthcare prioritize bias mitigation throughout the AI algorithm lifecycle?
5. Are health organizations advocating for ‘AI fairness’, akin to the approach in Obermeyer et al.’s 2019 study, which found that a widely used risk algorithm under-referred Black patients because it used healthcare costs as a proxy for health needs, and that relabeling the prediction target to actual health substantially reduced the bias?
6. What are the medical community’s views on so-called “de-biasing” algorithms addressing racial, gender, and algorithmic biases, as seen in implementations by IBM and MIT? (A brief de-biasing sketch follows this list.)
7. Would the medical community benefit more by adjusting data collection policies to fight bias, as suggested by Stanford researchers?
8. Should approaches to mitigate AI biases in primary care differ from those in specialty areas? (Healthcare journal)
9. How can AI assist areas in dire need of support, such as primary care, with infrastructure upgrades, delivery transformation, evaluation modernization, algorithm marketing authorization, and reimbursement, all while promoting social justice and health equity?
10. How will health organizations address ethical and governance concerns prior to AI model development?
11. How do we ensure AI inclusivity for individuals identifying beyond the binary gender spectrum, in the interest of health equity? Current digital health systems (DHS) inadequately capture nuanced gender, sex, and sexual orientation (GSSO) data.
12. How will third-party technology companies be monitored regarding potential AI biases and errors, especially when profit motives might eclipse collective welfare? Preventing coded biases – those unintentionally integrated into AI models – is essential to ensure fair outcomes in healthcare.
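To make question 3 concrete, below is a minimal, hypothetical sketch of what ‘algorithmovigilance’ as a policy layer might look like: a post-deployment monitor that audits each batch of an algorithm’s decisions for group disparities and raises an alert when the gap exceeds a threshold. The field names, the 10% tolerance, and the print-based alert are my illustrative assumptions, not an established standard.

```python
# A hypothetical 'policy layer' for algorithmovigilance: continuously
# audit a deployed model's decisions for selection-rate gaps across groups.
from dataclasses import dataclass

@dataclass
class EquityMonitor:
    group_field: str = "race"   # protected attribute to audit (assumed field name)
    max_gap: float = 0.10       # tolerated gap in favorable-decision rates (assumed)

    def audit(self, decisions: list[dict]) -> dict:
        """Compute per-group favorable-decision rates for one batch of decisions."""
        rates = {}
        for group in {d[self.group_field] for d in decisions}:
            batch = [d for d in decisions if d[self.group_field] == group]
            rates[group] = sum(d["approved"] for d in batch) / len(batch)
        gap = max(rates.values()) - min(rates.values())
        if gap > self.max_gap:
            # In production this would notify a human review board, not just print.
            print(f"ALERT: selection-rate gap {gap:.2f} exceeds {self.max_gap:.2f}")
        return rates

monitor = EquityMonitor()
print(monitor.audit([
    {"race": "A", "approved": 1}, {"race": "A", "approved": 1},
    {"race": "B", "approved": 0}, {"race": "B", "approved": 1},
]))
```

The point is not the specific metric (the same wrapper could track calibration or false-negative rates by group) but that the vigilance happens continuously after deployment, which is what makes it relatively cheap compared with rebuilding models.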
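Similarly, for question 6, here is a minimal sketch of one common pre-processing “de-biasing” technique, reweighing, using AIF360, IBM’s open-source fairness toolkit. The toy cohort, column names, and group encodings are hypothetical; treat this as an illustration of the technique under those assumptions, not a vetted clinical recipe.

```python
# A toy reweighing example with IBM's AIF360 fairness toolkit.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical cohort: 'race' is the protected attribute (1 = privileged
# group), 'label' is the favorable outcome (1 = referred to a care program).
df = pd.DataFrame({
    "race":  [1, 1, 1, 1, 0, 0, 0, 0],
    "age":   [65, 72, 58, 80, 63, 70, 55, 77],
    "label": [1, 1, 0, 1, 0, 1, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["race"]
)
priv, unpriv = [{"race": 1}], [{"race": 0}]

# Statistical parity difference before mitigation: negative values mean the
# unprivileged group receives the favorable label less often.
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unpriv, privileged_groups=priv
)
print("Mean difference before:", before.mean_difference())

# Reweighing assigns instance weights that balance label rates across
# groups before any downstream model is trained.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighted = rw.fit_transform(dataset)
print("Instance weights:", reweighted.instance_weights)
```

A model trained with these instance weights sees group-balanced label rates; in-processing and post-processing mitigations are alternatives with different trade-offs, and none of them substitutes for scrutinizing the label itself, as question 5 suggests.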
Let’s start this dialogue on AI regulations in healthcare.