Why Do Cops Trust AI, But Doctors Don't? My 7 Reasons.
It's the 'catch the bad guy at all costs' mentality versus the 'protect the patient at all costs' mentality. It reflects their very different goals.
Last week, I attended the first-ever FTC-organized AI Summit. I was surprised to find myself agreeing with one of the conclusions of the conference: in sensitive areas, particularly the privacy of personal and medical information (video, audio, and text), deploying a (shitty) AI without vetting and approval could be less safe than not deploying AI at all. The FTC’s conclusion: we don’t need AI police at the corporate level, nor, I would add, at the government level, especially if there is a real possibility that this AI policing puts consumers in danger.
Why was I surprised? Well, the FTC has a history of signaling big action but, after all the big talk, going after low-hanging fruit, particularly in healthcare. Antitrust enforcement is the FTC’s job number one, in my opinion. Yet the FTC has been afraid to go after some of the biggest monopolies in the world, the ones that are destroying American healthcare, stifling competition, blocking innovation, and imposing pain on patients and physicians. We know who you are: UnitedHealth, Cigna, CVS, Elevance, Humana, Epic, Cerner, and others.
Instead, the FTC often busies itself impeding innovation and inhibiting new ideas. Its latest target: Artificial Intelligence (AI). The FTC has gone after AI to such an extent that some have started calling it “the Federal AI Commission”.
Even at this AI Summit, FTC Chair Lina Khan made statements like:
“There is no AI exemption from the laws on the books.”
—Lina Khan, chair of the FTC
“Firms cannot use claims of innovation as cover for lawbreaking.”
—Lina Khan, chair of the FTC
But back to consumer and patient privacy: protecting it is another important role of the FTC, and there I do applaud the agency for going after perpetrators.
What does any of this have to do with cops versus doctors versus AI? I’m seemingly starting with a detour, but I promise it’s relevant, at least in my head. 😉
First, I want to talk about two examples of privacy violations: one creepy and outright wrong, the other highly questionable.
Here is the outline of the article:
1️⃣ Rite Aid’s Creepy AI Facial Recognition: When an Entity Has Nothing to Lose, It Makes Dangerous Decisions.
2️⃣ Police Are Using Untested Face Recognition Technology to Identify Suspects.
3️⃣ So, Why Are Doctors and Cops Different in Their Attitude Toward AI? My 7 Reasons.
4️⃣ Algorithmic Unfairness and AI Biases: How Do We Fight Them?
5️⃣ Conclusion.
1. Rite Aid’s Creepy AI Facial Recognition: When an Entity Has Nothing to Lose, It Makes Dangerous Decisions
According to the FTC’s charges, for many years, allegedly from 2012 to 2020, Rite Aid engaged in what I view as the creepiest kind of privacy violation: taking pictures and videos of customers, including children, with or without their consent, and storing them on its servers “to improve customer experience” and to prevent shoplifting. (I will talk about how corporations resell your genetic, geographic, email, and other personal data in one of my next articles.)
What’s really disturbing is that Rite Aid had already been charged by the FTC for customer privacy violations in 2010, when it exposed sensitive health information. But at the time, Rite Aid’s lobby in Washington, D.C., was so strong that the FTC “was convinced” to settle the charges for $1 million. Are you kidding me? A $1 million “penalty” for a company that made $25 billion in revenue at the time? This is such a clear example of why lobbying should be illegal. Instead of deterring the violator and protecting customers, the FTC signaled to Rite Aid that it’s OK to violate its customers’ privacy. Sure enough, the company continued the practice.
It turns out privacy violations were just the tip of the iceberg for Rite Aid. The company had been losing customers and making bad financial decisions, which finally led it to file for Chapter 11 bankruptcy on October 15, 2023. A company once valued at $13 billion is now valued at $22 million and carries $3.3 billion in debt. (Source: CNN.)
Perhaps not coincidentally, the FTC was also looking into Rite Aid’s questionable practices at the same time. My only question is: Why wasn’t Rite Aid on the FTC’s watch list for the past 12 years, since it was a repeat offender?
The FTC alleged that Rite Aid’s AI system was “profoundly flawed.” Here are some examples:
▶️ A Rite Aid employee stopped and searched an 11-year-old girl because of a false match. The girl’s mother reported that she missed work because her daughter was so distraught about the incident.
▶️ Rite Aid employees called the police on a customer because the technology generated an alert against an image that was later described as depicting “a white lady with blonde hair.” The customer was Black!
▶️ Many other customers – people shopping for food, medicine, and other basics – were wrongly searched, accused, and expelled from Rite Aid stores. Sometimes, they were humiliated in front of their bosses, coworkers, or families.
▶️ It has been clear for years that facial recognition systems can perform less effectively, and make egregious mistakes, for people with darker skin and for women. Yet Rite Aid did nothing to address this.
▶️ Besides racial bias, Rite Aid’s AI algorithm also had a gender bias. We see the same pattern in hiring, when employers develop résumé-screening models, trained on their predominantly male workforces, that spuriously reject women; in housing, when advertising platforms steer housing ads away from people based on their sex and estimated race or ethnicity; and in credit, when a lender’s pricing models charge applicants who attended a Historically Black College or University higher rates for refinancing a student loan than similarly situated applicants who did not.
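To make this concrete, here’s a minimal sketch in Python of the kind of subgroup audit that catches these disparities before deployment. Everything in it is hypothetical (the function, the group labels, the toy log); the point is simply that comparing false-match rates across demographic groups is cheap, and a check like this would have flagged a system like Rite Aid’s long before an 11-year-old got searched.

```python
# Hypothetical subgroup audit: compare false-match rates across demographic
# groups before a face recognition system is deployed. A "false match" is an
# alert fired against someone who is NOT actually the person on the watchlist.
from collections import defaultdict

def false_match_rate_by_group(records):
    """records: iterable of (group, system_alerted, was_true_match) tuples."""
    false_alerts = defaultdict(int)  # spurious alerts per group
    innocents = defaultdict(int)     # probes of people not on the watchlist
    for group, system_alerted, was_true_match in records:
        if not was_true_match:       # only non-matches can become false matches
            innocents[group] += 1
            if system_alerted:
                false_alerts[group] += 1
    return {g: false_alerts[g] / n for g, n in innocents.items() if n}

# Toy evaluation log -- a real audit would use thousands of probes.
log = [
    ("group_a", False, False), ("group_a", False, False), ("group_a", True, False),
    ("group_b", True, False),  ("group_b", True, False),  ("group_b", False, False),
]
print(false_match_rate_by_group(log))  # {'group_a': 0.33..., 'group_b': 0.66...}
```

NIST has been running exactly this kind of demographic comparison at scale in its face recognition vendor tests since 2019, which is why none of these disparities should have surprised anyone deploying the technology.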
In my research, I’ve been asking the same question over and over, and I still have no good answer.
In the Rite Aid case, the reasons why the AI algorithm was biased include:
🤖 The model was trained on a sample made up predominantly of white people, similar to the Medicaid case circa 2016 and the UnitedHealth/Optum case circa 2019. Exactly the same reason.
🤖 The training sample consisted of high-quality images, while in reality the CCTV images in Rite Aid stores were of far lower quality than those in the training set. (From FTC Commissioner Alvaro M. Bedoya’s statement: “It used low-quality images that were unsuitable for automated analysis, including those captured using closed-circuit TV cameras and media reports.”)
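A basic pre-deployment check would have exposed this train-versus-deploy mismatch. Here’s a minimal sketch, assuming grayscale images as NumPy arrays. The variance-of-Laplacian sharpness score is a standard blur heuristic, but the function names, toy data, and the comparison itself are my illustration, not anything Rite Aid or its vendor actually ran (that’s the problem).

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Variance of a discrete Laplacian: low values mean blurrier images."""
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def quality_gap(train_imgs, deploy_imgs) -> float:
    """Median deployment sharpness divided by median training sharpness."""
    train = np.median([sharpness(i) for i in train_imgs])
    deploy = np.median([sharpness(i) for i in deploy_imgs])
    return float(deploy / train)

# Toy stand-ins: crisp 'training' photos vs. blocky, low-resolution 'CCTV' frames.
rng = np.random.default_rng(0)
crisp = [rng.random((64, 64)) for _ in range(10)]
blocky = [np.kron(rng.random((8, 8)), np.ones((8, 8))) for _ in range(10)]
print(f"deploy/train sharpness ratio: {quality_gap(crisp, blocky):.3f}")
# A ratio far below 1.0 means the deployed cameras feed the model much
# blurrier faces than it ever saw in training -- a predictable recipe for errors.
```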
Why didn’t Rite Aid properly vet and validate the AI algorithm?
🚨 Rite Aid has been on the brink of bankruptcy for years. An animal is most dangerous when it’s wounded, because it has nothing to lose, and Rite Aid had absolutely nothing to lose. There was no due diligence and no fiduciary duty; perhaps the company simply didn’t want to spend money it didn’t really have on expensive validation.
🚨 Rite Aid relied on an “oral assessment” of this particular AI vendor. (I’ve never heard of that one. Do we now hire AI vendors by taking their word for it?)
Another issue is AI trust, which is the topic of this article. In Rite Aid’s case, management decided that the less employees knew, the better. Rite Aid employees were told nothing about the algorithm, so as far as they were concerned, the computer was 100% correct. Just like in the next example with the police, Rite Aid employees preferred not to ask their superiors additional questions and simply assumed that ‘AI is always right’.
FTC Chair Khan’s speech at the AI Summit also highlighted the agency’s growing appetite for ordering AI perpetrators to delete their models. For instance, the FTC banned Rite Aid from using facial recognition surveillance for five years and required it to delete all biometric data collected in connection with that surveillance. (Sources: FTC Tech Summit, Bloomberg Law.)
While there is a clear desire on the FTC’s part to punish violators, I’m not sure the agency fully understands what it means ‘to delete an AI model’. There is a whole slew of issues with that order that I’m not going to get into right now, but one is obvious: AI models, once created, can be copied and distributed trivially. Moreover, this model was licensed to Rite Aid by a third-party vendor and has been licensed and/or sold to other entities. So what does it really mean ‘to delete an AI model’?
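To illustrate why I’m skeptical, here’s a minimal sketch with hypothetical file names. A trained model is just bytes on disk; any copy made before a deletion order is bit-for-bit identical to the original, and deleting one copy does nothing to the others.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical checkpoint standing in for a licensed facial recognition model.
model = Path("face_model.bin")
model.write_bytes(b"fake model weights for illustration")

backup = Path("offsite/face_model.bin")
backup.parent.mkdir(exist_ok=True)
shutil.copy2(model, backup)  # one of possibly many copies held by the vendor or licensees

model.unlink()               # the 'deletion' removes exactly one copy

digest = hashlib.sha256(backup.read_bytes()).hexdigest()
print(f"surviving copy sha256={digest[:16]}... (the model still exists)")
```

And since the vendor, not Rite Aid, holds the original weights and has licensed them to others, the deletion order reaches exactly one node in a much larger distribution graph.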
The FTC is sending a message to corporate entities using facial recognition: the agency sees a raft of privacy violations and biases, and it stated that AI surveillance systems like the one in the Rite Aid fiasco make stores, and their customers, less safe.
“No one should walk away from this settlement thinking that this Commission affirmatively supports the use of biometric surveillance in commercial settings.”
How can we stop corporations from using shitty AI and harming patients and customers?
I believe the next section is an important bridge to answering the question of why police are over-relying on AI. The situation is eerily similar to the Rite Aid facial recognition fiasco. However, the stakes are much higher here: innocent people are subject to arrest and possible imprisonment due to the AI’s ‘false positive’ errors.
And what do doctors have to do with all this? Let’s dig in…
2. Police Are Using Untested Face Recognition Technology to Identify Suspects