October 17, 2022 - Karessa Weir
An MSU Political Science researcher has found that the use of artificial intelligence in medicine may lead to marginalized people receiving inadequate care.
Dr. Ana Bracic, Assistant Professor, has published a new article in the journal Science that highlights the dangers of increasing exclusion of minority populations through the use of artificial intelligence in medicine.
“Exclusion and racial disparities are so intractable in medicine, despite efforts to reduce them on the part of physicians and health systems,” Dr. Bracic wrote. “The use of AI within a biased system will only make the problems worse.”
The article “Exclusion cycles: Reinforcing disparities in medicine” was co-authored by Shawneequa L. Callier (George Washington University School of Medicine and Health Sciences) and W. Nicholson Price II (University of Michigan Law School) and published in the September issue. The authors looked into clinical practice, data collection and medical AI.
They found that discrimination in the medical world leads some minoritized groups to self-advocate, withdraw from the system, or rely on other response strategies. The “exclusion cycle” occurs when the dominant group assumes the response strategies are inherent behaviors of the minoritized group and not responses to the discrimination.
The study focused on minoritized people, those marginalized by others with greater social power. While the study’s key examples considered Black patients in the United States, the group suggested similar situations could be at play for other groups minoritized on the basis of race, ethnicity, gender identity, disability or a combination of markers.
One example is that Black patients “are frequently believed to feel pain less severely, often based on the belief that Black patients are biologically different from white patients,” they wrote. “Accordingly, physicians are more likely to prescribe inadequate doses of pain treatment.” This discrimination can lead to Black patients leaving treatment, thus leading physicians to believe they did not need the treatment in the first place and increasing bias going forward.
When it comes to research participation, there is an entrenched perception that minoritized patients are less likely to participate in research. That perception enables the discriminatory practice of not recruiting minority participants, which in turn leads to less participation and further reinforces biased beliefs.
And when it comes to artificial intelligence in medicine, the researchers found “disturbingly similar dynamics of exclusion,” which is especially problematic because one of the promises of AI is to decrease bias over time.
“AI systems themselves cannot have negative views of minoritized groups but the humans who write, validate and deploy AI may be racist or biased, especially given coders’ lack of diversity, leading to systems that incorporate anti-minority culture,” they found. “Even if AI systems are designed by unbiased coders striving for neutrality, those systems derive data from and exist within a medical system that has its own anti-minority culture.”
The team argued that AI systems fed biased or incomplete data about minoritized populations are likely to be less accurate, producing lower-quality recommendations and analyses for those patients. For instance, algorithms trained to detect skin cancer often perform worse on patients with darker skin, because most of the datasets come from lighter-skinned patients.
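The mechanism described here can be illustrated with a small synthetic experiment. The sketch below is not from the article and uses no medical data; it is a minimal, hypothetical simulation in which a classifier is trained on data dominated by one group (90%) whose feature distribution differs slightly from a minority group (10%). A single decision threshold fit to the pooled data ends up tuned to the majority, so accuracy on the minority group suffers:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic 1-D feature: positive cases center at 2 + shift,
    # negative cases at 0 + shift; `shift` models a group-level
    # difference in how the feature relates to the label.
    y = rng.integers(0, 2, n)
    x = rng.normal(loc=y * 2.0 + shift, scale=1.0)
    return x, y

# Training data dominated by the majority group (90% vs 10%)
x_a, y_a = make_group(900, shift=0.0)   # majority group
x_b, y_b = make_group(100, shift=1.5)   # minority group

x_train = np.concatenate([x_a, x_b])
y_train = np.concatenate([y_a, y_b])

# "Train" a single decision threshold: midpoint of the pooled class means.
# Because the pool is 90% majority-group data, this threshold is
# effectively tuned to the majority group's distribution.
threshold = (x_train[y_train == 0].mean() + x_train[y_train == 1].mean()) / 2

def accuracy(x, y):
    return ((x > threshold).astype(int) == y).mean()

# Evaluate on fresh samples from each group
xt_a, yt_a = make_group(5000, shift=0.0)
xt_b, yt_b = make_group(5000, shift=1.5)

print(f"majority-group accuracy: {accuracy(xt_a, yt_a):.2f}")
print(f"minority-group accuracy: {accuracy(xt_b, yt_b):.2f}")
```

In this toy setup the shared threshold sits close to the majority group's optimal cut point, so the minority group sees markedly lower accuracy even though the model was "trained on everyone." Real medical AI is far more complex, but the dataset-imbalance dynamic is the same one the authors describe.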
Awareness of these problems is growing, they said. The National Institutes of Health has created the “All of Us” initiative to gather health data from more diverse populations for precision healthcare. But whether AI medicine will earn the trust of minoritized groups remains to be seen.
The solution, according to Dr. Bracic and her co-authors, is to treat the bias systemically with coordinated efforts to ensure both the AI systems and the practitioners guard against the introduction of self-reinforcing biases.
“Though the addition of big data and AI to medicine promises substantial gains, they complicate the picture for reducing bias and require careful efforts to ensure that progress on one front is not rapidly lost on another,” they wrote.