A study has found that an artificial intelligence (AI) medical diagnostic system can recognize a person’s race from an X-ray with over 90% accuracy. Researchers don’t know how, reports The Boston Globe.
Concerns have been raised that the AI-powered diagnostic process could generate inherently biased results with the potential to mislead human physicians into recommending a particular treatment for patients of a particular race.
Researchers from the Massachusetts Institute of Technology (MIT) and Harvard Medical School were part of the research project, which included scientists from the United States, Australia, Canada and Taiwan. An article explaining the study results was published last month in the medical journal The Lancet Digital Health.
As the Boston Globe reports, the researchers began the process by training algorithms on standard datasets of X-rays and CT scans in which each image, taken from different parts of the human body, was labeled with the race of the person. But the diagnostic images processed by the computer software contained no obvious indication of the race of the people whose images were taken.
After training on the race-labeled images, the system was able to identify the race of patients in a separate set of unlabeled X-ray images, differentiating between Black and white patients.
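The workflow described above can be sketched in a toy form: fit a classifier on examples labeled with an attribute, then predict that attribute on unlabeled examples. This is a minimal illustration only, not the study's actual method; the study used deep learning on full X-ray images, whereas here each "image" is reduced to two synthetic summary features, and all names and values are hypothetical.

```python
# Toy sketch of the train-on-labeled / predict-on-unlabeled workflow.
# NOT the study's method: features are synthetic stand-ins for image
# statistics, and the classifier is a simple nearest-centroid model.
import random

random.seed(0)

def make_image_features(group_mean, n):
    # Each "image" is reduced to two summary numbers (e.g. mean
    # intensity, contrast). Real X-rays would be full pixel arrays.
    return [(random.gauss(group_mean, 1.0), random.gauss(group_mean, 1.0))
            for _ in range(n)]

# Labeled training set: two groups with slightly different statistics.
train = ([(x, "A") for x in make_image_features(0.0, 200)] +
         [(x, "B") for x in make_image_features(2.0, 200)])

def fit_centroids(data):
    # Average the feature vectors per label.
    sums, counts = {}, {}
    for (f1, f2), label in data:
        s = sums.setdefault(label, [0.0, 0.0])
        s[0] += f1
        s[1] += f2
        counts[label] = counts.get(label, 0) + 1
    return {lab: (s[0] / counts[lab], s[1] / counts[lab])
            for lab, s in sums.items()}

def predict(centroids, feat):
    # Assign the label of the nearest centroid (squared distance).
    return min(centroids,
               key=lambda lab: (feat[0] - centroids[lab][0]) ** 2 +
                               (feat[1] - centroids[lab][1]) ** 2)

centroids = fit_centroids(train)

# Unlabeled evaluation set: the classifier recovers the group labels
# from feature statistics alone, with no label given at test time.
test = ([(x, "A") for x in make_image_features(0.0, 100)] +
        [(x, "B") for x in make_image_features(2.0, 100)])
accuracy = sum(predict(centroids, f) == y for f, y in test) / len(test)
print(round(accuracy, 2))
```

The point of the sketch is that the model never sees the attribute at prediction time; it infers it from statistical regularities in the inputs, which is the pattern the researchers observed at much larger scale.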
Marzyeh Ghassemi, assistant professor of electrical engineering and computer science at MIT, and Leo Anthony Celi, associate professor at Harvard Medical School, co-authors of the research report, say those who conducted the study are unable to explain exactly how the AI system was able to recognize the races of the patients from their X-ray images. Ghassemi speculates that the scans may pick up the skin pigment melanin, even though it is not visible to human observers.
The Globe, however, quotes Alan Goodman, professor of biological anthropology at Hampshire College and another co-author of the research report, as saying he does not believe the study results reflect innate differences between people of different races. Goodman thinks the differences detected are likely caused by geography rather than race.
“Instead of using race, if they looked at someone’s geographic coordinates, would the machine do as well? My feeling is that the machine would do just as well. You call this race. I call it geographic variation,” Goodman said, though he also expressed uncertainty about how an AI system could determine geographic differences from X-rays alone.
The researchers cautioned doctors against rushing to adopt such AI-based diagnostic systems, which have the potential to automatically produce biased results based on differences in certain human characteristics, the Globe mentions.
Transparency and explainability of AI are considered necessary to avoid problems, including bias. Efforts to increase the explainability of AI have included principles from the U.S. National Institute of Standards and Technology (NIST) and a UK government standard for algorithmic transparency.