Wrong diagnoses were also made both in women with dense breasts and in women with fatty breasts.
The FDA-cleared AI algorithm ProFound AI was more likely to produce a false-positive result on the mammograms of Black women than on those of White, Hispanic or Asian women, new research shows. Results of the study were published Tuesday in Radiology.
A team of researchers from the Duke University School of Medicine, led by Yinhao Ren, Ph.D., and Derek L. Nguyen, M.D., examined a diverse set of breast cancer screenings performed at Duke University Medical Center between 2016 and 2019. The patients were 27% White, 26% Black, 28% Asian and 19% Hispanic, with an average age of 54, and all of the mammograms were negative for cancer at the time of the exams. When the researchers ran the scans through the algorithm, it flagged 17% of cases (816 of 4,855) as positive; because every exam was known to be negative, each of those flags was a false positive. Black patients were 45% more likely than White patients to receive a false-positive case score.
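To make the arithmetic behind these comparisons concrete, here is a minimal Python sketch of how a per-group false-positive rate and the relative risk versus White patients could be computed. The data frame, the column names (`race`, `ai_flagged`) and the toy numbers are illustrative assumptions, not data from the study; in a cohort like this one, where every exam is known to be cancer-negative, any positive AI case score counts as a false positive.

```python
import pandas as pd

# Hypothetical screening records: every exam is known cancer-negative,
# so any positive AI case score (ai_flagged == 1) is a false positive.
df = pd.DataFrame({
    "race": ["White", "White", "Black", "Black", "Asian", "Hispanic"] * 4,
    "ai_flagged": [0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1,
                   0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0],
})

# False-positive rate per group: the share of negative exams the AI flagged.
fp_rate = df.groupby("race")["ai_flagged"].mean()
print(fp_rate)

# Relative risk versus White patients; a value of 1.45 for Black patients
# would correspond to the "45% more likely" disparity the study reports.
rel_risk = fp_rate / fp_rate["White"]
print(rel_risk.round(2))
```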
With radiologists facing burnout and staffing shortages, the use of artificial intelligence looks increasingly promising, even though the impact of patient characteristics such as race on its performance has not been thoroughly studied, the authors write. Faster turnaround of test results is especially attractive because many breast clinics are switching from traditional digital mammography to digital breast tomosynthesis, or 3D mammography, which is more comprehensive but takes radiologists twice as long to read.
Although the study focused on only one AI vendor, these inaccuracies are likely widespread, the authors note. The FDA currently does not require AI algorithms to be trained on diverse data sets, which contributes to the problem, and AI software companies are often reluctant to reveal their training methods, which they treat as proprietary intellectual property.
If future AI models are trained on images with little diversity, these issues will persist, leading to overdiagnosis of cancer and unnecessary fear in patients. That could ultimately widen healthcare disparities in affected communities and breed distrust in AI, a technology that could be genuinely useful if deployed correctly.
“The Food and Drug Administration should provide clear guidance on the demographic characteristics of samples used to develop algorithms, and vendors should be transparent about how their algorithms were developed,” the researchers write. “Continued efforts to train future AI algorithms on diverse data sets are needed to ensure standard performance across all patient populations.”