Open Access | 26 April 2023
Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment
Abstract

Purpose

There is an increasing interest in developing medical imaging-based machine learning methods, also known as medical imaging artificial intelligence (AI), for the detection, diagnosis, prognosis, and risk assessment of disease, with the goal of clinical implementation. These tools are intended to improve traditional human decision-making in medical imaging. However, biases introduced in the steps toward clinical deployment may impede their intended function, potentially exacerbating inequities. Specifically, medical imaging AI can propagate or amplify biases introduced in the many steps from model inception to deployment, resulting in systematic differences in the treatment of different groups. Recognizing and addressing these sources of bias is essential for algorithmic fairness and trustworthiness and contributes to a just and equitable deployment of AI in medical imaging.

Approach

Our multi-institutional team included medical physicists, medical imaging artificial intelligence/machine learning (AI/ML) researchers, experts in AI/ML bias, statisticians, physicians, and scientists from regulatory bodies. We identified sources of bias in AI/ML and mitigation strategies for these biases, and we developed recommendations for best practices in medical imaging AI/ML development.

Results

Five main steps along the roadmap of medical imaging AI/ML were identified: (1) data collection, (2) data preparation and annotation, (3) model development, (4) model evaluation, and (5) model deployment. Within these steps, which serve as bias categories, we identified 29 sources of potential bias, many of which can impact multiple steps, along with corresponding mitigation strategies.
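The five-step roadmap and the observation that one bias source can impact multiple steps suggest a simple data model. The sketch below encodes the five steps named in the abstract; the specific bias source ("selection bias") and its mitigation strategies are hypothetical placeholders, not taken from the paper's list of 29 sources.

```python
from dataclasses import dataclass, field
from enum import Enum


class RoadmapStep(Enum):
    """The five steps of the medical imaging AI/ML roadmap, in pipeline order."""
    DATA_COLLECTION = 1
    DATA_PREPARATION_AND_ANNOTATION = 2
    MODEL_DEVELOPMENT = 3
    MODEL_EVALUATION = 4
    MODEL_DEPLOYMENT = 5


@dataclass
class BiasSource:
    """A potential source of bias; one source may impact multiple roadmap steps."""
    name: str
    steps: set
    mitigations: list = field(default_factory=list)


# Hypothetical example of a bias source affecting more than one step.
selection_bias = BiasSource(
    name="selection bias",
    steps={RoadmapStep.DATA_COLLECTION, RoadmapStep.MODEL_EVALUATION},
    mitigations=["representative sampling", "external validation cohorts"],
)


def steps_affected(bias: BiasSource) -> list:
    """Return the roadmap steps a bias source impacts, in pipeline order."""
    return sorted(bias.steps, key=lambda step: step.value)
```

Tagging each of the 29 sources with the subset of steps it affects in this way would let a practitioner query, at any stage of development, which biases are relevant and which mitigations apply.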

Conclusions

Our findings provide a valuable resource to researchers, clinicians, and the public at large.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Karen Drukker, Weijie Chen, Judy W. Gichoya, Nicholas P. Gruszauskas, Jayashree Kalpathy-Cramer, Sanmi Koyejo, Kyle J. Myers, Rui C. Sá, Berkman Sahiner, Heather M. Whitney, Zi Zhang, and Maryellen L. Giger "Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment," Journal of Medical Imaging 10(6), 061104 (26 April 2023). https://doi.org/10.1117/1.JMI.10.6.061104
Received: 30 January 2023; Accepted: 3 April 2023; Published: 26 April 2023
Cited by 18 scholarly publications.
KEYWORDS: Data modeling, Education and training, Medical imaging, Artificial intelligence, Performance modeling, Data acquisition, Evolutionary algorithms
