Using a convolutional neural network to predict readers' estimates of mammographic density for breast cancer risk assessment
6 July 2018
Georgia V. Ionescu, Martin Fergie, Michael Berks, Elaine F. Harkness, Johan Hulleman, Adam R. Brentnall, Jack Cuzick, D. Gareth Evans, Susan M. Astley
Proceedings Volume 10718, 14th International Workshop on Breast Imaging (IWBI 2018); 107180D (2018) https://doi.org/10.1117/12.2318464
Event: The Fourteenth International Workshop on Breast Imaging, 2018, Atlanta, Georgia, United States
Abstract
Background: Mammographic density is an important risk factor for breast cancer. Recent research demonstrated that percentage density assessed visually using Visual Analogue Scales (VAS) showed stronger risk prediction than existing automated density measures, suggesting readers may recognise relevant image features not yet captured by automated methods.

Method: We have built convolutional neural networks (CNNs) to predict VAS scores from full-field digital mammograms. The CNNs are trained on whole mammographic images, each labelled with the average VAS score of two independent readers, and learn a mapping between mammographic appearance and VAS score so that, at test time, they can predict the VAS score for an unseen image. Networks were trained on 67520 mammographic images from 16968 women, and tested on a large dataset of 73128 images and on case-control sets comprising contralateral mammograms of screen-detected cancers and prior images of women whose cancers were detected subsequently, matched to controls on age, menopausal status, parity, HRT use and BMI.
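As a rough illustration of the labelling scheme described above (not the authors' code), the regression target for each training image is the mean of the two independent readers' VAS scores. A minimal sketch with hypothetical scores:

```python
import numpy as np

# Hypothetical VAS scores (0-100 percentage density) assigned to five
# mammographic images by two independent readers.
reader_1 = np.array([12.0, 35.5, 60.0, 22.5, 48.0])
reader_2 = np.array([18.0, 30.5, 66.0, 27.5, 42.0])

# Each image's training label is the average of the two readers' scores.
targets = (reader_1 + reader_2) / 2.0

# A CNN trained on (image, target) pairs would then minimise a regression
# loss (e.g. mean squared error) between its predicted VAS and these labels.
print(targets)  # [15. 33. 63. 25. 45.]
```

Averaging the two readers reduces inter-reader variability in the labels, giving the network a less noisy regression target than either reader alone.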

Results: Pearson's correlation coefficient between readers' and predicted VAS in the large dataset was 0.79 per mammogram and 0.83 per woman (averaging over all views). In the case-control sets, odds ratios of cancer in the highest versus lowest quintile of percentage density were 3.07 (95% CI: 1.97-4.77) for the screen-detected cancers and 3.52 (95% CI: 2.22-5.58) for the priors, with matched concordance indices of 0.59 (95% CI: 0.55-0.64) and 0.61 (95% CI: 0.58-0.65) respectively.
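The per-mammogram and per-woman correlations reported above differ because averaging a woman's predictions over all of her views reduces per-image noise before correlating. A sketch of the two computations with NumPy, using illustrative data rather than the study's:

```python
import numpy as np

# Hypothetical VAS scores: four views (two per breast) for each of three
# women. Rows are women, columns are views.
reader_vas    = np.array([[20.0, 22.0, 19.0, 21.0],
                          [55.0, 50.0, 58.0, 53.0],
                          [35.0, 33.0, 36.0, 40.0]])
predicted_vas = np.array([[18.0, 24.0, 20.0, 22.0],
                          [52.0, 49.0, 60.0, 51.0],
                          [38.0, 30.0, 35.0, 41.0]])

# Per-mammogram correlation: every image is a separate observation.
r_per_image = np.corrcoef(reader_vas.ravel(), predicted_vas.ravel())[0, 1]

# Per-woman correlation: average each woman's views first, then correlate.
r_per_woman = np.corrcoef(reader_vas.mean(axis=1),
                          predicted_vas.mean(axis=1))[0, 1]
```

With real data, the per-woman correlation is typically higher (0.83 versus 0.79 here) because view-to-view prediction errors partially cancel in the per-woman average.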

Conclusion: Our fully automated method demonstrated encouraging results that compare well with existing methods, including VAS.
© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE).
Keywords: Breast; Mammography; Cancer; Breast cancer; Convolutional neural networks; Network architectures; Health sciences