Breast density is an important consideration in breast cancer screening, as the amount of fibroglandular tissue in the breast can mask cancers. BI-RADS density grade estimates are subject to high inter-reader variability, prompting the need for an objective, reproducible assessment of breast density and tissue complexity. In this study, we investigate the utility of radiomic features to quantify texture and shape characteristics of tissue-specific regions of interest. Using Explainable AI (XAI), we identify key features for distinguishing breast density grades by computing each feature's SHapley Additive exPlanations (SHAP) value. SHAP values measure a feature's contribution to the classifier's prediction; the top SHAP-valued features from each density grade are selected as inputs to our classifier model. These features also reveal relationships to clinical knowledge of breast cancer pathophysiology. Logistic regression classifiers fit to our radiomic features achieved a mean AUC per density grade class of [A: 0.949±0.055, B: 0.877±0.055, C: 0.884±0.023, D: 0.893±0.076] over nested five-fold cross-validation. Pooled confusion matrices show that class imbalance can affect the proposed method, particularly for density grades A and D. Furthermore, unsupervised clustering using Uniform Manifold Approximation and Projection (UMAP) on our radiomic feature set shows inherent separability of the four density grades. The results of our preliminary analysis highlight how clinically interpretable radiomic features show promise as an important tool for breast cancer screening, preserving predictive performance while introducing AI explainability.
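The workflow described above can be sketched as follows, assuming a synthetic stand-in for the radiomic feature matrix (the real features, nested-CV setup, and SHAP tooling are not specified here). For a linear model with roughly independent standardized features, the exact SHAP value of feature i on one sample reduces to w_i·(x_i − E[x_i]), which lets us rank features without an external SHAP library:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomic feature matrix; 4 classes = density grades A-D.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Per-class (one-vs-rest) AUC over 5-fold cross-validation, as in the abstract.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
per_class_auc = np.zeros((5, 4))
for fold, (tr, te) in enumerate(cv.split(X, y)):
    scaler = StandardScaler().fit(X[tr])
    clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X[tr]), y[tr])
    prob = clf.predict_proba(scaler.transform(X[te]))
    for c in range(4):
        per_class_auc[fold, c] = roc_auc_score(y[te] == c, prob[:, c])
mean_auc = per_class_auc.mean(axis=0)  # mean AUC per density grade

# Exact SHAP values for a linear model (independent-feature assumption):
# phi_i = w_i * (x_i - E[x_i]).  Shown for grade "A" (class 0).
Xs = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(Xs, y)
shap_grade_a = clf.coef_[0] * (Xs - Xs.mean(axis=0))  # (n_samples, n_features)
top_features = np.argsort(np.abs(shap_grade_a).mean(axis=0))[::-1][:5]
```

In practice, the mean absolute SHAP value per feature gives the global importance ranking used to select classifier inputs; the study's nested cross-validation would wrap this selection inside an outer loop to avoid selection bias.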
KEYWORDS: Education and training, Magnetic resonance imaging, Breast, Batch normalization, Deep learning, Spatial resolution, Image segmentation, Spatial learning, Reproducibility, Breast cancer
While standard-of-care breast MRI primarily includes T1-weighted (T1w) fat-suppressed images, nonfat-suppressed images are not always acquired but may be needed to detect fat necrosis or fatty lesions. With the advent of abbreviated MRI protocols to increase the accessibility of MRI for breast cancer screening, it is unlikely that imaging exams will contain both fat- and nonfat-suppressed images. Additionally, nonfat-suppressed images are integral for downstream quantitative analyses. Deep learning has seen increased use in medical imaging for contrast synthesis; however, there is limited work in the breast. This study aims to develop a reproducible, modular deep learning framework called Sat2Nu for generating nonfat-suppressed images from fat-suppressed inputs with limited training data. We retrospectively analyzed 2D slices from 643 bilateral sagittal T1w MRI screening exams with corresponding fat- and nonfat-suppressed scans from the University of Pennsylvania. One central slice was selected from each breast to yield 1,286 2D images. We trained a U-Net architecture on the entire dataset, where nonfat-suppressed images served as the ground truth. We randomly selected 20% of the data as an in-distribution validation set. The normalized root mean square error (NRMSE) and structural similarity index (SSIM) were used as performance metrics. We achieved a training NRMSE and SSIM of 0.143 and 0.855, respectively. Validation metrics on the in-distribution validation set were, respectively, 0.099 and 0.889. In conclusion, our preliminary results demonstrate representational capacity for the network to learn nonfat-suppressed contrast from fat-suppressed MRIs, which could develop into a promising solution for generating missing scans in the abbreviated setting and for downstream quantitative analyses dependent on nonfat-suppressed images. Current efforts include external validation and investigating other generative networks and loss functions for improving generalizability.
Importantly, we are focusing on designing a reproducible pipeline that would allow future users to easily implement different architectures.
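The two evaluation metrics named above can be written out directly. A minimal numpy sketch follows; note the hedges in the comments: the NRMSE normalization convention (dynamic range vs. reference mean) is not stated in the abstract, and the SSIM shown is a single-window simplification of the usual sliding-window SSIM:

```python
import numpy as np

def nrmse(ref, pred):
    """RMSE normalized by the dynamic range of the reference image.
    (Other conventions normalize by the reference mean; the abstract
    does not specify which was used.)"""
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def global_ssim(ref, pred, data_range=1.0):
    """Single-window SSIM over the whole image -- a simplification of
    the usual sliding-window SSIM, sufficient to show the formula."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = np.mean((ref - mu_x) * (pred - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy stand-ins: a "ground-truth" nonfat-suppressed slice and a noisy synthesis.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
synth = truth + 0.05 * rng.standard_normal((64, 64))
```

A perfect synthesis gives NRMSE = 0 and SSIM = 1; the training/validation values reported above (NRMSE 0.143/0.099, SSIM 0.855/0.889) sit between these ideals and an uncorrelated prediction.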
The aim of this retrospective case-cohort study was to perform additional validation of an artificial intelligence (AI)-driven breast cancer risk model in a racially diverse cohort of women undergoing screening. We included 176 breast cancer cases with non-actionable mammographic screening exams 3 months to 2 years prior to cancer diagnosis and a random sample of 4,963 controls from women with non-actionable mammographic screening exams and at least one year of negative follow-up (Hospital of the University of Pennsylvania, PA, USA; 9/1/2010-1/6/2015). A risk score for each woman was extracted from full-field digital mammography (FFDM) images via an AI risk prediction model previously developed and validated in a Swedish screening cohort. The performance of the AI risk model was assessed via the age-adjusted area under the ROC curve (AUC) for the entire cohort, as well as for the two largest racial subgroups (White and Black). The performance of the Gail 5-year risk model was also evaluated for comparison. The AI risk model demonstrated an AUC of 0.68 (95% CI [0.64, 0.72]) for all women, 0.67 [0.61, 0.72] for White women, and 0.70 [0.65, 0.76] for Black women. The AI risk model significantly outperformed the Gail risk model for all women (AUC = 0.68 vs 0.55, p<0.01) and for Black women (AUC = 0.71 vs 0.48, p<0.01), but not for White women (AUC = 0.66 vs 0.61, p=0.38). These preliminary findings in an independent dataset suggest promising performance of the AI risk prediction model in a racially diverse breast cancer screening cohort.
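The evaluation above hinges on an age-adjusted AUC. As a hedged illustration, the sketch below uses age-stratified AUCs averaged across strata, which is only a simple stand-in for the (likely covariate-adjusted ROC) method the study actually used; the risk scores, age bins, and prevalence are all synthetic assumptions:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
age = rng.integers(40, 75, n)
cancer = rng.random(n) < 0.04  # synthetic case indicator (~4% prevalence)
# Hypothetical risk scores: "ai" is informative, "gail" is nearly random.
ai_score = 0.5 * cancer + rng.random(n)
gail_score = 0.1 * cancer + rng.random(n)

def age_stratified_auc(score, y, age, bins=(40, 50, 60, 75)):
    """Weighted average of within-stratum AUCs -- a simple stand-in for
    an age-adjusted AUC; the study likely used a covariate-adjusted ROC."""
    aucs, weights = [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (age >= lo) & (age < hi)
        if y[m].min() != y[m].max():  # stratum needs both cases and controls
            aucs.append(roc_auc_score(y[m], score[m]))
            weights.append(m.sum())
    return np.average(aucs, weights=weights)
```

Comparing `age_stratified_auc(ai_score, ...)` against `age_stratified_auc(gail_score, ...)` mirrors the AI-vs-Gail comparison reported above; a formal significance test of the AUC difference (e.g., DeLong's test) would be needed to reproduce the reported p-values.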