Purpose: The Breast Pathology Quantitative Biomarkers (BreastPathQ) Challenge was a Grand Challenge organized jointly by the International Society for Optics and Photonics (SPIE), the American Association of Physicists in Medicine (AAPM), the U.S. National Cancer Institute (NCI), and the U.S. Food and Drug Administration (FDA). The task of the BreastPathQ Challenge was computerized estimation of tumor cellularity (TC) in breast cancer histology images following neoadjuvant treatment.
Approach: A total of 39 teams developed, validated, and tested their TC estimation algorithms during the challenge. The training, validation, and testing sets consisted of 2394, 185, and 1119 image patches originating from 63, 6, and 27 scanned pathology slides from 33, 4, and 18 patients, respectively. The summary performance metric used for comparing and ranking algorithms was the average prediction probability concordance (PK) using scores from two pathologists as the TC reference standard.
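The PK metric rewards algorithms whose predicted cellularity scores preserve the pairwise ordering of the pathologists' reference scores. The following is a minimal sketch of one common formulation of prediction probability (Smith's PK, assumed here; the challenge's exact tie-handling conventions may differ): pairs tied in the reference are skipped, and pairs tied only in the prediction receive half credit.

    import itertools

    def prediction_probability(reference, predicted):
        """Concordance-style agreement (PK) between predicted and reference scores.

        Sketch assuming Smith's prediction probability: over all pairs whose
        reference scores differ, count concordant, discordant, and
        prediction-tied pairs, with prediction ties contributing half credit.
        """
        concordant = discordant = tied = 0
        for (r_i, p_i), (r_j, p_j) in itertools.combinations(zip(reference, predicted), 2):
            if r_i == r_j:            # pairs tied in the reference are excluded
                continue
            if p_i == p_j:
                tied += 1
            elif (r_i - r_j > 0) == (p_i - p_j > 0):
                concordant += 1
            else:
                discordant += 1
        total = concordant + discordant + tied
        return (concordant + 0.5 * tied) / total if total else float("nan")

    # Example: predictions that preserve the reference ordering give PK = 1.0
    print(prediction_probability([0.1, 0.4, 0.8], [0.2, 0.5, 0.9]))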
Results: Test PK performance ranged from 0.497 to 0.941 across the 100 submitted algorithms. The submitted algorithms generally performed well in estimating TC, with high-performing algorithms obtaining comparable results to the average interrater PK of 0.927 from the two pathologists providing the reference TC scores.
Conclusions: The SPIE-AAPM-NCI BreastPathQ Challenge was a success, indicating that artificial intelligence/machine learning algorithms may be able to approach human performance for cellularity assessment and may have some utility in clinical practice for improving efficiency and reducing reader variability. The BreastPathQ Challenge can be accessed on the Grand Challenge website.
The residual cancer burden index is a powerful prognostic factor used to measure neoadjuvant therapy response in invasive breast cancers. Tumor cellularity is one component of the residual cancer burden index and is currently assessed manually by visual estimation. As such, it is subject to inter- and intra-observer variability and is currently restricted to discrete values. We propose a method for automatically determining tumor cellularity in digital slides using deep learning techniques. We train a series of ResNet architectures to output both discrete and continuous values and compare our outcomes with scores acquired manually by an expert pathologist. Our configurations were validated on a dataset of image patches extracted from digital slides, each containing various degrees of tumor cellularity. Results showed that, in the case of discrete values, our models were able to distinguish between regions of interest containing tumor and healthy cells with over 97% test accuracy. Overall, we achieved 76% accuracy over four predefined tumor cellularity classes (no tumor, low, medium, and high tumor cellularity). When computing tumor cellularity scores on a continuous scale, ResNet showed good correlations with manually identified scores, showing potential for computing reproducible scores consistent with expert opinion using deep learning techniques.
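As a rough illustration of the kind of setup described above (not the authors' exact architecture, weights, or training configuration), a ResNet backbone can be repurposed to regress a continuous cellularity score in [0, 1] from an image patch. The sketch below assumes PyTorch with torchvision (0.13 or later for the weights enum) and 224 x 224 RGB patches with scores already normalized to [0, 1]:

    import torch
    import torch.nn as nn
    from torchvision import models

    # ResNet backbone with the classification head replaced by a single
    # sigmoid output, so the network predicts a cellularity score in [0, 1].
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Sequential(
        nn.Linear(model.fc.in_features, 1),
        nn.Sigmoid(),
    )

    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(patches, scores):
        """One optimization step on a batch of patches and reference scores.

        `patches`: float tensor of shape (N, 3, 224, 224).
        `scores`:  float tensor of shape (N,) with cellularity values in [0, 1].
        """
        model.train()
        optimizer.zero_grad()
        preds = model(patches).squeeze(1)
        loss = criterion(preds, scores)
        loss.backward()
        optimizer.step()
        return loss.item()

For the discrete variant described above, the same backbone would instead end in a linear layer with one output per cellularity class trained with a cross-entropy loss; the regression form shown here corresponds to the continuous-score experiments.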