Glioblastoma multiforme (GBM) is among the most malignant of all high-grade brain tumors. Temozolomide (TMZ) is the first-line chemotherapeutic regimen for glioblastoma patients. The methylation status of the O6-methylguanine-DNA-methyltransferase (MGMT) gene is a prognostic biomarker of tumor sensitivity to TMZ chemotherapy. However, the standardized procedure for assessing MGMT methylation status is an invasive surgical biopsy, whose accuracy is susceptible to the resection site and to tumor heterogeneity. Recently, radiogenomics, which associates radiological image phenotypes with genetic or molecular mutations, has shown promise for the non-invasive assessment of treatment response. This study proposes a machine-learning framework for MGMT classification with uncertainty analysis utilizing imaging features extracted from multimodal magnetic resonance imaging (mMRI). The imaging features include conventional texture and volumetric features as well as fractal and multi-resolution fractal texture features. The proposed method is evaluated on publicly available BraTS-TCIA-GBM pre-operative scans and TCGA datasets comprising 114 patients. Experiments with 10-fold cross-validation suggest that the fractal and multi-resolution fractal texture features offer improved prediction of MGMT status. Uncertainty analysis using an ensemble of Stochastic Gradient Langevin Boosting models with multi-resolution fractal features achieves an accuracy of 71.74% and an area under the curve of 0.76. Finally, our analysis shows that the proposed method with uncertainty analysis offers improved predictive performance compared with well-known methods in the literature.
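The ensemble-based uncertainty analysis described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each ensemble member (e.g. one Stochastic Gradient Langevin Boosting model) outputs a probability of methylated MGMT status, and the disagreement threshold is a hypothetical parameter.

```python
from statistics import mean, stdev

def ensemble_predict(member_probs, uncertainty_threshold=0.2):
    """Aggregate per-model probabilities from an ensemble into a class
    label plus an uncertainty flag.  The mean probability gives the
    prediction; the spread across ensemble members serves as a simple
    uncertainty estimate (hypothetical threshold, for illustration)."""
    p = mean(member_probs)   # ensemble probability of MGMT methylation
    u = stdev(member_probs)  # member disagreement = predictive uncertainty
    label = "methylated" if p >= 0.5 else "unmethylated"
    confident = u < uncertainty_threshold
    return label, p, u, confident

# Example: five ensemble members that largely agree on one patient
label, p, u, confident = ensemble_predict([0.72, 0.68, 0.75, 0.70, 0.66])
```

Cases flagged as not confident could then be deferred for invasive confirmation, which is the practical appeal of pairing a prediction with an uncertainty estimate.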
An updated classification of diffuse lower-grade gliomas (LGG) was established in the 2016 World Health Organization Classification of Tumors of the Central Nervous System based on molecular mutations such as TP53 mutation. This study investigates machine learning methods for TP53 mutation status prediction and classification using radiomics and genomics features, respectively. Radiomics features comprise patient age and imaging features extracted from conventional MRI. Genomics features are represented by patients' gene expression profiles obtained from RNA sequencing. This study uses a total of 105 LGG patients, divided into a training set (80 patients) and a testing set (25 patients). Three TP53 mutation prediction models are constructed based on the source of the training features: a TP53-radiomics model, a TP53-genomics model, and a TP53-radiogenomics model. Radiomics feature selection is performed using the recursive feature selection method. For genomics data, the edgeR method is utilized to select genes that are differentially expressed between the mutated and non-mutated TP53 cases in the training set. The classification model is trained using Random Forest and cross-validated using repeated 10-fold cross-validation. Finally, the predictive performance of the three models is assessed on the testing set. The three models, TP53-Radiomics, TP53-RadioGenomics, and TP53-Genomics, achieve predictive accuracies of 0.84±0.04, 0.92±0.04, and 0.89±0.07, respectively. These results show the promise of non-invasive MRI radiomics features, and of fusing radiomics with genomics features, for prediction of TP53 status.
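The radiogenomics model above fuses the two feature sources by combining the selected radiomics features with the differentially expressed genes. A minimal sketch of that fusion step follows; the feature and gene names are hypothetical placeholders, not values from the study.

```python
def fuse_features(radiomics, genomics, selected_radiomic, selected_genes):
    """Build the radiogenomics feature vector for one patient by
    concatenating the selected radiomics features (e.g. chosen by
    recursive feature selection) with the differentially expressed
    genes (e.g. chosen by edgeR on the training set)."""
    vec = [radiomics[name] for name in selected_radiomic]
    vec += [genomics[gene] for gene in selected_genes]
    return vec

# Hypothetical patient record with two feature sources
patient_radiomics = {"age": 45, "t1_entropy": 3.1, "flair_volume": 12.4}
patient_genomics = {"GENE_A": 8.2, "GENE_B": 1.7}
vec = fuse_features(patient_radiomics, patient_genomics,
                    ["age", "t1_entropy"], ["GENE_A"])
```

The fused vector then feeds the same Random Forest training procedure as the single-source models, which is why the three models can be compared head to head.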
A glioma grading method using conventional structural magnetic resonance imaging (MRI) and molecular data from patients is proposed. Noninvasive grading of glioma tumors is obtained using multiple radiomic texture features, including dynamic texture analysis, multifractal detrended fluctuation analysis, and multiresolution fractional Brownian motion, in structural MRI. The proposed method is evaluated using two multicenter MRI datasets: (1) the brain tumor segmentation (BRATS-2017) challenge dataset for high-grade versus low-grade (LG) glioma grading and (2) the cancer imaging archive (TCIA) repository for glioblastoma (GBM) versus LG glioma grading. The grading performance using MRI is compared with that of digital pathology (DP) images from the cancer genome atlas (TCGA) data repository. The results show a mean area under the receiver operating characteristic curve (AUC) of 0.88 for the BRATS dataset. The classification of tumor grades using MRI and DP images in TCIA/TCGA yields mean AUCs of 0.90 and 0.93, respectively. This work further proposes and compares tumor grading using molecular alterations (IDH1/2 mutations) along with MRI and DP data, following the most recent World Health Organization grading criteria. The overall grading performance demonstrates the efficacy of the proposed noninvasive glioma grading approach using structural MRI.
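The AUC values reported above can be computed without explicit ROC-curve integration via the rank-sum (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A small stdlib-only sketch:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve as the probability that a randomly
    chosen positive (e.g. high-grade) case scores above a randomly
    chosen negative (low-grade) case; ties count as half a win."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: a grader that mostly ranks high-grade above low-grade
a = auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```

This pairwise form is O(n*m) but exactly matches trapezoidal ROC integration for any score set, which makes it a convenient check on library-reported AUCs.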
Diffuse or infiltrative gliomas are a type of Central Nervous System (CNS) brain tumor. Among the different types of primary CNS tumors, diffuse low-grade gliomas (LGG) are World Health Organization (WHO) Grade II and III gliomas. This study investigates the prediction of LGG progression using imaging features extracted from conventional MRI. First, we extract imaging features from raw MRI, including intensity as well as fractal and multiresolution fractal representations of the MRI tumor volume. This study uses a total of 108 LGG patients from the pre-operative TCGA-LGG dataset, divided into 75% of the patients for training and the remaining 25% for testing. The LGG progression prediction model is trained using nested leave-one-out cross-validation (LOOCV) on the training set. The recursive feature selection (RFS) method and progression model training are performed in the inner cross-validation loop. The progression prediction model is trained using the Extreme Gradient Boosting technique, and its performance is estimated using the outer cross-validation loop. Finally, we assess the predictive performance of the LGG progression model on the testing set. The training and testing procedures are repeated 10 times using 10 different training and testing splits. Our LGG progression prediction model achieves an AUC of 0.81±0.03, a sensitivity of 0.81±0.09, and a specificity of 0.81±0.10. Our results show the promise of using non-invasive MRI to predict LGG progression.
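The outer loop of the nested LOOCV scheme described above can be sketched as follows. This is a toy illustration only: a nearest-centroid classifier stands in for the paper's RFS-plus-XGBoost pipeline, the inner feature-selection loop is omitted, and the data are synthetic.

```python
from statistics import mean

def nearest_centroid_predict(train, test_x):
    """Predict the label of test_x by the nearest class centroid.
    train is a list of (feature_vector, label) pairs."""
    centroids = {}
    for label in {y for _, y in train}:
        cols = list(zip(*[x for x, y in train if y == label]))
        centroids[label] = [mean(c) for c in cols]
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab], test_x))

def loocv_accuracy(data):
    """Outer leave-one-out loop: each patient is held out once and
    predicted from a model fit on all remaining patients.  In the
    paper, an inner loop would additionally select features here."""
    correct = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        correct += nearest_centroid_predict(train, x) == y
    return correct / len(data)

# Synthetic two-class data: progression (1) vs. no progression (0)
data = [([0.0, 0.1], 0), ([0.2, 0.0], 0), ([1.0, 0.9], 1), ([0.9, 1.1], 1)]
acc = loocv_accuracy(data)
```

Keeping feature selection inside the inner loop, as the study does, matters: selecting features on the full training set before the outer loop would leak information and inflate the estimated performance.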
Computational modeling in medical image analysis may play a critical role in surgical treatment and therapy. Many hand-crafted feature-extraction and learning-based methods have been proposed for automatic brain tumor segmentation and patient survival prediction from MRI. This work first reviews a few of these methods. We then discuss our recent experience in a global challenge on developing state-of-the-art computational methods for brain tumor segmentation and patient survival prediction.
Brain tumor segmentation is a fundamental step in surgical treatment and therapy. Many hand-crafted and learning-based methods have been proposed for automatic brain tumor segmentation from MRI. Studies have shown that these approaches have their inherent advantages and limitations. This work proposes a semantic label fusion algorithm that combines two representative state-of-the-art segmentation algorithms, a texture-based hand-crafted method and a deep learning-based method, to obtain robust tumor segmentation. We evaluate the proposed method using the publicly available BRATS 2017 brain tumor segmentation challenge dataset. The results show that the proposed method offers improved segmentation by alleviating the inherent weaknesses of each approach: extensive false positives in the texture-based method and false tumor tissue classification in the deep learning method. Furthermore, we investigate the effect of patient gender on segmentation performance using a subset of the validation dataset. Note that the substantial improvement in brain tumor segmentation performance proposed in this work recently enabled our group to secure first place in the overall patient survival prediction task at the BRATS 2017 challenge.
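A drastically simplified voxel-wise sketch of such a fusion rule follows. The paper's semantic label fusion is more elaborate than this; the consensus rule below is only an assumed illustration of how agreement between two masks can suppress each method's isolated false positives (1 = tumor, 0 = background).

```python
def fuse_masks(texture_mask, deep_mask):
    """Voxel-wise consensus fusion of two binary tumor masks: a voxel
    is kept as tumor only where the texture-based and deep learning
    segmentations agree, discarding voxels labeled tumor by only one
    method (a stand-in for the paper's semantic fusion rules)."""
    return [[t & d for t, d in zip(row_t, row_d)]
            for row_t, row_d in zip(texture_mask, deep_mask)]

# Tiny 2x3 slice: the texture method has one extra (false-positive) voxel
texture = [[1, 1, 0],
           [0, 1, 1]]
deep    = [[1, 0, 0],
           [0, 1, 1]]
fused = fuse_masks(texture, deep)
```

In practice a pure intersection shrinks true tumor regions too, which is why the actual algorithm applies semantic, class-aware rules rather than plain voxel agreement.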