In response to the critical need for timely and precise detection of lung lesions, we explored an innovative active learning approach for optimally selecting training data for deep-learning segmentation of computed tomography scans from nonhuman primates. Our guiding hypothesis was that by maximizing the information within a training set—accomplished by choosing images uniformly distributed in n-dimensional radiomic feature space—we could attain segmentation performance similar or superior to that of random dataset selection while using fewer labeled images. To test this hypothesis, we compared segmentation models trained on different subsets of the available training data. Subsets that maximized the diversity among datasets (i.e., diverse data) were compared with subsets that minimized diversity among datasets (i.e., concentrated data) and randomly chosen subsets (i.e., random data). A two-tiered feature-selection technique was used to reduce the radiomic feature space to reliable, relevant, and non-redundant features. We generated learning curves to assess model performance as a function of the number of training dataset samples. We found that models trained on uniformly distributed data consistently outperformed those trained on concentrated data, achieving higher median test Dice scores with less variance. These results suggest that active learning and intelligent selection of data that are diverse and uniformly distributed within a radiomic feature space can significantly enhance segmentation model performance. This improvement has substantial implications for optimizing lung lesion characterization, disease management, and evaluation of treatments and underscores the potential benefit of active learning and intelligent data selection in medical imaging segmentation tasks.
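The diversity-maximizing selection could be sketched as greedy farthest-point sampling in the reduced radiomic feature space. The abstract does not specify the exact selection algorithm, so `diverse_subset` below is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def diverse_subset(features, k, seed=0):
    """Greedy farthest-point sampling: pick k rows of `features`
    (n_samples x n_features) that are spread out in feature space.
    A sketch of one way to select 'diverse' training data; the
    paper's actual selection procedure may differ."""
    rng = np.random.default_rng(seed)
    n = features.shape[0]
    chosen = [int(rng.integers(n))]  # arbitrary first point
    dist = np.linalg.norm(features - features[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dist))   # point farthest from the chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(features - features[nxt], axis=1))
    return chosen
```

Selecting "concentrated" data for comparison would amount to the opposite rule (repeatedly taking the nearest unchosen point).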
Purpose: We describe a method to identify repeatable liver computed tomography (CT) radiomic features, suitable for detection of steatosis, in nonhuman primates. Criteria used for feature selection exclude nonrepeatable features and may be useful to improve the performance and robustness of radiomics-based predictive models.
Approach: Six crab-eating macaques were equally assigned to two experimental groups, fed regular chow or an atherogenic diet. High-resolution CT images were acquired over several days for each macaque. First-order and second-order radiomic features were extracted from six regions in the liver parenchyma, either with or without liver-to-spleen intensity normalization, from images reconstructed using either a standard (B-filter) or a bone-enhanced (D-filter) kernel. Intrasubject repeatability of each feature was assessed using a paired t-test for all scans, and the minimum p-value was identified for each macaque. Repeatable features were defined as having a minimum p-value among all macaques above the significance level after Bonferroni’s correction. Features showing a significant difference with respect to diet group were identified using a two-sample t-test.
Results: A list of repeatable features was generated for each type of image. The largest number of repeatable features was achieved from spleen-normalized D-filtered images, which also produced the largest number of second-order radiomic features that were repeatable and different between diet groups.
Conclusions: Repeatability depends on reconstruction kernel and normalization. Features were quantified and ranked based on their repeatability. Features to be excluded for more robust models were identified. Features that were repeatable but different between diet groups were also identified.
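The repeatability criterion described above can be sketched as follows. Here each macaque contributes one array of per-ROI feature values per scan; the pairing of scans and the exact Bonferroni correction count are assumptions made for illustration:

```python
from itertools import combinations
import numpy as np
from scipy.stats import ttest_rel

def repeatable(roi_values_by_subject, alpha=0.05):
    """roi_values_by_subject: list of 2D arrays, one per macaque,
    shaped (n_scans, n_rois), holding one radiomic feature.
    For each macaque, run a paired t-test over ROI values for every
    scan pair and keep the minimum p-value; the feature counts as
    repeatable if the minimum p-value across all macaques exceeds the
    Bonferroni-corrected significance level. Sketch only; the paper's
    exact pairing and correction may differ."""
    min_ps = []
    n_tests = 0
    for vals in roi_values_by_subject:
        ps = [ttest_rel(vals[i], vals[j]).pvalue
              for i, j in combinations(range(vals.shape[0]), 2)]
        n_tests += len(ps)
        min_ps.append(min(ps))
    return min(min_ps) > alpha / n_tests
```

A feature with a systematic shift between any two scans of the same animal fails this test, which is exactly the behavior the exclusion criterion targets.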
Evaluation of the intra-subject reproducibility of radiomic features is pivotal but challenging because it requires multiple replicate measurements, typically lacking in the clinical setting. Radiomics analysis based on computed tomography (CT) has been increasingly used to characterize liver malignancies and liver diffusive diseases. However, radiomic features are greatly affected by scanning parameters and reconstruction kernels, among other factors. In this study, we examined the effects of diet, reconstruction kernel, and liver-to-spleen normalization on the intra-subject reproducibility of radiomic features. The final goal of this work is to create a framework that may help identify reproducible radiomic features suitable for further diagnosis and grading of fatty liver disease in nonhuman primates using radiomics analysis. As a first step, the identification of reproducible features is essential. To accomplish this aim, we retrospectively analyzed serial CT images from two groups of crab-eating macaques, fed a normal or atherogenic diet. Serial CT examinations resulted in 45 high-resolution scans. From each scan, two CT images were reconstructed using a standard B kernel and a bone-enhanced D kernel, with and without normalization relative to the spleen. Radiomic features were extracted from six regions in the liver parenchyma. Analysis of intra-subject variability showed that many features are fully reproducible regardless of liver disease status, whereas others differ significantly in a limited number of tests. Features significantly different between the normal and atherogenic diet groups were also investigated. Reproducible features were listed; normalized images yielded more reproducible features.
Purpose: We propose a method to identify sensitive and reliable whole-lung radiomic features from computed tomography (CT) images in a nonhuman primate model of coronavirus disease 2019 (COVID-19). Criteria used for feature selection in this method may improve the performance and robustness of predictive models.
Approach: Fourteen crab-eating macaques were assigned to two experimental groups and exposed to either severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) or a mock inoculum. High-resolution CT scans were acquired before exposure and on several post-exposure days. Lung volumes were segmented using a deep-learning methodology, and radiomic features were extracted from the original image. The reliability of each feature was assessed by the intraclass correlation coefficient (ICC) using the mock-exposed group data. The sensitivity of each feature was assessed using the virus-exposed group data by defining a factor R that estimates the excess of variation above the maximum normal variation computed in the mock-exposed group. R and ICC were used to rank features and identify non-sensitive and unstable features.
Results: Out of 111 radiomic features, 43% had excellent reliability (ICC > 0.90), and 55% had either good (ICC > 0.75) or moderate (ICC > 0.50) reliability. Nineteen features were not sensitive to the radiological manifestations of SARS-CoV-2 exposure. The sensitivity of features showed patterns that suggested a correlation with the radiological manifestations.
Conclusions: Features were quantified and ranked based on their sensitivity and reliability. Features to be excluded to create more robust models were identified. Applicability to similar viral pneumonia studies is also possible.
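The reliability and sensitivity criteria could be sketched as below. The abstract does not state which ICC form was used, so the two-way random-effects, absolute-agreement, single-measure ICC(2,1) and the simple ratio-style R are assumptions:

```python
import numpy as np

def icc_2_1(y):
    """ICC(2,1) for an (n_subjects x n_sessions) matrix of one
    radiomic feature, computed from two-way ANOVA mean squares.
    Sketch; the paper may use a different ICC variant."""
    n, k = y.shape
    grand = y.mean()
    row_m = y.mean(axis=1)                                   # per subject
    col_m = y.mean(axis=0)                                   # per session
    msr = k * ((row_m - grand) ** 2).sum() / (n - 1)         # subjects
    msc = n * ((col_m - grand) ** 2).sum() / (k - 1)         # sessions
    sse = ((y - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def excess_R(exposed_change, mock_changes):
    """R > 1 flags variation beyond the maximum normal (mock-group)
    variation; the paper's exact definition of R may differ."""
    return exposed_change / max(mock_changes)
```

A feature with perfectly consistent repeat measurements gives ICC near 1, while a post-exposure change twice the largest mock-group change gives R = 2.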
As of 14 December 2021, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes coronavirus disease 2019 (COVID-19), had caused nearly 269 million confirmed cases and almost 5.3 million deaths worldwide. Chest computed tomography (CT) has high diagnostic sensitivity for the detection of pulmonary disease in COVID-19 patients. Toward timely and accurate clinical evaluation and prognostication, radiomic analyses of CT images have been explored to investigate the correlation of imaging and non-imaging clinical manifestations and outcomes. Delta (∆) radiomics, optimally performed from pre-infection to the post-critical phase, requires baseline data typically not obtained in clinical settings; additionally, its robustness is affected by differences in acquisition protocols. In this work, we investigated the reliability, sensitivity, and stability of whole-lung radiomic features of CT images of nonhuman primates either mock-exposed or exposed to SARS-CoV-2 to study imaging biomarkers of SARS-CoV-2 infection. Images were acquired at a pre-exposure baseline and on post-exposure days, and lung fields were segmented. The reliability of radiomic features was assessed, and the dynamic range of each feature was compared to the maximum normal intra-subject variation, with features ranked accordingly.
A glioma grading method using conventional structural magnetic resonance image (MRI) and molecular data from patients is proposed. The noninvasive grading of glioma tumors is obtained using multiple radiomic texture features including dynamic texture analysis, multifractal detrended fluctuation analysis, and multiresolution fractal Brownian motion in structural MRI. The proposed method is evaluated using two multicenter MRI datasets: (1) the brain tumor segmentation (BRATS-2017) challenge for high-grade versus low-grade (LG) and (2) the cancer imaging archive (TCIA) repository for glioblastoma (GBM) versus LG glioma grading. The grading performance using MRI is compared with that of digital pathology (DP) images in the cancer genome atlas (TCGA) data repository. The results show that the mean area under the receiver operating characteristic curve (AUC) is 0.88 for the BRATS dataset. The classification of tumor grades using MRI and DP images in TCIA/TCGA yields mean AUCs of 0.90 and 0.93, respectively. This work further proposes and compares tumor grading performance using molecular alterations (IDH1/2 mutations) along with MRI and DP data, following the most recent World Health Organization grading criteria. The overall grading performance demonstrates the efficacy of the proposed noninvasive glioma grading approach using structural MRI.
Chordoma is a rare type of tumor that usually appears in the bone near the spinal cord and skull base. Due to their location in the skull base and diverse appearance in size and shape, automatic segmentation of chordoma tumors from magnetic resonance images (MRI) is a challenging task. In addition, similar MR intensity distributions of different anatomical regions, specifically the sinuses, make segmentation from MRI more challenging. Moreover, most state-of-the-art lesion segmentation methods are designed to segment pathologies inside the brain. In this work, we propose an automatic chordoma segmentation framework using two cascaded 3D convolutional neural networks (CNN) via an auto-context model. While the first network learns to detect all potential tumor voxels, the second network fine-tunes the classifier to distinguish true tumor voxels from the false positives detected by the first network. The proposed method is evaluated using multi-contrast MR images of 22 longitudinal scans from 8 patients. Preliminary results showed a linear correlation of 0.71 between the detected and manually outlined tumor volumes, compared to 0.40 for a random forest (RF) based method. Furthermore, the response of tumor growth over time, i.e., increasing, decreasing, or stable, is evaluated according to the response evaluation criteria in solid tumors, yielding a kappa coefficient of 0.26, compared with 0.13 for the RF-based method.
In this work, we propose a novel method to improve texture-based tumor segmentation by fusing cell density patterns that are generated from tumor growth modeling. To model tumor growth, we solve the reaction-diffusion equation using the Lattice-Boltzmann method (LBM). Computational tumor growth modeling obtains the cell density distribution that potentially indicates the predicted tissue locations in the brain over time. The density patterns are then used as novel features, along with other texture features (such as fractal and multifractal Brownian motion (mBm)) and intensity features in MRI, for improved brain tumor segmentation. We evaluate the proposed method with about one hundred longitudinal MRI scans from five patients obtained from the public BRATS 2015 dataset, validated by the ground truth. ANOVA analysis shows a significant improvement in complete tumor segmentation for the five patients in longitudinal MR images.
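The growth model can be sketched with the Fisher-Kolmogorov form of the reaction-diffusion equation, du/dt = D∇²u + ρu(1 − u), where u is normalized cell density. The paper solves this with LBM; the explicit finite-difference update below (unit grid spacing, periodic boundaries, arbitrary D and ρ) is only an illustrative stand-in:

```python
import numpy as np

def grow_tumor(u, steps, D=0.1, rho=0.05, dt=0.1):
    """Evolve a 2D cell-density map u (values in [0, 1]) under
    du/dt = D * laplacian(u) + rho * u * (1 - u).
    Explicit finite-difference sketch (NOT the paper's LBM solver):
    5-point Laplacian via np.roll, so boundaries are periodic."""
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
        u = u + dt * (D * lap + rho * u * (1 - u))
    return np.clip(u, 0.0, 1.0)
```

The resulting density map, sampled per voxel, would then be appended to the texture and intensity feature vectors.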
This work proposes a computationally efficient cell nuclei morphologic feature analysis technique to characterize brain gliomas in tissue slide images. Our contributions are two-fold: 1) an optimized cell nuclei segmentation method based on the pros and cons of the existing techniques in the literature, and 2) extraction of representative features by k-means clustering of nuclei morphologic features, including area, perimeter, eccentricity, and major axis length. This clustering-based representative feature extraction avoids the shortcomings of extensive tile [1] [2] and nuclear score [3] based methods for brain glioma grading in pathology images. A multilayer perceptron (MLP) is used to classify the extracted features into two tumor types: glioblastoma multiforme (GBM) and low grade glioma (LGG). Quantitative scores such as precision, recall, and accuracy are obtained using 66 clinical patients’ images from The Cancer Genome Atlas (TCGA) [4] dataset. An average accuracy of ~94% from 10-fold cross-validation confirms the efficacy of the proposed method.
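The clustering-based representative-feature step could be sketched as below: per-slide nuclei morphologies are clustered and the sorted centroids become a fixed-length slide descriptor for the MLP. The cluster count and centroid-sorting convention are assumptions for this sketch:

```python
import numpy as np
from sklearn.cluster import KMeans

def representative_features(nuclei, n_clusters=4, seed=0):
    """nuclei: (n_nuclei x 4) array of per-nucleus morphology
    [area, perimeter, eccentricity, major_axis_length] for one slide.
    Cluster the nuclei and flatten the centroids (sorted by area for
    a stable ordering) into a fixed-length slide-level descriptor.
    Sketch of the clustering-based representative-feature idea; the
    cluster count is a hypothetical choice."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    km.fit(nuclei)
    centers = km.cluster_centers_
    order = np.argsort(centers[:, 0])   # sort clusters by mean area
    return centers[order].ravel()
```

Because the descriptor length is fixed (n_clusters × 4) regardless of how many nuclei a slide contains, it feeds directly into a standard classifier without tile-level aggregation.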
We propose a novel non-invasive brain tumor type classification using Multi-fractal Detrended Fluctuation Analysis (MFDFA) [1] in structural magnetic resonance (MR) images. This preliminary work investigates the efficacy of the MFDFA features, along with our novel texture feature known as multifractional Brownian motion (mBm) [2], in classifying (grading) brain tumors as High Grade (HG) and Low Grade (LG). Based on prior performance, Random Forest (RF) [3] is employed for tumor grading using two different datasets, BRATS-2013 [4] and BRATS-2014 [5]. Quantitative scores such as precision, recall, and accuracy are obtained using the confusion matrix. Average precision of 90% and recall of 85% from inter-dataset cross-validation confirm the efficacy of the proposed method.
In this work, we propose a fully automatic brain tumor and edema segmentation technique for brain magnetic resonance (MR) images. Different brain tissues are characterized using novel texture features such as piece-wise triangular prism surface area (PTPSA), multi-fractional Brownian motion (mBm), and Gabor-like textons, along with regular intensity and intensity difference features. A classical Random Forest (RF) classifier is used to formulate the segmentation task as classification of these features in multi-modal MRIs. The segmentation performance is compared with other state-of-the-art works using a publicly available dataset known as Brain Tumor Segmentation (BRATS) 2012 [1]. Quantitative evaluation is done using the online evaluation tool from the Kitware/MIDAS website [2]. The results show that our segmentation performance is more consistent and, on average, outperforms other state-of-the-art works in both training and challenge cases of the BRATS competition.
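The segmentation-as-classification formulation can be sketched as follows: each voxel contributes one row of features (intensity plus texture), and the forest predicts its tissue label. Feature extraction (PTPSA, mBm, textons) is not reproduced here, and the forest size is an arbitrary choice for this sketch:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voxel_classifier(features, labels, seed=0):
    """features: (n_voxels x n_features) matrix, one row per voxel
    (intensity, intensity-difference, and texture values); labels:
    per-voxel tissue class (e.g. background / tumor / edema).
    Fits an RF voxel classifier; applying `predict` to every voxel
    of a new scan and reshaping yields the segmentation map."""
    rf = RandomForestClassifier(n_estimators=50, random_state=seed)
    rf.fit(features, labels)
    return rf
```

At inference, the per-voxel predictions are reshaped back to the image grid to produce the label volume.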