Ovarian carcinoma is the most lethal of all gynecologic cancers, and radiomics-based image markers are effective tools for early-stage prediction of the response to chemotherapy in ovarian cancer patients. This investigation aims to compare and evaluate the predictive performance of 2D and 3D radiomics features. The tumors were first segmented from the CT slices, from which a total of 1032 2D radiomics features and 1595 3D radiomics features were extracted, describing tumor shape, density, and texture properties. Next, a least absolute shrinkage and selection operator (LASSO) feature selection method was adopted to determine an optimal feature cluster for the 2D and 3D feature pools respectively, which served as the input of support vector machine (SVM) based prediction models. A total of 99 cases were selected from a previously established dataset at our medical center. Model performance was assessed by the receiver operating characteristic (ROC) curve. The results indicated that the 2D and 3D feature based models achieved areas under the curve (AUC) of 0.85±0.03 and 0.89±0.02, with overall accuracies of 0.76 and 0.81, respectively. These results indicate that the overall performance of the 3D features is higher than that of the 2D features, although the sensitivity of the 2D model is higher within certain specificity ranges. This study provides an initial view of the difference between 2D and 3D features, which should be meaningful for optimizing radiomics-based clinical decision support tools.
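A minimal sketch of the LASSO-then-SVM pipeline described above, assuming the 2D or 3D radiomics features have already been extracted into a case-by-feature matrix. The placeholder data, hyperparameters, and fallback rule are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(99, 1032))                       # placeholder 2D feature pool
y = (X[:, 0] + 0.5 * rng.normal(size=99) > 0).astype(int)  # placeholder response labels

# Standardize, then let LASSO shrink uninformative features to zero.
X_std = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, max_iter=10000).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)                # optimal feature cluster
if selected.size == 0:                                # fallback for the sketch only
    selected = np.argsort(np.abs(lasso.coef_))[-20:]

# Train an SVM on the selected features and estimate AUC by cross-validation.
svm = SVC(kernel="rbf", probability=True)
auc = cross_val_score(svm, X_std[:, selected], y,
                      cv=StratifiedKFold(5), scoring="roc_auc")
print("mean AUC:", auc.mean())
```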
The study aims to develop a novel computer-aided diagnosis (CAD) scheme for mammographic breast mass classification using semi-supervised learning. Although supervised deep learning has achieved huge success across various medical image analysis tasks, its success relies on large amounts of high-quality annotations, which can be challenging to acquire in practice. To overcome this limitation, we propose employing a semi-supervised method, i.e., virtual adversarial training (VAT), to leverage and learn useful information underlying unlabeled data for better classification of breast masses. Accordingly, our VAT-based models have two types of losses, namely supervised and virtual adversarial losses. The former acts as in supervised classification, while the latter aims at enhancing the model's robustness against virtual adversarial perturbation, thus improving model generalizability. To evaluate the performance of our VAT-based CAD scheme, we retrospectively assembled a total of 1024 breast mass images, with an equal number of benign and malignant masses. A large CNN and a small CNN were used in this investigation, and each was trained with and without the virtual adversarial loss. When the labeled ratios were 40% and 80%, the VAT-based CNNs delivered the highest classification accuracies of 0.740±0.015 and 0.760±0.015, respectively. The experimental results suggest that the VAT-based CAD scheme can effectively utilize meaningful knowledge from unlabeled data to better classify mammographic breast mass images.
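An illustrative PyTorch sketch of the two-part objective described above: a standard supervised cross-entropy term plus a virtual adversarial (local distributional smoothness) term. The tiny CNN, perturbation size eps, and weighting alpha are assumptions made for the sketch, not the paper's architectures or settings.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    return d / (d.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)

def vat_loss(model, x, xi=1e-6, eps=2.0, n_power=1):
    """KL(p(y|x) || p(y|x + r_adv)), with r_adv found by power iteration."""
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        pred_hat = model(x + xi * d)
        adv_dist = F.kl_div(F.log_softmax(pred_hat, dim=1), pred,
                            reduction="batchmean")
        grad = torch.autograd.grad(adv_dist, d)[0]
        d = _l2_normalize(grad.detach())
    pred_hat = model(x + eps * d)                 # virtual adversarial direction
    return F.kl_div(F.log_softmax(pred_hat, dim=1), pred, reduction="batchmean")

def total_loss(model, x_labeled, y, x_unlabeled, alpha=1.0):
    sup = F.cross_entropy(model(x_labeled), y)                    # supervised loss
    lds = vat_loss(model, torch.cat([x_labeled, x_unlabeled]))    # VAT loss
    return sup + alpha * lds

# Tiny demonstration CNN and mini-batches (assumptions, for illustration only).
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 8, 3, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2))
x_l, y_l = torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4,))
x_u = torch.randn(8, 1, 64, 64)
total_loss(model, x_l, y_l, x_u).backward()
```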
The purpose of this study is to develop a novel computer-aided diagnosis (CAD) scheme to facilitate breast mass classification, based on a transferring generative adversarial network (GAN). Although the GAN is one of the most popular techniques for image augmentation, it requires a relatively large original dataset to achieve satisfactory results, which may not be available for most medical imaging tasks. To address this challenge, we developed a novel transferring GAN built on the deep convolutional generative adversarial network (DCGAN). This model was first pre-trained on a dataset of non-mass mammogram patches. Then the generator and the discriminator were fine-tuned on the mass dataset. A supervised loss was integrated with the discriminator, so that it can directly classify benign and malignant masses. We retrospectively assembled a total of 25,000 non-mass patches and 1024 mass images to assess this model, using classification accuracy and the receiver operating characteristic (ROC) curve. The results demonstrated that our proposed approach improved the accuracy and area under the ROC curve (AUC) by 6.0% and 3.5% respectively, compared with classifiers trained without conventional data augmentation. This investigation may provide a new perspective for researchers to effectively train GAN models on medical imaging tasks with limited datasets.
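A minimal PyTorch sketch of the fine-tuning stage described above: a DCGAN-style discriminator, pre-trained on non-mass patches, is given an extra supervised head and fine-tuned on the mass dataset with combined adversarial and classification losses. The layer sizes, checkpoint filename, and loss weighting are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(           # DCGAN-style conv trunk (sketch)
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, feat_dim, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.adv_head = nn.Linear(feat_dim, 1)   # real/fake (adversarial) output
        self.cls_head = nn.Linear(feat_dim, 2)   # benign/malignant (supervised) output

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.cls_head(h)

netD = Discriminator()
# Hypothetical checkpoint name; weights would come from DCGAN pre-training on non-mass patches.
# netD.load_state_dict(torch.load("dcgan_pretrained_on_nonmass.pt"), strict=False)

def discriminator_loss(real_imgs, labels, fake_imgs):
    adv_real, cls_real = netD(real_imgs)
    adv_fake, _ = netD(fake_imgs.detach())
    adv = F.binary_cross_entropy_with_logits(adv_real, torch.ones_like(adv_real)) \
        + F.binary_cross_entropy_with_logits(adv_fake, torch.zeros_like(adv_fake))
    sup = F.cross_entropy(cls_real, labels)      # supervised benign/malignant loss
    return adv + sup

real, labels = torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4,))
fake = torch.randn(4, 1, 64, 64)                 # would come from the fine-tuned generator
loss = discriminator_loss(real, labels, fake)
```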
This study aims to utilize primary tumor characteristics from CT images to detect lymph node (LN) metastasis for accurately categorizing locally advanced cervical cancer (LACC) patients. In clinical practice, LN metastasis is a critical indicator for patients' prognostic assessment, and it is usually investigated by PET/CT (positron emission tomography/computed tomography) examination. However, the high cost of PET/CT imaging limits its application and places a heavy financial burden on patients. Thus, it is clinically imperative to develop an economical solution for identifying LN metastasis. For this purpose, a novel image marker was developed based on the primary cervical tumors segmented from CT images. A total of 99 handcrafted features were computed, and an optimal feature set was determined by the Laplacian Score (LS) method. Next, a logistic regression model was applied to the optimal feature set to generate a likelihood score for identifying LN metastasis. Using a retrospective dataset containing 82 LACC patients, this new model was trained and optimized with a leave-one-out cross-validation (LOOCV) strategy. Marker performance was assessed by the receiver operating characteristic (ROC) curve. The results indicate that the area under the ROC curve (AUC) of this identification model was 0.774±0.050, demonstrating its strong discriminative power. This study may provide gynecologic oncologists with a low-cost, CT image based clinical marker to identify LN metastasis in LACC patients.
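A sketch of Laplacian Score (LS) feature ranking followed by a LOOCV-evaluated logistic regression, roughly mirroring the pipeline above. The neighborhood size, heat-kernel width, number of retained features, and placeholder data are assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import roc_auc_score

def laplacian_scores(X, n_neighbors=5):
    # Heat-kernel affinity on a k-NN graph; lower score = more locality-preserving feature.
    W = kneighbors_graph(X, n_neighbors, mode="distance").toarray()
    t = np.mean(W[W > 0]) ** 2                   # kernel width estimated from the data
    S = np.where(W > 0, np.exp(-W**2 / t), 0.0)
    S = np.maximum(S, S.T)                       # symmetrize
    D = np.diag(S.sum(axis=1))
    L = D - S
    ones = np.ones(X.shape[0])
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_tilde = f - (f @ D @ ones) / (ones @ D @ ones) * ones
        scores.append((f_tilde @ L @ f_tilde) / (f_tilde @ D @ f_tilde + 1e-12))
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.normal(size=(82, 99))                    # placeholder handcrafted features
y = rng.integers(0, 2, size=82)                  # placeholder LN-metastasis labels

keep = np.argsort(laplacian_scores(X))[:10]      # retain 10 top-ranked features
probs = cross_val_predict(LogisticRegression(max_iter=1000), X[:, keep], y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOOCV AUC:", roc_auc_score(y, probs))
```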
The purpose of this investigation is to verify the feasibility of using deep learning to generate an image marker for accurate stratification of cervical cancer patients. For this purpose, a pre-trained deep residual neural network (ResNet-50) is used as a fixed feature extractor, applied to previously identified cervical tumors depicted on CT images. The features at the average pooling layer of ResNet-50 are collected as the initial feature pool. Then the discriminant neighborhood embedding (DNE) algorithm is employed to reduce the feature dimension and create an optimal feature cluster. Next, a k-nearest neighbors (k-NN) regression model uses this cluster as input to generate an evaluation score predicting the patient's response to the planned treatment. To assess this new model, we retrospectively assembled pre-treatment CT images from 97 locally advanced cervical cancer (LACC) patients. A leave-one-out cross-validation (LOOCV) strategy is adopted to train and optimize the scheme, and the receiver operating characteristic (ROC) curve is applied for performance evaluation. The result shows that this new model achieves an area under the ROC curve (AUC) of 0.749 ± 0.064, indicating that deep neural networks are able to identify the tumor characteristics most effective for therapy response prediction. This investigation initially demonstrates the potential of developing a deep learning based image marker to assist oncologists in categorizing cervical cancer patients for precision treatment.
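A brief PyTorch/torchvision sketch of the fixed feature extractor step described above: an ImageNet pre-trained ResNet-50 with its final fully connected layer removed, so that each tumor patch yields the 2048-D average-pooling feature vector. The preprocessing values follow the usual ImageNet convention and are assumptions here; the subsequent DNE and k-NN steps are not shown.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = nn.Identity()                 # keep everything up to the average pooling layer
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Return an (N, 2048) array of deep features; grayscale CT patches
    should be converted to 3-channel RGB beforehand (e.g. img.convert('RGB'))."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return resnet(batch).numpy()

demo = Image.fromarray((np.random.rand(96, 96) * 255).astype("uint8")).convert("RGB")
print(extract_features([demo]).shape)     # (1, 2048)
```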
The objective of this study is to investigate the performance of global and local features in estimating the characteristics of highly heterogeneous metastatic tumours, for accurately predicting treatment effectiveness in advanced stage ovarian cancer patients. To achieve this, a quantitative image analysis scheme was developed to compute a total of 103 features from three groups: shape and density, wavelet, and Gray Level Difference Method (GLDM) features. Shape and density features are global features, applied directly to the entire target image; wavelet and GLDM features are local features, applied to the divided blocks of the target image. To assess performance, the new scheme was applied to a retrospective dataset containing 120 recurrent, high grade ovarian cancer patients. The results indicate that the three best-performing features are skewness, root-mean-square (rms), and the mean of local GLDM texture, underscoring the importance of integrating local features. In addition, the average prediction performance is comparable among the three categories. This investigation concludes that local features contain at least as much tumour heterogeneity information as global features, which may be meaningful for improving the predictive performance of quantitative image markers for the diagnosis and prognosis of ovarian cancer patients.
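A small sketch of the global-versus-local distinction above: global density statistics computed on the whole tumor image versus wavelet-energy features computed on non-overlapping blocks. The block size, wavelet choice, and placeholder ROI are assumptions; the GLDM texture computation is omitted for brevity.

```python
import numpy as np
import pywt

def global_density_features(img):
    # Global statistics over the entire target image.
    return {"mean": img.mean(),
            "rms": np.sqrt((img**2).mean()),
            "skewness": ((img - img.mean())**3).mean() / (img.std()**3 + 1e-12)}

def local_wavelet_energy(img, block=32, wavelet="db1"):
    """Mean detail-band energy over non-overlapping blocks of the tumor image."""
    energies = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            _, (cH, cV, cD) = pywt.dwt2(img[i:i + block, j:j + block], wavelet)
            energies.append(np.mean(cH**2 + cV**2 + cD**2))
    return float(np.mean(energies))

tumor = np.random.default_rng(0).normal(size=(128, 128))   # placeholder tumor ROI
print(global_density_features(tumor), local_wavelet_energy(tumor))
```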
Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, the current Response Evaluation Criteria in Solid Tumors (RECIST) guidelines yield only limited accuracy in predicting tumor response. To address this clinical challenge, we applied a radiomics approach to develop a new quantitative image analysis scheme aiming to accurately assess tumor response to new chemotherapy treatments in advanced ovarian cancer patients. A retrospective dataset containing 57 patients was assembled, each with two sets of CT images: pre-therapy and 4-6 week follow-up images. A radiomics based image analysis scheme was then applied to these images, comprising three steps. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, grouped as 1) volume based features, 2) density based features, and 3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final predicting score. The results demonstrated that the best-performing single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838±0.053. This investigation demonstrates that the radiomics approach may have the potential to yield high accuracy prediction models for early-stage prognostic assessment of ovarian cancer patients.
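A sketch of the final step described above: rank features by single-feature AUC, then fuse the top-ranked, normalized features into one predicting score with equal weights. The number of fused features, the direction-flipping rule, and the placeholder data are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def equal_weight_fusion(X, y, n_top=5):
    # Single-feature AUC; features inversely related to the outcome are flipped.
    aucs_pos = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    sign = np.where(aucs_pos >= 0.5, 1.0, -1.0)
    aucs = np.maximum(aucs_pos, 1.0 - aucs_pos)
    top = np.argsort(aucs)[::-1][:n_top]
    # Min-max normalize the selected (possibly flipped) features, then average equally.
    Xs = X[:, top] * sign[top]
    Xn = (Xs - Xs.min(0)) / (np.ptp(Xs, axis=0) + 1e-12)
    return Xn.mean(axis=1), top

rng = np.random.default_rng(0)
X = rng.normal(size=(57, 115))               # placeholder volume/density/wavelet features
y = rng.integers(0, 2, size=57)              # placeholder responder labels
score, selected = equal_weight_fusion(X, y)
print("fused-score AUC:", roc_auc_score(y, score))
```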
Accurate tumor segmentation is a critical step in developing computer-aided detection (CAD) based quantitative image analysis schemes for early-stage prognostic evaluation of ovarian cancer patients. The purpose of this investigation is to assess the efficacy of several methods for segmenting metastatic tumors occurring in different organs of ovarian cancer patients. We developed a segmentation scheme consisting of eight different algorithms, which can be divided into three groups: 1) region growing based methods, 2) Canny operator based methods, and 3) partial differential equation (PDE) based methods. A total of 138 tumors acquired from 30 ovarian cancer patients were used to test the performance of these eight segmentation algorithms. The results demonstrate that each of the tested tumors can be successfully segmented by at least one of the eight algorithms without manual boundary correction. Furthermore, the modified region growing, classical Canny detector, fast marching, and threshold level set algorithms are recommended for future development of ovarian cancer related CAD schemes. This study may provide a meaningful reference for developing novel quantitative image feature analysis schemes to more accurately predict the response of ovarian cancer patients to chemotherapy at an early stage.
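An illustrative scikit-image sketch of one representative algorithm from each of the three groups above: region growing (flood fill from a seed), a Canny-based method with hole filling, and a PDE-style morphological level-set method. Seed points, tolerances, and iteration counts are assumptions and would need tuning per tumor; these are stand-ins for, not reproductions of, the eight algorithms evaluated in the study.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, segmentation

def segment_region_growing(ct_slice, seed, tolerance=100):
    # Tolerance would be set roughly in Hounsfield units for real CT data.
    return segmentation.flood(ct_slice, seed, tolerance=tolerance)

def segment_canny(ct_slice, sigma=2.0):
    edges = feature.canny(ct_slice, sigma=sigma)
    return ndi.binary_fill_holes(edges)          # close the edge map into a mask

def segment_level_set(ct_slice, iterations=100):
    # Morphological Chan-Vese as a PDE-style active-contour stand-in.
    return segmentation.morphological_chan_vese(ct_slice, iterations)

ct_slice = np.random.default_rng(0).normal(size=(256, 256))  # placeholder CT ROI
masks = {
    "region growing": segment_region_growing(ct_slice, seed=(128, 128)),
    "canny": segment_canny(ct_slice),
    "level set": segment_level_set(ct_slice),
}
```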