KEYWORDS: Image segmentation, Computed tomography, Angiography, 3D acquisition, 3D image processing, Medical imaging, Magnetic resonance imaging, X-ray computed tomography, Binary data, Medical physics
We propose a learning-based method to automatically segment the arteriovenous malformation (AVM) target volume from computed tomography (CT) in stereotactic radiosurgery (SRS). A deeply supervised 3D V-Net is introduced to enable end-to-end segmentation. A deep supervision mechanism is integrated into the hidden layers to overcome the optimization difficulties of training such a network with limited training data. The probability map of a new AVM contour is generated by the well-trained network. To evaluate the proposed method, we retrospectively investigated 30 AVM patients treated with SRS. For each patient, both digital subtraction angiography (DSA) and contrast-enhanced CT had been acquired. Using our proposed method, the AVM contours are generated solely from the contrast CT images and are compared with the AVM contours delineated from DSA by physicians as the ground truth. The average centroid distance, volume difference and DSC value over all 30 patients are 0.83±0.91 mm, -0.01±0.79 cc and 0.84±0.09, indicating that the proposed method generates the AVM target contour with around 1 mm error in displacement, 1 cc error in volume and 84% overlap with the ground truth. The proposed method has great potential to eliminate DSA acquisition and enable a solely CT-based treatment planning workflow for AVM SRS treatment.
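The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of deep supervision in a 3D encoder-decoder (V-Net-style) network; the layer widths, the single auxiliary head, the Dice loss and the auxiliary loss weight are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of deep supervision on a 3D encoder-decoder (V-Net-style) network.
# Channel counts, the auxiliary head and the loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.PReLU(),
    )

class DeeplySupervisedVNet(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.enc3 = conv_block(base * 2, base * 4)
        self.dec2 = conv_block(base * 4 + base * 2, base * 2)
        self.dec1 = conv_block(base * 2 + base, base)
        # Main output head plus an auxiliary head attached to a hidden decoder layer.
        self.head = nn.Conv3d(base, 1, 1)
        self.aux_head = nn.Conv3d(base * 2, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool3d(e1, 2))
        e3 = self.enc3(F.max_pool3d(e2, 2))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        main = torch.sigmoid(self.head(d1))
        # The auxiliary prediction is upsampled to full resolution so it can be supervised.
        aux = torch.sigmoid(F.interpolate(self.aux_head(d2), scale_factor=2))
        return main, aux

def dice_loss(pred, target, eps=1e-5):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Deep supervision: the total loss combines the main output with the auxiliary
# output from the hidden layer, which eases optimization with limited training data.
def total_loss(main, aux, target, aux_weight=0.3):
    return dice_loss(main, target) + aux_weight * dice_loss(aux, target)
```

The auxiliary head injects the segmentation loss directly into a hidden decoder layer, which is the mechanism the abstract credits for stable training with only 30 cases.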
We propose a learning-based method to generate corrected CBCT (CCBCT) images with the goal of improving the image quality and clinical utility of on-board CBCT. The proposed method integrates a residual block concept into a cycle-consistent generative adversarial network (cycle-GAN) framework, named Res-cycle GAN in this study. Compared with a GAN, a cycle-GAN includes an inverse transformation from CBCT to CT images, which further constrains the learning model. A fully convolutional network (FCN) with residual blocks is used in the generator to enable end-to-end transformation, and an FCN is used in the discriminator to distinguish the planning CT (ground truth) from the corrected CBCT (CCBCT) produced by the generator. The proposed algorithm was evaluated using 12 sets of patient data with CBCT and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) and spatial non-uniformity (SNU) in selected regions of interest (ROIs) were used to quantify the correction accuracy of the proposed algorithm. Overall, the MAE, PSNR, NCC and SNU were 20.8±3.4 HU, 32.8±1.5 dB, 0.986±0.004 and 1.7±3.6%. We have developed a novel deep-learning-based method to generate CCBCT with high image quality. The proposed method increases on-board CBCT image quality, making it comparable to that of the planning CT. With further evaluation and clinical implementation, this method could enable quantitative adaptive radiotherapy.
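As a rough illustration of the two ideas named above, a residual-block generator and cycle consistency, here is a minimal PyTorch sketch; the channel counts, block depth, instance normalization and loss weight are assumptions rather than the authors' Res-cycle GAN configuration, and the adversarial terms from the FCN discriminators are omitted for brevity.

```python
# Minimal sketch of a residual-block generator and the cycle-consistency loss
# for CBCT-to-CT correction. Depths, widths and the weight lam are assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1),
            nn.InstanceNorm3d(ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1),
            nn.InstanceNorm3d(ch),
        )

    def forward(self, x):
        return x + self.body(x)  # identity shortcut preserves the underlying anatomy

class Generator(nn.Module):
    """Fully convolutional generator with residual blocks (end-to-end mapping)."""
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1),
            *[ResidualBlock(ch) for _ in range(n_blocks)],
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Two generators: forward (CBCT -> CT) and inverse (CT -> CBCT).
G_cbct2ct, G_ct2cbct = Generator(), Generator()
l1 = nn.L1Loss()

def cycle_loss(cbct, ct, lam=10.0):
    # Cycle consistency constrains the model: CBCT -> CT -> CBCT and
    # CT -> CBCT -> CT should each return the original input.
    ct_fake = G_cbct2ct(cbct)
    cbct_fake = G_ct2cbct(ct)
    return lam * (l1(G_ct2cbct(ct_fake), cbct) + l1(G_cbct2ct(cbct_fake), ct))
```

In full training this cycle term is added to the adversarial losses from the two FCN discriminators, so the inverse CBCT mapping acts as an extra constraint on the CBCT-to-CT generator.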
We propose a method to generate patient-specific pseudo CT (pCT) from routinely acquired MRI based on a semantic-information-based random forest with auto-context refinement. An auto-context model with patch-based anatomical features is integrated into a classification forest to generate and iteratively improve the semantic information. The semantic information concatenated with the anatomical features is then used to train a series of regression forests based on the auto-context model. The pCT of a newly arrived MRI is generated by extracting anatomical features and feeding them into the well-trained classification and regression forests for pCT prediction. The proposed algorithm was evaluated using 11 patients' data with brain MR and CT images. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC) are 57.45±8.45 HU, 28.33±1.68 dB, and 0.97±0.01, respectively. The Dice similarity coefficients (DSC) for air, soft tissue and bone are 97.79±0.76%, 93.32±2.35% and 84.49±5.50%, respectively. We have developed a novel machine-learning-based method to generate patient-specific pCT from routine anatomical MRI for MRI-only radiotherapy treatment planning. This pCT generation technique could be a useful tool for MRI-based radiation treatment planning and MRI-based PET attenuation correction on PET/MRI scanners.
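The following is a minimal scikit-learn sketch of the classification-then-regression pipeline with auto-context refinement described above; the patch-feature extraction step, forest sizes and number of auto-context iterations are hypothetical placeholders, not the authors' settings.

```python
# Minimal sketch: a classification forest supplies semantic (tissue-class)
# probabilities, which are concatenated with anatomical patch features and the
# previous prediction to train a series of regression forests (auto-context).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

def train_pct_forests(mri_patches, tissue_labels, ct_values, n_iters=2):
    """mri_patches: (n_voxels, n_features) patch-based anatomical features from MRI.
    tissue_labels: (n_voxels,) air / soft-tissue / bone labels.
    ct_values: (n_voxels,) target CT numbers in HU."""
    clf = RandomForestClassifier(n_estimators=50).fit(mri_patches, tissue_labels)
    semantic = clf.predict_proba(mri_patches)  # semantic information

    regressors, context = [], np.zeros((len(ct_values), 1))
    for _ in range(n_iters):
        # Auto-context: anatomical features + semantic probabilities + the
        # previous iteration's CT prediction as a context feature.
        feats = np.hstack([mri_patches, semantic, context])
        reg = RandomForestRegressor(n_estimators=50).fit(feats, ct_values)
        regressors.append(reg)
        context = reg.predict(feats).reshape(-1, 1)
    return clf, regressors

def predict_pct(clf, regressors, mri_patches):
    # For a newly arrived MRI, extract the same features and run them through
    # the trained classification and regression forests in sequence.
    semantic = clf.predict_proba(mri_patches)
    context = np.zeros((mri_patches.shape[0], 1))
    for reg in regressors:
        feats = np.hstack([mri_patches, semantic, context])
        context = reg.predict(feats).reshape(-1, 1)
    return context.ravel()  # predicted pseudo-CT values in HU
```

Each auto-context pass feeds the previous CT estimate back in as a feature, which is how the regression forests progressively refine the pCT prediction.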