Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning. However, low soft tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT.
Approach: The first step generates a coarse segmentation from which organ-specific regions of interest (ROIs) are produced; the second step produces a detailed segmentation of each organ. The ROIs are generated using a UNet, which automatically localizes each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combine the UNet with a generative adversarial network: the generator is a UNet trained to segment organ structures, and the discriminator is a fully convolutional network that distinguishes whether a segmentation is real or generator-predicted, thereby improving segmentation accuracy. We validated the proposed method on male pelvic and head and neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For pelvic structure segmentation, the network was trained to segment the prostate, bladder, and rectum; for H&N, it was trained to segment the parotid glands (PG) and submandibular glands (SMG).
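The computational-efficiency gain of the coarse step comes from cropping each organ to a padded bounding box before fine segmentation. A minimal numpy sketch of that ROI-extraction idea; the function name, margin parameter, and toy volume are illustrative and not from the paper:

```python
import numpy as np

def organ_roi(coarse_mask: np.ndarray, margin: int = 8):
    """Derive a padded bounding-box ROI from a coarse binary organ mask,
    clipped to the volume bounds, for cropping before fine segmentation."""
    idx = np.argwhere(coarse_mask)
    lo = np.maximum(idx.min(axis=0) - margin, 0)
    hi = np.minimum(idx.max(axis=0) + 1 + margin, coarse_mask.shape)
    return tuple(slice(int(a), int(b)) for a, b in zip(lo, hi))

# toy volume: "organ" occupies [5:10, 6:9, 7:12]
mask = np.zeros((20, 20, 20), dtype=bool)
mask[5:10, 6:9, 7:12] = True
roi = organ_roi(mask, margin=2)
print(roi)  # (slice(3, 12), slice(4, 11), slice(5, 14))
```

The cropped sub-volume `image[roi]` is what the fine network would see, so background voxels far from the organ never enter the second stage.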
Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets. The H&N network was also tested on a public-domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of the pelvic structures are 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), and 0.90 ± 0.09 (rectum); those of the H&N structures are 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). Segmentation of each CT takes <10 s on average.
Conclusions: Experimental results demonstrate that the proposed method produces fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes, and suggest that it is applicable to different disease sites.
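The Dice similarity coefficient quoted throughout the results is DSC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal numpy sketch (the toy masks and the empty-mask convention are ours, not from the paper):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# toy 2D masks: 4-voxel prediction vs. 6-voxel ground truth, 4 voxels overlap
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice_coefficient(a, b))  # 2*4 / (4 + 6) = 0.8
```

A score of 1.0 means perfect voxel-wise agreement; the reported 0.86–0.95 values indicate high but imperfect overlap with the manual contours.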
Accurate segmentation of organs at risk is important in prostate cancer radiation therapy planning. However, poor soft tissue contrast in CT makes the segmentation task very challenging. We propose a deep convolutional neural network approach to automatically segment the prostate, bladder, and rectum from pelvic CT. A hierarchical coarse-to-fine segmentation strategy is used, where the first step generates a coarse segmentation from which an organ-specific region of interest (ROI) localization map is produced; the second step produces a detailed and accurate segmentation of the organs. The ROI localization map is generated using a 3D U-net. The localization map helps define the ROI of each organ to be segmented and hence improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we designed a fully convolutional network (FCN) by combining a generative adversarial network (GAN) with a U-net. Specifically, the generator is a 3D U-net trained to predict individual pelvic structures, and the discriminator is an FCN that fine-tunes the generator-predicted segmentation map by comparing it with the ground truth. The network was trained on 100 CT datasets and tested on 15 datasets to segment the prostate, bladder, and rectum. The average Dice similarity coefficients (mean ± SD) of the prostate, bladder, and rectum are 0.90 ± 0.05, 0.96 ± 0.06, and 0.91 ± 0.09, respectively, and the Hausdorff distances of the three structures are 5.21 ± 1.17, 4.37 ± 0.56, and 6.11 ± 1.47 mm, respectively. The proposed method produces accurate and reproducible segmentation of pelvic structures, which can be potentially valuable for prostate cancer radiotherapy treatment planning.
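This abstract pairs Dice scores with Hausdorff distances, which capture the worst-case boundary disagreement in millimeters. A minimal numpy sketch of the symmetric Hausdorff distance between two point sets; extracting surface points from the segmentation masks is assumed, and the toy points are illustrative:

```python
import numpy as np

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets A (n,3) and B (m,3):
    the largest distance from any point in one set to its nearest
    neighbor in the other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

A = np.array([[0.0, 0, 0], [1, 0, 0]])
B = np.array([[0.0, 0, 0], [1, 0, 0], [1, 3, 0]])
print(hausdorff(A, B))  # 3.0 — (1, 3, 0) lies 3 units from its nearest point in A
```

Unlike Dice, a single stray voxel far from the true boundary inflates this metric, which is why it complements the overlap score.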
We propose a deformable registration algorithm for prostate-specific membrane antigen (PSMA) PET/CT and transrectal ultrasound (TRUS) fusion. Accurate registration of PSMA PET to intraoperative TRUS will allow physicians to customize dose planning based on the regions involved. The inputs to the registration algorithm are the PET/CT and TRUS volumes as well as the prostate segmentations. PET/CT and TRUS volumes are first rigidly registered by maximizing the overlap between the segmented prostate binary masks. Three-dimensional anatomical landmarks are then automatically extracted from the boundary as well as within the prostate. A deformable registration is then performed using a regularized thin-plate spline, minimizing the localization error between corresponding landmark pairs. The proposed algorithm was evaluated on 25 prostate cancer patients treated with low-dose-rate brachytherapy. We registered the postimplant CT to TRUS using the proposed algorithm and computed target registration errors (TREs) by comparing implanted seed locations. Our approach outperforms state-of-the-art methods, with a significantly lower (mean ± standard deviation) TRE of 1.96 ± 1.29 mm, while being computationally efficient (mean computation time of 38 s). The proposed landmark-based PET/CT-TRUS deformable registration algorithm is simple, computationally efficient, and capable of producing quality registration of the prostate boundary as well as the internal gland.
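The TRE reported here is the Euclidean distance between corresponding implanted-seed positions after registration, summarized as mean ± standard deviation. A minimal numpy sketch of that evaluation step, with an illustrative two-seed toy example (not the study data):

```python
import numpy as np

def target_registration_error(moved_seeds: np.ndarray, fixed_seeds: np.ndarray):
    """Per-seed Euclidean distances between registered CT seed positions and
    their TRUS-identified correspondences; returns (mean, std) in input units."""
    err = np.linalg.norm(moved_seeds - fixed_seeds, axis=1)
    return float(err.mean()), float(err.std())

# toy example: two corresponding seed pairs with errors of 1 mm and 2 mm
moved = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
fixed = np.array([[1.0, 0.0, 0.0], [1.0, 1.0, 3.0]])
print(target_registration_error(moved, fixed))  # (1.5, 0.5)
```

Because the seeds are physical markers visible in both CT and TRUS, this gives a ground-truth-anchored measure of how well the deformation aligns the interior of the gland, not just its boundary.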
In this paper, a deformable registration method is proposed that enables automatic alignment of preoperative PET/CT to intraoperative ultrasound in order to achieve PET-determined focal prostate brachytherapy. Novel PET imaging agents such as prostate-specific membrane antigen (PSMA) enable highly accurate identification of intra- and extra-prostatic tumors. Incorporating PSMA PET into standard transrectal ultrasound (TRUS)-guided prostate brachytherapy will enable focal therapy, thus minimizing radiation toxicities. Our registration method requires the PET/CT and TRUS volumes as well as prostate segmentations. These input volumes are first rigidly registered by maximizing the spatial overlap between the segmented prostate volumes, followed by deformable registration. To achieve anatomically accurate deformable registration, we extract anatomical landmarks from both the prostate boundary and the interior of the gland. Landmarks are extracted along the base-apex axes using two approaches: equiangular and equidistance. Three-dimensional thin-plate spline (TPS)-based deformable registration is then performed using the extracted landmarks as control points. Finally, the PET/CT images are deformed to the TRUS space using the computed TPS transformation. The proposed method was validated on 10 prostate cancer patient datasets, in which we registered post-implant CT to end-of-implantation TRUS. We computed target registration errors (TREs) by comparing the implanted seed positions (transformed CT seeds vs. intraoperatively identified TRUS seeds). The average TREs of the proposed method are 1.98 ± 1.22 mm (mean ± standard deviation) and 1.97 ± 1.24 mm for the equiangular and equidistance landmark extraction methods, respectively, better than or comparable to existing state-of-the-art methods while being computationally more efficient, with an average computation time of less than 40 seconds.
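A landmark-driven TPS warp of the kind described above solves a small linear system: kernel weights plus an affine part, constrained so the spline interpolates the control points. A minimal numpy sketch using the common 3D kernel U(r) = r; this is our own illustration with toy landmarks, not the authors' implementation, and `reg` stands in for the regularization mentioned in the companion abstract:

```python
import numpy as np

def tps_fit(src: np.ndarray, dst: np.ndarray, reg: float = 0.0):
    """Fit a 3D thin-plate spline mapping src landmarks (n,3) onto dst (n,3).
    Kernel U(r) = r; `reg` adds optional smoothing on the diagonal."""
    n = src.shape[0]
    K = np.linalg.norm(src[:, None] - src[None, :], axis=-1) + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), src])            # affine terms [1, x, y, z]
    L = np.zeros((n + 4, n + 4))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((4, 3))])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]                          # kernel weights, affine part

def tps_apply(pts, src, weights, affine):
    """Warp query points (m,3) with a fitted TPS."""
    U = np.linalg.norm(pts[:, None] - src[None, :], axis=-1)
    return U @ weights + np.hstack([np.ones((len(pts), 1)), pts]) @ affine

# toy check: a pure translation of the landmarks is recovered exactly
src = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [10, 10, 10]])
dst = src + np.array([1.0, 2.0, 3.0])
w, a = tps_fit(src, dst)
```

With `reg = 0` the spline interpolates the landmarks exactly; a positive `reg` trades interpolation accuracy for a smoother deformation, which is the usual way to keep the warp anatomically plausible between sparse control points.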
This paper presents a methodology for the digital formatting of a printed atlas of the brainstem and the delineation of cranial nerves from this digital atlas. It also describes ongoing work on the 3D resampling and refinement of the 2D functional regions and nerve contours. In MRI-based anatomical modeling for neurosurgery planning and simulation, the complexity of the functional anatomy calls for a digital atlas approach rather than less descriptive voxel- or surface-based approaches. However, descriptive digital atlases are scarce, in particular for the brainstem. Our approach proceeds from a series of numbered, contour-based sketches coinciding with slices of the brainstem, featuring both closed and open contours. The closed contours coincide with functionally relevant regions, and our objective is to fill in each corresponding label, analogous to painting numbered regions in a paint-by-numbers kit. Open contours typically coincide with cranial nerves. This 2D phase is needed to produce densely labeled regions that can be stacked into 3D regions, as well as to identify the embedded paths and outer attachment points of cranial nerves. Cranial nerves are modeled using an explicit contour-based technique called 1-Simplex. The relevance of the cranial nerve modeling in this project is twofold: i) the atlas will fill a void left by the brain segmentation community, as no suitable digital atlas of the brainstem exists, and ii) the atlas is necessary to make explicit the attachment points of the major nerves having a cranial origin (except I and II). Keywords: digital atlas, contour models, surface models
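The paint-by-numbers step amounts to flood-filling each closed contour with its region label so the slices can be stacked into dense 3D regions. A minimal numpy sketch of that idea on a toy 2D slice; the function, seed convention, and example grid are ours, not the atlas pipeline itself:

```python
from collections import deque

import numpy as np

def fill_region(contour: np.ndarray, seed, label: int, labels: np.ndarray):
    """Flood-fill the closed region containing `seed`, bounded by contour
    pixels, writing `label` into `labels` (the paint-by-numbers step).
    Assumes `labels` starts at 0 and region labels are nonzero."""
    h, w = contour.shape
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if not (0 <= y < h and 0 <= x < w):
            continue
        if contour[y, x] or labels[y, x]:   # stop at contour or painted pixels
            continue
        labels[y, x] = label
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])

# toy slice: a square closed contour enclosing a 3x3 interior
contour = np.zeros((7, 7), dtype=bool)
contour[1, 1:6] = contour[5, 1:6] = True
contour[1:6, 1] = contour[1:6, 5] = True
labels = np.zeros((7, 7), dtype=int)
fill_region(contour, (3, 3), 2, labels)   # paints the 9 interior pixels with 2
```

Running one such fill per numbered contour yields the densely labeled slices that are then stacked and resampled into 3D regions.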