Magnetic resonance imaging (MRI) is useful for the detection of abnormalities affecting maternal and fetal health. In this study, we used a fully convolutional neural network for simultaneous segmentation of the uterine cavity and placenta on MR images. We trained the network with MR images of 181 patients, with 157 for training and 24 for validation. The segmentation performance of the algorithm was evaluated using MR images of 60 additional patients that were not involved in training. The average Dice similarity coefficients achieved for the uterine cavity and placenta were 92% and 80%, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of less than 1.1% compared to manual estimations. Automated segmentation, when incorporated into clinical use, has the potential to quantify, standardize, and improve placental assessment, resulting in improved outcomes for mothers and fetuses.
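As a reference for the reported metrics, below is a minimal sketch of how a Dice similarity coefficient and a relative volume error can be computed from binary masks. The function names and voxel-volume bookkeeping are illustrative, not taken from the paper's code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

def volume_error_percent(pred: np.ndarray, truth: np.ndarray,
                         voxel_volume_mm3: float) -> float:
    """Relative volume estimation error (%) against a manual mask."""
    v_pred = pred.sum() * voxel_volume_mm3
    v_truth = truth.sum() * voxel_volume_mm3
    return 100.0 * abs(v_pred - v_truth) / v_truth
```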
In women with placenta accreta spectrum (PAS), patient management may involve cesarean hysterectomy at delivery. Magnetic resonance imaging (MRI) has been used for further evaluation of PAS and surgical planning. This work tackles two prediction problems: predicting the presence of PAS and predicting hysterectomy using MR images of pregnant patients. First, we extracted approximately 2,500 radiomic features from MR images with two regions of interest: the placenta and the uterus. In addition to analyzing two regions of interest, we dilated the placenta and uterus masks by 5, 10, 15, and 20 mm to gain insights from the myometrium, where the uterus and placenta overlap in the case of PAS. This study cohort includes 241 pregnant women. Of these women, 89 underwent hysterectomy while 152 did not; 141 had suspected PAS, and 100 did not. We obtained an accuracy of 0.88 for predicting hysterectomy and an accuracy of 0.92 for classifying suspected PAS. These results further validate the radiomic analysis tool, which can be useful for aiding clinicians in decision making on the care of pregnant women.
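The mm-scale mask dilation described above can be reproduced with standard tools. Here is a hedged sketch using SimpleITK, converting each physical radius to a per-axis voxel radius via the image spacing; the helper name and the radius rounding are assumptions, and radiomic extraction itself is only indicated in a comment.

```python
import SimpleITK as sitk

def dilate_mask_mm(mask: sitk.Image, radius_mm: float) -> sitk.Image:
    """Dilate a binary mask by a physical radius (mm converted to voxels)."""
    radius_vox = [max(1, int(round(radius_mm / s))) for s in mask.GetSpacing()]
    return sitk.BinaryDilate(mask, radius_vox, sitk.sitkBall)

# Illustrative use: one dilated region of interest per radius studied.
# Radiomic features would then be extracted from each mask, e.g. with
# pyradiomics' RadiomicsFeatureExtractor.
# dilated = {r: dilate_mask_mm(placenta_mask, r) for r in (5, 10, 15, 20)}
```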
In severe cases, placenta accreta spectrum (PAS) requires emergency hysterectomy, endangering the life of both mother and fetus. Early prediction may reduce complications and aid in management decisions in these high-risk pregnancies. In this work, we developed a novel convolutional network architecture to combine MRI volumes, radiomic features, and custom feature maps to predict PAS severe enough to result in hysterectomy after fetal delivery in pregnant women. We trained, optimized, and evaluated the networks using data from 241 patients, in groups of 157, 24, and 60 for training, validation, and testing, respectively. We found that the network using all three paths produced the best performance, with an AUC of 87.8%, an accuracy of 83.3%, a sensitivity of 85.0%, and a specificity of 82.5%. This deep learning algorithm, deployed in clinical settings, may identify women at risk before birth, resulting in improved patient outcomes.
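The abstract does not give the architecture, so the following PyTorch sketch only illustrates the idea of a three-path network that fuses an MRI volume, custom feature maps, and tabular radiomic features before a binary prediction head; every layer size here is a placeholder.

```python
import torch
import torch.nn as nn

class ThreePathNet(nn.Module):
    """Three-path fusion sketch; all layer sizes are placeholders."""
    def __init__(self, n_radiomics: int = 2500):
        super().__init__()
        self.image_path = nn.Sequential(        # 3D MRI volume path
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.map_path = nn.Sequential(          # custom feature-map path
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.radiomics_path = nn.Sequential(    # tabular radiomics path
            nn.Linear(n_radiomics, 32), nn.ReLU())
        self.head = nn.Linear(8 + 8 + 32, 1)    # hysterectomy yes/no

    def forward(self, volume, feature_map, radiomics):
        fused = torch.cat([self.image_path(volume),
                           self.map_path(feature_map),
                           self.radiomics_path(radiomics)], dim=1)
        return torch.sigmoid(self.head(fused))
```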
Given the prevalence of cardiovascular diseases (CVDs), the segmentation of the heart on cardiac computed tomography (CT) remains of great importance. Manual segmentation is time-consuming, and intra- and inter-observer variabilities yield inconsistent and inaccurate results. Computer-assisted approaches to segmentation, and deep learning in particular, offer a potentially accurate and efficient alternative to manual segmentation. However, fully automated methods for cardiac segmentation have yet to achieve results accurate enough to compete with expert segmentation. Thus, we focus on a semi-automated deep learning approach to cardiac segmentation that bridges the divide between the higher accuracy of manual segmentation and the higher efficiency of fully automated methods. In this approach, we selected a fixed number of points along the surface of the cardiac region to mimic user interaction. Points-distance maps were then generated from these point selections, and a three-dimensional (3D) fully convolutional neural network (FCNN) was trained using the points-distance maps to provide a segmentation prediction. Testing our method with different numbers of selected points, we achieved Dice scores ranging from 0.742 to 0.917 across the four chambers. Specifically, Dice scores averaged 0.846 ± 0.059, 0.857 ± 0.052, 0.826 ± 0.062, and 0.824 ± 0.062 for the left atrium, left ventricle, right atrium, and right ventricle, respectively, across all point selections. This point-guided, image-independent, deep learning segmentation approach illustrated promising performance for chamber-by-chamber delineation of the heart in CT images.
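A points-distance map of the kind described can be generated with a Euclidean distance transform, as in this sketch; the function name and point format are assumptions. The resulting map can then be stacked with the CT volume as an extra input channel to the 3D FCNN.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def points_distance_map(shape, points, spacing):
    """Distance (in physical units) from every voxel to the nearest
    user-selected surface point."""
    seeds = np.ones(shape, dtype=bool)
    for p in points:                 # points given as (z, y, x) voxel indices
        seeds[tuple(p)] = False      # the EDT measures distance to zeros
    return distance_transform_edt(seeds, sampling=spacing)
```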
Phantoms are invaluable tools broadly used for research and training purposes designed to mimic tissues and structures in the body. In this paper, polyvinyl chloride (PVC)-plasticizer and silicone rubbers were explored as economical materials to reliably create long-lasting, realistic kidney phantoms with contrast under both ultrasound (US) and X-ray imaging. The radiodensity properties of varying formulations of soft PVC-based gels were characterized to allow adjustable image intensity and contrast. Using this data, a phantom creation workflow was established which can be easily adapted to match radiodensity values of other organs and soft tissues in the body. Internal kidney structures such as the medulla and ureter were created using a two-part molding process to allow greater phantom customization. The kidney phantoms were imaged under US and X-ray scanners to compare the contrast enhancement of a PVC-based medulla versus a silicone-based medulla. Silicone was found to have higher attenuation than plastic under X-ray imaging, but poor quality under US imaging. PVC was found to exhibit good contrast under X-ray imaging and excellent performance for US imaging. Finally, the durability and shelf life of our PVC-based phantoms were observed to be vastly superior to that of common agar-based phantoms. The work presented here allows extended periods of usage and storage for each kidney phantom while simultaneously preserving anatomical detail, contrast under dual-modality imaging, and low cost of materials.
Ultrasound-guided biopsy is widely used for disease detection and diagnosis. We plan to register preoperative imaging, such as positron emission tomography/computed tomography (PET/CT) and/or magnetic resonance imaging (MRI), with real-time intraoperative ultrasound imaging for improved localization of suspicious lesions that may not be seen on ultrasound but are visible on other imaging modalities. Once the image registration is completed, we will combine the images from two or more imaging modalities and use a Microsoft HoloLens 2 augmented reality (AR) headset to display three-dimensional (3D) segmented lesions and organs from previously acquired images alongside real-time ultrasound images. In this work, we are developing a multi-modal, 3D augmented reality system for potential use in ultrasound-guided prostate biopsy. Preliminary results demonstrate the feasibility of combining images from multiple modalities into an AR-guided system.
Hyperspectral endoscopy can offer multiple advantages compared to conventional endoscopy. Our goal is to design and develop a real-time hyperspectral endoscopic imaging system for the diagnosis of gastrointestinal (GI) tract cancers using a micro-LED array as an in-situ illumination source. The wavelengths of the system range from ultraviolet to visible and near infrared. To evaluate the use of the LED array for hyperspectral imaging, we designed a prototype system and conducted ex vivo experiments using normal and cancerous tissues of mice, chicken, and sheep. We compared the results of our LED-based approach with our reference hyperspectral camera system. The results confirm the similarity between the LED-based hyperspectral imaging system and the reference HSI camera. Our LED-based hyperspectral imaging system can be used not only as an endoscope but also as a laparoscopic or handheld device for cancer detection and surgery.
KEYWORDS: Image segmentation, Magnetic resonance imaging, Uterus, 3D image processing, 3D modeling, Data modeling, Solids, Image processing algorithms and systems, Fetus, Convolutional neural networks
Purpose: Magnetic resonance imaging has been recently used to examine the abnormalities of the placenta during pregnancy. Segmentation of the placenta and uterine cavity allows quantitative measures and further analyses of the organs. The objective of this study is to develop a segmentation method with minimal user interaction.
Approach: We developed a fully convolutional neural network (CNN) for simultaneous segmentation of the uterine cavity and placenta in three dimensions (3D) while a minimal operator interaction was incorporated for training and testing of the network. The user interaction guided the network to localize the placenta more accurately. In the experiments, we trained two CNNs, one using 70 normal training cases and the other using 129 training cases including normal cases as well as cases with suspected placenta accreta spectrum (PAS). We evaluated the performance of the segmentation algorithms on two test sets: one with 20 normal cases and the other with 50 images from both normal women and women with suspected PAS.
Results: For the normal test data, the average Dice similarity coefficient (DSC) was 92% and 82% for the uterine cavity and placenta, respectively. For the combination of normal and abnormal cases, the DSC was 88% and 83% for the uterine cavity and placenta, respectively. The 3D segmentation algorithm estimated the volume of the normal and abnormal uterine cavity and placenta with average volume estimation errors of 4% and 9%, respectively.
Conclusions: The deep learning-based segmentation method provides a useful tool for volume estimation and analysis of the placenta and uterine cavity in human placental imaging.
We designed a compact, real-time LED-based endoscopic imaging system for the detection of various diseases including cancer. In gastrointestinal applications, conventional endoscopy cannot reliably differentiate tumor from normal tissue. Current hyperspectral imaging systems are too slow to be used for real-time endoscopic applications. We are investigating real-time spectral imaging for different tissue types. Our objective is to develop a catheter for real-time hyperspectral gastrointestinal endoscopy. The endoscope uses multiple wavelengths within UV, visible, and IR light spectra generated by a micro-LED array. We capture images with a monochrome micro camera, which is cost-effective and smaller than the current hyperspectral imagers. A wireless transceiver sends the captured images to a workstation for further processing, such as tumor detection. The spatial resolution of the system is defined by camera resolution and the distance to the object, while the number of LEDs in the multi-wavelength light source determines the spectral resolution. To investigate the properties and the limitations of our high-speed spectral imaging approach, we designed a prototype system. We conducted two experiments to measure the optimal forward voltages and lighting duration of the LEDs. These factors affect the maximum feasible imaging rate and resolution. The lighting duration of each LED can be shorter than 10 ms while producing an image with a high signal-to-noise ratio and no illumination interference. These results support the idea of using a high-speed camera and an LED array for real-time hyperspectral endoscopic imaging.
Cardiac catheterization is a delicate procedure often used during various heart operations. However, it carries a myriad of risks, including damage to the vessel or the heart itself, blood clots, and arrhythmias. Many of these risks increase in probability as the length of the operation increases, creating demand for a more accurate procedure that also reduces the overall time required. To this end, we developed an adaptable virtual reality simulation and visualization method to provide essential information to the physician ahead of time, with the goals of reducing potential risks, decreasing operation time, and improving the accuracy of cardiac catheterization procedures. We additionally conducted a phantom study to evaluate the impact of using our virtual reality system prior to a procedure.
Surgery is a major treatment method for squamous cell carcinoma (SCC). During surgery, an insufficient tumor margin may lead to local recurrence of cancer. Hyperspectral imaging (HSI) is a promising optical imaging technique for in vivo cancer detection and tumor margin assessment. In this study, a fully convolutional network (FCN) was implemented for tumor detection and margin assessment in hyperspectral images of SCC. The FCN was trained and tested with hyperspectral images of 25 ex vivo SCC surgical specimens from 20 different patients. The network was evaluated per patient and achieved pixel-level tissue classification with an average AUC of 0.88, an accuracy of 0.83, a sensitivity of 0.84, and a specificity of 0.70. The 95% Hausdorff distance of the assessed tumor margin in 17 patients was less than 2 mm, and classification of each tissue specimen took less than 10 seconds. The proposed method potentially facilitates intraoperative tumor margin assessment and improves surgical outcomes.
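The 95% Hausdorff distance reported above can be computed from boundary point sets as follows; this is a generic sketch (quadratic in memory, fine for small boundaries), not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hausdorff_95(pred_pts: np.ndarray, truth_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two boundary
    point sets (arrays of shape (N, 2) or (N, 3), in physical units)."""
    d = cdist(pred_pts, truth_pts)            # all pairwise distances
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```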
A deep learning (DL)-based segmentation tool was applied to a new magnetic resonance imaging dataset of pregnant women with suspected placenta accreta spectrum (PAS). Radiomic features from DL segmentation were compared to those from expert manual segmentation via intraclass correlation coefficients (ICC) to assess reproducibility. An additional imaging marker quantifying the placental location within the uterus (PLU) was included. Features with an ICC ≥ 0.7 were considered reproducible and used to build logistic regression models to predict hysterectomy. Of 2059 features, 781 (37.9%) had an ICC < 0.7. AUC was 0.69 (95% CI 0.63-0.74) for manually segmented data and 0.78 (95% CI 0.73-0.83) for DL-segmented data.
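A minimal sketch of the reproducibility-filtered modeling step, assuming the per-feature ICC values have already been computed; the variable names and cross-validation setup are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fit_reproducible_model(features, icc, y, threshold=0.7):
    """Keep features whose ICC meets the threshold, then fit a logistic
    regression to predict hysterectomy (0/1); report cross-validated AUC."""
    keep = icc >= threshold                    # reproducible features only
    model = LogisticRegression(max_iter=1000)
    auc = cross_val_score(model, features[:, keep], y,
                          scoring="roc_auc", cv=5).mean()
    return model.fit(features[:, keep], y), auc
```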
Accurate segmentation of the prostate on computed tomography (CT) has many diagnostic and therapeutic applications. However, manual segmentation is time-consuming and suffers from high inter- and intra-observer variability. Computer-assisted approaches are useful to speed up the process and increase the reproducibility of the segmentation. Deep learning-based segmentation methods have shown potential for quick and accurate segmentation of the prostate on CT images. However, difficulties in obtaining manual, expert segmentations on a large quantity of images limit further progress. Thus, we proposed an approach to train a base model on a small, manually-labeled dataset and fine-tuned the model using unannotated images from a large dataset without any manual segmentation. The datasets used for pre-training and fine-tuning the base model have been acquired in different centers with different CT scanners and imaging parameters. Our fine-tuning method increased the validation and testing Dice scores. A paired, two-tailed t-test shows a significant change in test score (p = 0.017), demonstrating that unannotated images can be used to increase the performance of automated segmentation models.
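The significance test mentioned above is a standard paired, two-tailed t-test; a small self-contained example (with made-up Dice scores, not the study's data) follows.

```python
import numpy as np
from scipy.stats import ttest_rel

# Illustrative per-case test Dice scores, not the paper's data.
dice_base = np.array([0.80, 0.78, 0.85, 0.82, 0.79])
dice_finetuned = np.array([0.83, 0.80, 0.86, 0.84, 0.82])

t_stat, p_value = ttest_rel(dice_finetuned, dice_base)  # two-tailed by default
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```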
Guided biopsy of soft tissue lesions can be challenging in the presence of sensitive organs or when the lesion itself is small. Computed tomography (CT) is the most frequently used modality to target soft tissue lesions. In order to aid physicians, small field of view (FOV) low dose non-contrast CT volumes are acquired prior to intervention while the patient is on the procedure table to localize the lesion and plan the best approach. However, patient motion between the end of the scan and the start of the biopsy procedure can make it difficult for a physician to translate the lesion location from the CT onto the patient body, especially for a deep-seated lesion. In addition, the needle trajectory must be managed well in three dimensions in order to reach the lesion and avoid vital structures. This is especially challenging for less experienced interventionists. These challenges usually result in multiple additional image acquisitions during the course of the procedure to ensure accurate needle placement, especially when multiple core biopsies are required. In this work, we present an augmented reality (AR)-guided biopsy system and procedure for soft tissue and lung lesions and quantify the results using a phantom study. For soft tissue lesions, we found an average error of 0.75 cm from the center of the lesion when AR guidance was used, compared to an error of 1.52 cm during unguided biopsy; for lung lesions, the average error was 0.62 cm from the center of the tumor with AR guidance versus 1.12 cm with unguided biopsy. The AR-guided system is able to improve the accuracy and could be useful in clinical applications.
Squamous cell carcinoma (SCC) comprises over 90 percent of tumors in the head and neck. The diagnosis process involves performing surgical resection of tissue and creating histological slides from the removed tissue. Pathologists detect SCC in histology slides but may fail to correctly identify tumor regions within the slides. In this study, a dataset of patches extracted from 200 digitized histological images from 84 head and neck SCC patients was used to train, validate, and test the segmentation performance of a fully convolutional U-Net architecture. The neural network achieved a pixel-level segmentation AUC of 0.89 on the testing group. The average segmentation time for whole slide images was 72 seconds. The training, validation, and testing process in this experiment produced a model that has the potential to help segment SCC in histological images with improved speed and accuracy compared to the manual segmentation performed by pathologists.
Wearable augmented reality (AR) is an emerging technology with enormous potential for use in the medical field, from training and procedure simulations to image-guided surgery. Medical AR seeks to enable surgeons to see tissue segmentations in real time. With the objective of achieving real-time guidance, the emphasis on speed produces the need for a fast method for imaging and classification. Hyperspectral imaging (HSI) is a non-contact, optical imaging modality that rapidly acquires hundreds of images of tissue at different wavelengths, which can be used to generate spectral data of the tissue. Combining HSI information and machine-learning algorithms allows for effective tissue classification. In this paper, we constructed a brain tissue phantom with porcine blood, yellow-dyed gelatin, and colorless gelatin to represent blood vessels, tumor, and normal brain tissue, respectively. Using a segmentation algorithm, hundreds of hyperspectral images were compiled to classify each of the pixels. Three segmentation labels were generated from the data, each corresponding to a different tissue type. Our system virtually superimposes the HSI channels and segmentation labels of a brain tumor phantom onto the real scene using the HoloLens AR headset. The user can manipulate and interact with the segmentation labels and HSI channels by repositioning, rotating, changing visibility, and switching between them. All actions can be performed through either hand or voice controls. This creates a convenient and multifaceted visualization of brain tissue in real time with minimal user restrictions. We demonstrate the feasibility of a fast and practical HSI-AR technique for potential use in image-guided brain surgery.
We developed a reliable and repeatable process to create hyper-realistic kidney phantoms with tunable image visibility under ultrasound (US) and CT imaging modalities. A methodology was defined to create phantoms that could be produced for renal biopsy evaluation. The final complex kidney phantom contained the critical structures of a kidney: the kidney cortex, medulla, and ureter. Lesions were also integrated into the phantom to mimic the presence of tumors during biopsy. The phantoms were created and scanned with ultrasound and CT scanners to verify the visibility of the complex internal structures and to observe the interactions between material properties. The result was an improved understanding of materials with suitable acoustic impedance and attenuation properties for replicating human organs in the field of image-guided interventions.
KEYWORDS: Image segmentation, Magnetic resonance imaging, Uterus, 3D image processing, Convolutional neural networks, Fetus, 3D modeling, Image processing algorithms and systems
Segmentation of the uterine cavity and placenta in fetal magnetic resonance (MR) imaging is useful for the detection of abnormalities that affect maternal and fetal health. In this study, we used a fully convolutional neural network for 3D segmentation of the uterine cavity and placenta while a minimal operator interaction was incorporated for training and testing the network. The user interaction guided the network to localize the placenta more accurately. We trained the network with 70 training and 10 validation MRI cases and evaluated the algorithm segmentation performance using 20 cases. The average Dice similarity coefficient was 92% and 82% for the uterine cavity and placenta, respectively. The algorithm could estimate the volume of the uterine cavity and placenta with average errors of 2% and 9%, respectively. The results demonstrate that deep learning-based segmentation and volume estimation are possible and can potentially be useful for clinical applications of human placental imaging.
Kidney biopsies are currently performed using preoperative imaging to identify the lesion of interest and intraoperative imaging used to guide the biopsy needle to the tissue of interest. Often, these are not the same modalities forcing the physician to perform a mental cross-modality fusion of the preoperative and intraoperative scans. This limits the accuracy and reproducibility of the biopsy procedure. In this study, we developed an augmented reality system to display holographic representations of lesions superimposed on a phantom. This system allows the integration of preoperative CT scans with intraoperative ultrasound scans to better determine the lesion’s real-time location. An automated deformable registration algorithm was used to increase the accuracy of the holographic lesion locations, and a magnetic tracking system was developed to provide guidance for the biopsy procedure. Our method achieved a targeting accuracy of 2.9 ± 1.5 mm in a renal phantom study.
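The abstract does not specify the deformable registration algorithm, so the sketch below shows one common choice, a mutual-information-driven B-spline registration in SimpleITK, purely as an assumption of what such a step could look like; in practice a rigid initialization would usually precede it.

```python
import SimpleITK as sitk

def deformable_register(fixed: sitk.Image, moving: sitk.Image) -> sitk.Transform:
    """Minimal B-spline registration sketch; all parameters are illustrative."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])  # control-point grid
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(50)  # suits cross-modality images
    reg.SetOptimizerAsLBFGSB()
    reg.SetInitialTransform(tx, True)
    reg.SetInterpolator(sitk.sitkLinear)
    return reg.Execute(fixed, moving)
```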
Computer-assisted image segmentation techniques could help clinicians to perform the border delineation task faster with lower inter-observer variability. Recently, convolutional neural networks (CNNs) have become widely used for automatic image segmentation. In this study, we used a technique to involve observer inputs for supervising CNNs to improve the accuracy of the segmentation performance. We added a set of sparse surface points as an additional input to supervise the CNNs for more accurate image segmentation. We tested our technique by applying minimal interactions to supervise the networks for segmentation of the prostate on magnetic resonance images. We used U-Net and a new network architecture that was based on U-Net (dual-input path [DIP] U-Net), and showed that our supervising technique could significantly increase the segmentation accuracy of both networks as compared to fully automatic segmentation using U-Net. We also showed DIP U-Net outperformed U-Net for supervised image segmentation. We compared our results to the measured inter-expert observer difference in manual segmentation. This comparison suggests that applying about 15 to 20 selected surface points can achieve a performance comparable to manual segmentation.
KEYWORDS: Image segmentation, Prostate, Computed tomography, 3D modeling, 3D image processing, Image processing algorithms and systems, Performance modeling, Data modeling, Statistical modeling, Algorithm development
Segmentation of the prostate in computed tomography (CT) is used for planning and guidance of prostate treatment procedures. However, due to the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remaining for testing the convolutional neural network model. Compared to manual segmentation by an expert radiologist, our method achieved 83% ± 6% for Dice similarity coefficient (DSC), 2.3 ± 0.6 mm for mean absolute distance (MAD), and 1.9 ± 4.0 cm³ for signed volume difference (ΔV). The average recorded interexpert difference measured on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images.
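For reference, the boundary and volume metrics used above can be computed as in this sketch (surface point sets in physical units; the function names are illustrative):

```python
import numpy as np
from scipy.spatial.distance import cdist

def mean_absolute_distance(pred_surf: np.ndarray, truth_surf: np.ndarray) -> float:
    """Symmetric mean absolute surface distance between two point sets."""
    d = cdist(pred_surf, truth_surf)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def signed_volume_difference(pred_mask, truth_mask, voxel_volume_cm3):
    """Signed volume difference (delta V) in cm^3 from binary masks."""
    return float(pred_mask.sum() - truth_mask.sum()) * voxel_volume_cm3
```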
Primary management for head and neck squamous cell carcinoma (SCC) involves surgical resection with negative cancer margins. Pathologists guide surgeons during these operations by detecting SCC in histology slides made from the excised tissue. In this study, 192 digitized histological images from 84 head and neck SCC patients were used to train, validate, and test an inception-v4 convolutional neural network. The proposed method performs with an AUC of 0.91 and 0.92 for the validation and testing groups, respectively. The careful experimental design yields a robust method with the potential to increase the efficiency and accuracy of pathologists in detecting SCC in histological images.
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland shape is usually globular with a smoothly curved surface that could be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between the prostate surface points to model the prostate shape variation based on a statistical point distribution modeling. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned prostate shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training, and 14 images for testing the algorithm performance. We compared the results to two sets of experts’ manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC). The average measured differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm illustrated a fast, accurate, and robust performance for 3D prostate segmentation. The accuracy of the algorithm is within the inter-expert variability observed in manual segmentation and comparable to the best performance results reported in the literature.
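A statistical point distribution model of the kind described is typically built with PCA over corresponding, aligned surface points; the following is a generic sketch of that step, not the authors' code.

```python
import numpy as np

def build_point_distribution_model(shapes: np.ndarray, var_kept: float = 0.95):
    """PCA-based point distribution model. `shapes` is (n_subjects,
    3 * n_points), with corresponding surface points already aligned."""
    mean_shape = shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(shapes - mean_shape, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean_shape, vt[:n_modes], var[:n_modes]  # mean, modes, variances
```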
Prostate segmentation in computed tomography (CT) images is useful for planning and guidance of the diagnostic and therapeutic procedures. However, the low soft-tissue contrast of CT images makes the manual prostate segmentation a time-consuming task with high inter-observer variation. We developed a semi-automatic, three-dimensional (3D) prostate segmentation algorithm using shape and texture analysis and have evaluated the method against manual reference segmentations. In a training data set, we defined an inter-subject correspondence between surface points in the spherical coordinate system. We applied this correspondence to model the globular and smoothly curved shape of the prostate with 86 well-distributed surface points using a point distribution model that captures prostate shape variation. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. For segmentation, we used the learned shape and texture characteristics of the prostate in CT images and we used a set of user inputs for prostate localization. We trained our algorithm using 23 CT images and tested it on 10 images. We evaluated the results compared with those of two experts’ manual reference segmentations using different error metrics. The average measured Dice similarity coefficient (DSC) and mean absolute distance (MAD) were 88 ± 2% and 1.9 ± 0.5 mm, respectively. The averaged inter-expert difference measured on the same dataset was 91 ± 4% (DSC) and 1.3 ± 0.6 mm (MAD). With no prior intra-patient information, the proposed algorithm showed a fast, robust and accurate performance for 3D CT segmentation.
KEYWORDS: Image segmentation, Prostate, Magnetic resonance imaging, Error analysis, Bladder, Image processing algorithms and systems, Principal component analysis, 3D modeling, Cancer, Prostate cancer
Prostate segmentation on T2w MRI is important for several diagnostic and therapeutic procedures for prostate cancer. Manual segmentation is time-consuming, labor-intensive, and subject to high interobserver variability. This study investigated the suitability of computer-assisted segmentation algorithms for clinical translation, based on measurements of interoperator variability and of the editing time required to yield clinically acceptable segmentations. A multioperator pilot study was performed under three pre- and postediting conditions: manual, semiautomatic, and automatic segmentation. We recorded the required editing time for each segmentation and measured the editing magnitude based on five different spatial metrics. We recorded average editing times of 213, 328, and 393 s for manual, semiautomatic, and automatic segmentation, respectively, while an average fully manual segmentation time of 564 s was recorded. The reduced measured postediting interoperator variability of semiautomatic and automatic segmentations compared to the manual approach indicates the potential of computer-assisted segmentation for generating a clinically acceptable segmentation faster and with higher consistency. The lack of strong correlation between editing time and the values of typically used error metrics (ρ < 0.5) implies that the necessary postsegmentation editing time needs to be measured directly in order to evaluate an algorithm's suitability for clinical translation.
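The correlation referenced above (ρ < 0.5) is a rank correlation between editing time and error-metric values; a self-contained example with made-up numbers, not the study's measurements:

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative values only: editing times (s) and a pre-editing error metric.
edit_time = np.array([213, 328, 393, 290, 510])
error_metric = np.array([0.08, 0.12, 0.10, 0.15, 0.09])  # e.g., 1 - DSC

rho, p = spearmanr(edit_time, error_metric)
print(f"rho = {rho:.2f}, p = {p:.3f}")
```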
Measurement of prostate tumour volume can inform prognosis and treatment selection, including an assessment of the suitability and feasibility of focal therapy, which can potentially spare patients the deleterious side effects of radical treatment. Prostate biopsy is the clinical standard for diagnosis but provides limited information regarding tumour volume due to sparse tissue sampling. A non-invasive means for accurate determination of tumour burden could be of clinical value and an important step toward reduction of overtreatment. Multi-parametric magnetic resonance imaging (MPMRI) is showing promise for prostate cancer diagnosis. However, the accuracy and inter-observer variability of prostate tumour volume estimation based on separate expert contouring of T2-weighted (T2W), dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) MRI sequences acquired using an endorectal coil at 3T is currently unknown. We investigated this question using a histologic reference standard based on a highly accurate MPMRI-histology image registration and a smooth interpolation of planimetric tumour measurements on histology. Our results showed that prostate tumour volumes estimated based on MPMRI consistently overestimated histological reference tumour volumes. The variability of tumour volume estimates across the different pulse sequences exceeded inter-observer variability within any sequence. Tumour volume estimates on DCE MRI provided the lowest inter-observer variability and the highest correlation with histology tumour volumes, whereas the apparent diffusion coefficient (ADC) maps provided the lowest volume estimation error. If validated on a larger data set, the observed correlations could support the development of automated prostate tumour volume segmentation algorithms as well as correction schemes for tumour burden estimation on MPMRI.
Accurate pathology assessment of post-prostatectomy specimens is important to determine the need for and to guide potentially life-saving adjuvant therapy. Digital pathology imaging is enabling a transition to a more objective quantification of some surgical pathology assessments, such as tumour volume, that are currently visually estimated by pathologists and subject to inter-observer variability. One challenge for tumour volume quantification is the traditional 3–5 mm spacing of images acquired from sections of radical prostatectomy specimens. Tumour volume estimates may benefit from a well-motivated approach to inter-slide tumour boundary interpolation. We implemented and tested a level set-based interpolation method and found that it produced 3D tumour surfaces that may be more biologically plausible than those produced via a simpler nearest-slide interpolation. We found that the simpler method produced larger tumour volumes, compared to the level set method, by a median factor of 2.3. For contexts where only tumour volume is of interest, we determined that the volumes produced via the simpler method can be linearly adjusted to match the level set-produced volumes. The smoother surfaces from level set interpolation yielded measurable differences in tumour boundary location; this may be important in several clinical/research contexts (e.g., pathology-based imaging validation for focal therapy planning).
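A level-set-style interpolation between sparsely spaced tumour contours can be sketched by blending signed distance maps of adjacent slides; this illustrates the idea rather than the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed distance map: negative inside the contour, positive outside."""
    mask = mask.astype(bool)
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def interpolate_slices(mask_a, mask_b, alpha: float) -> np.ndarray:
    """Intermediate contour at fraction `alpha` between two slides, taken
    as the zero level set of linearly blended signed distance maps."""
    phi = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return phi <= 0
```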
KEYWORDS: Image segmentation, Prostate, Magnetic resonance imaging, 3D image processing, 3D modeling, Cancer, Medical imaging, Data modeling, Magnetism, Shape analysis
3D segmentation of the prostate in medical images is useful to prostate cancer diagnosis and therapy guidance, but is time-consuming to perform manually. Clinical translation of computer-assisted segmentation algorithms for this purpose requires a comprehensive and complementary set of evaluation metrics that are informative to the clinical end user. We have developed an interactive 3D prostate segmentation method for 1.5T and 3.0T T2-weighted magnetic resonance imaging (T2W MRI) acquired using an endorectal coil. We evaluated our method against manual segmentations of 36 3D images using complementary boundary-based (mean absolute distance; MAD), regional overlap (Dice similarity coefficient; DSC) and volume difference (ΔV) metrics. Our technique is based on inter-subject prostate shape and local boundary appearance similarity. In the training phase, we calculated a point distribution model (PDM) and a set of local mean intensity patches centered on the prostate border to capture shape and appearance variability. To segment an unseen image, we defined a set of rays – one corresponding to each of the mean intensity patches computed in training – emanating from the prostate centre. We used a radial-based search strategy and translated each mean intensity patch along its corresponding ray, selecting as a candidate the boundary point with the highest normalized cross correlation along each ray. These boundary points were then regularized using the PDM. For the whole gland, we measured a mean±std MAD of 2.5±0.7 mm, DSC of 80±4%, and ΔV of 1.1±8.8 cc. We also provided an anatomic breakdown of these metrics within the prostatic base, mid-gland, and apex.
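The radial search step can be sketched as follows: normalized cross correlation scores each candidate patch along a ray, and the best-scoring position is proposed as a boundary point. The function names and patch handling are assumptions; PDM regularization would follow this per-ray selection.

```python
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """NCC between a learned mean intensity patch and a candidate patch."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def best_boundary_index(candidate_patches, mean_patch) -> int:
    """Index of the candidate patch along one ray that best matches the
    learned mean intensity patch."""
    scores = [normalized_cross_correlation(p, mean_patch)
              for p in candidate_patches]
    return int(np.argmax(scores))
```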