Generating pseudo-CT images from MRI provides electron density maps for radiation therapy planning and avoids additional CT scans. Fully convolutional neural networks have been proposed for pseudo-CT generation. We investigated the influence of architectures and hyperparameters on the quality of the pseudo-CT images. We used fully convolutional neural networks to translate between registered MRI and CT volumes of the pelvic region: two UNet variants using transposed convolutions or bilinear upsampling, LinkNet using residual blocks and strided convolutions for downsampling, and transnet, which we designed to keep the tensor spatial dimensions equal to the image size. Different architectures yielded similar error metrics, although the pseudo-CTs differ visually. Comparison of LinkNet and UNet showed that downsampling does not affect translation quality. Replacing transposed convolutions with bilinear upsampling improved the sharpness of the pseudo-CTs. Translation quality saturates quickly with the number of convolution layers; increasing the number of layers from 4 to 19 decreases the mean absolute error (MAE) from 44 HU to 37 HU. Varying the number of feature maps showed that good translation quality can be achieved with networks substantially narrower than those previously published. Overall, the pseudo-CTs have an MAE below 45 HU, computed inside the body contour of the true CT.
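To illustrate the upsampling comparison, here is a minimal PyTorch sketch of the two decoder variants; the block structure, channel sizes, and names are illustrative assumptions, not the paper's actual networks.

```python
# Minimal sketch (PyTorch) of a UNet-style decoder block with learned
# transposed convolutions vs. fixed bilinear upsampling; hypothetical layout.
import torch
import torch.nn as nn

class UpBlockTransposed(nn.Module):
    """Decoder block that upsamples with a learned transposed convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([x, skip], dim=1)  # concatenate encoder skip connection
        return torch.relu(self.conv(x))

class UpBlockBilinear(nn.Module):
    """Same block with fixed bilinear upsampling, which avoids the
    checkerboard artifacts transposed convolutions can introduce."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.reduce = nn.Conv2d(in_ch, out_ch, kernel_size=1)  # match channels
        self.conv = nn.Conv2d(out_ch * 2, out_ch, kernel_size=3, padding=1)

    def forward(self, x, skip):
        x = self.reduce(self.up(x))
        x = torch.cat([x, skip], dim=1)
        return torch.relu(self.conv(x))
```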
Combined PET/MR imaging makes it possible to incorporate the high-resolution anatomical information delivered by MRI into the PET reconstruction algorithm, improving PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. On the other hand, the Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, makes it possible to reconstruct PET data without forward- and back-projection operations. With suitable modifications to the Markov chain transition kernel, anatomical a priori knowledge can be included in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17±2% increased SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice, and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
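The abstract does not give the modified transition kernel, so the following is only a toy illustration of the general idea: a Metropolis-style move of one event between voxel origins whose acceptance ratio is additionally weighted by an anatomical prior derived from a Dixon fat image. All names, rates, and the acceptance term are hypothetical.

```python
# Toy sketch of anatomically weighted origin moves in an Origin Ensemble-like
# Metropolis step; schematic only, not the authors' kernel.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_events = 100, 5000
fat_fraction = rng.random(n_voxels)       # hypothetical Dixon fat-fraction image
prior = 1.0 - 0.8 * fat_fraction          # down-weight fatty voxels
counts = np.bincount(rng.integers(0, n_voxels, n_events), minlength=n_voxels)

for _ in range(100_000):
    src = rng.choice(n_voxels, p=counts / counts.sum())  # voxel of a random event
    dst = rng.integers(0, n_voxels)                      # proposed new origin
    # toy acceptance ratio: OE-like count term times the anatomical prior ratio
    ratio = (counts[dst] + 1) / counts[src] * prior[dst] / prior[src]
    if rng.random() < min(1.0, ratio):
        counts[src] -= 1
        counts[dst] += 1
# `counts` now approximates an anatomically informed activity distribution
```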
Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from the superior soft-tissue contrast compared to CT images. For these images, an automatic delineation of the prostate or cervix and of organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation based on a watershed transform over high image gradient values and gray-value valleys, together with the classification of watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior to those of a simple region-by-region classification.
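A minimal sketch of the two-stage idea, using scikit-image for the watershed over-segmentation and PyMaxflow for the graph cut over watershed regions; the gray-value models and smoothness weight are illustrative placeholders, not the paper's parameters.

```python
# Watershed regions on the gradient image, then a region-level graph cut
# classifying regions into bladder contents vs. tissue (2D slice sketch).
import numpy as np
import maxflow
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_bladder(img, bladder_mean, tissue_mean, smooth=10.0):
    labels = watershed(sobel(img))          # regions between gradient ridges
    n = labels.max()
    means = np.array([img[labels == r + 1].mean() for r in range(n)])

    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n)
    # unary terms: how well each region's gray value fits bladder vs. tissue
    for r in range(n):
        g.add_tedge(nodes[r], (means[r] - tissue_mean) ** 2,
                              (means[r] - bladder_mean) ** 2)
    # pairwise terms between adjacent regions encourage a smooth boundary
    right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    down = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    pairs = np.unique(np.vstack([right, down]), axis=0)
    for a, b in pairs[pairs[:, 0] != pairs[:, 1]]:
        g.add_edge(nodes[a - 1], nodes[b - 1], smooth, smooth)
    g.maxflow()
    is_bladder = np.array([g.get_segment(nodes[r]) == 0 for r in range(n)])
    return is_bladder[labels - 1]           # per-pixel bladder mask
```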
With the recent introduction of combined Magnetic Resonance Imaging (MRI) / Positron Emission Tomography (PET) systems, the generation of attenuation maps for PET based on MR images has gained substantial attention. One approach to this problem is the segmentation of structures on the MR images with subsequent filling of the segments with the respective attenuation values. Structures of particular interest for the segmentation are the pelvis bones, since they are among the most strongly absorbing structures for many applications, and at the same time they can serve as valuable landmarks for further structure identification. In this work, the model-based segmentation of the pelvis bones on gradient-echo MR images is investigated. A processing chain for the detection and segmentation of the pelvic bones is introduced, and the results are evaluated using CT-generated "ground truth" data. The results indicate that a model-based segmentation of the pelvis bones is feasible with moderate requirements on the pre- and postprocessing steps of the segmentation.
KEYWORDS: Image segmentation, Positron emission tomography, Tissues, Signal attenuation, Magnetic resonance imaging, Lung, Monte Carlo methods, Image processing, Image processing algorithms and systems, Breast
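The abstract above mentions evaluation against CT-generated "ground truth". A typical way to quantify such agreement is by volume overlap and surface distance; the metrics below are assumed for illustration and are not necessarily the ones used in the paper.

```python
# Sketch of overlap (Dice) and mean surface distance between a model-based
# segmentation and a reference mask; distance to the reference object is used
# as a common simplification of the surface-to-surface distance.
import numpy as np
from scipy import ndimage

def dice(seg, ref):
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

def mean_surface_distance(seg, ref, spacing=(1.0, 1.0, 1.0)):
    ref_dist = ndimage.distance_transform_edt(~ref, sampling=spacing)
    surface = seg & ~ndimage.binary_erosion(seg)   # boundary voxels of seg
    return ref_dist[surface].mean()
```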
Recently introduced combined PET/MR scanners must handle the specific problem that the limited MR field of view sometimes truncates arm or body contours, which prevents an accurate calculation of PET attenuation correction maps. Such maps of attenuation coefficients over the body structures are required for a quantitatively correct PET image reconstruction. This paper addresses the problem by presenting a method that segments a preliminary reconstruction of the PET data, time-of-flight non-attenuation-corrected (ToF-NAC) images, and by outlining a processing pipeline that compensates the arm or body truncation with this segmentation. The impact of the truncation compensation is demonstrated together with a comparison of two segmentation methods: simple gray-value threshold segmentation and a watershed algorithm on a gradient image. Our results indicate that with truncation compensation a clinically tolerable quantitative SUV error is robustly achievable.
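A sketch of the simpler of the two variants, gray-value threshold segmentation of the ToF-NAC image followed by filling the truncated part of the MR-derived mu-map; the threshold fraction and soft-tissue attenuation value are illustrative assumptions.

```python
# Segment the body outline from a ToF-NAC PET volume by thresholding, then
# assign a default soft-tissue attenuation coefficient to body voxels that
# the truncated MR mu-map missed.
import numpy as np
from scipy import ndimage

def compensate_truncation(tof_nac, mu_map, thresh_frac=0.1, mu_tissue=0.0096):
    body = tof_nac > thresh_frac * tof_nac.max()   # gray-value threshold
    body = ndimage.binary_fill_holes(body)
    # keep the largest connected component as the body contour
    lbl, n = ndimage.label(body)
    sizes = ndimage.sum(body, lbl, range(1, n + 1))
    body = lbl == (np.argmax(sizes) + 1)
    # voxels inside the body but missing from the truncated mu-map
    truncated = body & (mu_map <= 0)
    out = mu_map.copy()
    out[truncated] = mu_tissue   # ~0.0096 per mm for water-like tissue at 511 keV
    return out
```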
Quantification of potentially cancerous lesions from imaging modalities, most prominently from CT or PET images, plays a crucial role both in diagnosing and staging cancer and in assessing the response of a cancer to therapy, e.g. for lymphoma or lung cancer. For PET imaging, several quantifications that might bear great discriminating potential (e.g. total tumor burden or total tumor glycolysis) involve segmenting the entirety of all cancerous lesions. However, this particular task can be very tedious if it has to be done manually, in particular if the disease is scattered or metastasized and thus consists of numerous foci; this is one of the reasons why only a few clinical studies on these quantifications are available. In this work, we investigate a way to ease the determination of the entirety of cancerous lesions in a PET image of a human. The approach is designed to detect all hot spots within a PET image and to rank them by their probability of being a cancerous lesion. The basis of this component is a modified watershed algorithm; the ranking is performed on a combination of several, primarily morphological, measures derived from the individual basins. The component is embedded in a software suite for assessing response to therapy based on PET images. As a preprocessing step, potential lesions are segmented and indicated to the user, who can select the foci which constitute the tumor and discard the false positives. This procedure substantially simplifies the segmentation of a patient's entire tumor burden. This semi-automatic hot spot detection is evaluated on 17 clinical datasets.
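A minimal sketch of the general scheme: a marker-based watershed on the inverted PET image yields per-lesion basins, and a simple per-basin score ranks the candidates. The feature set, score, and thresholds are illustrative; the paper's modified watershed and morphological measures are not reproduced here.

```python
# Watershed hot-spot candidates with a toy morphological ranking.
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def detect_hot_spots(pet, min_suv=2.5):
    peaks = peak_local_max(pet, min_distance=3, threshold_abs=min_suv)
    markers = np.zeros(pet.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    basins = watershed(-pet, markers, mask=pet > min_suv)
    spots = []
    for r in regionprops(basins, intensity_image=pet):
        # toy score: prefer compact, high-uptake basins
        compactness = r.area / (np.prod(r.image.shape) + 1e-9)
        score = r.intensity_max * compactness
        spots.append((r.label, r.intensity_max, r.area, score))
    return sorted(spots, key=lambda s: -s[3])   # highest-ranked candidates first
```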
Early response assessment of cancer therapy is a crucial component of a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. We have developed algorithms which allow the user to track both tumor volume and standardized uptake value (SUV) measurements during therapy from series of CT and PET images, respectively. To prepare for tumor volume estimation, we developed a new technique for a fast, flexible, and intuitive 3D definition of meshes. This initial surface is then automatically adapted by means of a model-based segmentation algorithm and propagated to each follow-up scan. If necessary, manual corrections can be added by the user. To determine SUV measurements, a prioritized region growing algorithm is employed. For an improved workflow, all algorithms are embedded in a PET/CT therapy monitoring software suite giving the clinician unified and immediate access to all data sets. Whenever the user clicks on a tumor in a baseline scan, the courses of segmented tumor volumes and SUV measurements are automatically identified and displayed to the user as a graph plot. From each course, the therapy progress can be classified as complete or partial response, progressive disease, or stable disease. We have tested our methods on series of PET/CT data from 9 lung cancer patients acquired at Princess Margaret Hospital in Toronto. Each patient underwent three PET/CT scans during radiation therapy. Our results indicate that combining the mean metabolic activity in the tumor with the PET-based tumor volume can lead to earlier response detection than purely volume-based (CT diameter) or purely function-based (e.g. SUVmax or SUVmean) response measures. The new software seems applicable for easy, fast, and reproducible quantification to routinely monitor tumor therapy.
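The SUV measurements tracked over a therapy series typically follow the standard body-weight normalization. The sketch below shows this computation (with 18F decay correction to scan time); the variable names are illustrative.

```python
# Standard body-weight SUV: activity concentration times body weight,
# divided by the injected dose decay-corrected to scan time.
import numpy as np

F18_HALF_LIFE_S = 109.77 * 60.0   # 18F half-life in seconds

def suv_bw(activity_bq_per_ml, injected_dose_bq, weight_kg, delay_s):
    dose_at_scan = injected_dose_bq * 0.5 ** (delay_s / F18_HALF_LIFE_S)
    return activity_bq_per_ml * (weight_kg * 1000.0) / dose_at_scan

def suv_stats(pet_bq_per_ml, mask, dose_bq, weight_kg, delay_s):
    """Mean and max SUV over a segmented tumor region."""
    suv = suv_bw(pet_bq_per_ml[mask], dose_bq, weight_kg, delay_s)
    return suv.mean(), suv.max()
```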
Response assessment of cancer therapy is a crucial component of a more effective and patient-individualized cancer therapy. Integrated PET/CT systems provide the opportunity to combine morphologic with functional information. However, dealing simultaneously with several PET/CT scans poses a serious workflow problem. It can be a difficult and tedious task to extract response criteria based upon an integrated analysis of PET and CT images and to track these criteria over time. In order to improve the workflow for serial analysis of PET/CT scans, we introduce in this paper a fast lesion tracking algorithm. We combine a global multi-resolution rigid registration algorithm with a local block matching and a local region growing algorithm. Whenever the user clicks on a lesion in the baseline PET scan, the course of standardized uptake values (SUV) is automatically identified and shown to the user as a graph plot. We have validated our method on data from 7 patients. Each patient underwent two or three PET/CT scans during the course of a cancer therapy. An experienced nuclear medicine physician manually measured the courses of the maximum SUVs for altogether 18 lesions. We found that the automatic detection of the corresponding lesions yielded SUV measurements nearly identical to the manually measured SUVs. Across the 38 maximum SUVs derived from manually and automatically detected lesions, we observed a correlation of 0.9994 and an average error of 0.4 SUV units.
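A sketch of the local block-matching step: after the global rigid registration, a lesion is re-located in a follow-up scan by maximizing normalized cross-correlation of a block around the clicked position over a small search window. Block and window sizes are illustrative, and the sketch assumes the window stays inside the volume.

```python
# Exhaustive block matching by normalized cross-correlation (3D).
import numpy as np

def block_match(fixed, moving, center, block=9, search=15):
    b, s = block // 2, search // 2
    z, y, x = center
    ref = fixed[z-b:z+b+1, y-b:y+b+1, x-b:x+b+1]
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_pos = -np.inf, center
    for dz in range(-s, s + 1):
        for dy in range(-s, s + 1):
            for dx in range(-s, s + 1):
                zz, yy, xx = z + dz, y + dy, x + dx
                cand = moving[zz-b:zz+b+1, yy-b:yy+b+1, xx-b:xx+b+1]
                cand = (cand - cand.mean()) / (cand.std() + 1e-9)
                ncc = (ref * cand).mean()   # normalized cross-correlation
                if ncc > best:
                    best, best_pos = ncc, (zz, yy, xx)
    return best_pos, best
```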
Respiratory motion is a complicating factor in radiation therapy, tumor ablation, and other treatments of the thorax and upper abdomen. In most cases, the treatment requires precise knowledge of the location of the organ under investigation. One approach to reduce the uncertainty of organ motion caused by breathing is to use prior knowledge of the breathing motion. In this work, we extract lung motion fields of seven patients from 4DCT inhale-exhale images using an iterative shape-constrained deformable model approach. Since the data was acquired for radiotherapy planning, images of the same patient over different weeks of treatment were available. Although respiratory motion has a repetitive character, it is well known that variability in a patient's breathing pattern impedes motion estimation. A detailed motion field analysis is performed in order to investigate the reproducibility of breathing motion over the weeks of treatment. For that purpose, parameters significant for breathing motion are derived. The analysis of the extracted motion fields provides a basis for further breathing motion prediction. Patient-specific motion models are derived by averaging the extracted motion fields of each individual patient. The obtained motion models are adapted to each patient in a leave-one-out test in order to simulate motion estimation on unseen data. Using patient-specific mean motion models, 60% of the breathing motion can be captured on average.
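A sketch of the model-building and evaluation steps, assuming dense voxel displacement fields of shape (3, Z, Y, X); the "captured motion" fraction below is one plausible reading of the abstract's 60% figure, not the paper's exact definition.

```python
# Patient-specific mean motion model and a leave-one-out quality measure.
import numpy as np

def mean_motion_model(fields):
    """Average the motion fields of one patient over treatment weeks."""
    return np.mean(fields, axis=0)

def captured_motion(true_field, model_field, lung_mask):
    """Fraction of breathing motion explained by the model inside the lungs."""
    err = np.linalg.norm((true_field - model_field)[:, lung_mask], axis=0)
    mag = np.linalg.norm(true_field[:, lung_mask], axis=0)
    return 1.0 - err.sum() / mag.sum()
```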
An automated segmentation of lung lobes in thoracic CT images is of interest for various diagnostic purposes like the quantification of emphysema or the localization of tumors within the lung. Although the separating lung fissures are visible in modern multi-slice CT scanners, their contrast in the CT image often does not separate the lobes completely. This makes it impossible to build a reliable segmentation algorithm without additional information. Our approach uses general anatomical knowledge represented in a geometrical mesh model to construct a robust lobe segmentation, which gives reasonable estimates of lobe volumes even if the fissures are not visible at all. The paper describes the generation of the lung model mesh including lobes from an average volume model, its adaptation to individual patient data using a special fissure feature image, and a performance evaluation over a test data set showing an average segmentation accuracy of 1 to 3 mm.
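The abstract does not define the fissure feature image, so the following is only a stand-in: a common Hessian-based "plateness" measure that highlights thin, bright sheet-like structures such as fissures.

```python
# Hessian eigenvalue analysis: a bright plate has one strongly negative
# eigenvalue and two near zero; illustrative feature, not the paper's.
import numpy as np
from scipy import ndimage

def fissure_feature(ct, sigma=1.0):
    H = np.empty(ct.shape + (3, 3))
    for i, j in [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]:
        order = [0, 0, 0]
        order[i] += 1
        order[j] += 1
        d = ndimage.gaussian_filter(ct.astype(float), sigma, order=order)
        H[..., i, j] = H[..., j, i] = d          # second derivatives
    lam = np.linalg.eigvalsh(H)                  # ascending eigenvalues per voxel
    l1, l2 = lam[..., 0], lam[..., 1]
    return np.where(l1 < 0, np.abs(l1) - np.abs(l2), 0.0)
```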
Elastic registration of medical images is an active field of current research. Registration algorithms have to be validated in order to show that they fulfill the requirements of a particular clinical application. Furthermore, validation strategies compare the performance of different registration algorithms and can hence judge which algorithm is best suited for a target application. In the literature, validation strategies for rigid registration algorithms have been analyzed. For a known ground truth they assess the displacement error at a few landmarks, which is not sufficient for elastic transformations described by a huge number of parameters. Hence we consider the displacement error averaged over all pixels in the whole image or in a region-of-interest of clinical relevance. Using artificially but realistically deformed images of the application domain, we use this quality measure to analyze an elastic registration based on transformations defined on adaptive irregular grids for the following clinical applications: Magnetic Resonance (MR) images of freely moving joints for orthopedic investigations, thoracic Computed Tomography (CT) images for the detection of pulmonary embolisms, and transmission images as used for the attenuation correction and registration of independently acquired Positron Emission Tomography (PET) and CT images. The definition of a region-of-interest makes it possible to restrict the analysis of the registration accuracy to clinically relevant image areas. The behaviour of the displacement error as a function of the number of transformation control points and their placement can be used for identifying the best strategy for the initial placement of the control points.
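The proposed quality measure is simple to state in code: the magnitude of the difference between the known ground-truth deformation and the recovered one, averaged over all voxels or over a region of interest. A minimal sketch, with displacement fields assumed to have shape (3, Z, Y, X):

```python
# Mean displacement error between ground-truth and estimated deformations.
import numpy as np

def mean_displacement_error(u_true, u_est, roi=None):
    err = np.linalg.norm(u_true - u_est, axis=0)   # per-voxel error magnitude
    return err[roi].mean() if roi is not None else err.mean()
```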
KEYWORDS: Image registration, Magnetic resonance imaging, Mammography, Tissues, Data modeling, Data acquisition, Medical imaging, Blood, Optimization (mathematics), 3D image processing
Dynamic contrast enhanced (DCE) MRI mammography is currently receiving much interest in clinical research. It bears the potential to discriminate between benign and malignant lesions by analyzing the contrast uptake of the lesion. However, registration of the individual images of a contrast-uptake series is crucial in order to avoid motion artefacts in the uptake curves, which could affect the diagnosis. On the other hand, it is well known from the registration literature that a registration using a standard similarity measure (e.g. mean sum of squared differences, cross-correlation) may cause artefacts if contrast agent is taken up between the images to be registered. We therefore propose a registration based on an application-specific similarity measure that explicitly uses features of the contrast uptake. We report initial results using this registration method.
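The abstract does not define the application-specific measure, so the following is only an illustrative stand-in for the underlying idea: a sum-of-squared-differences similarity that excludes voxels whose intensity increase looks like genuine contrast uptake rather than misalignment. The threshold is a hypothetical parameter.

```python
# Uptake-tolerant SSD: strong positive enhancement is not penalized, so the
# registration is not driven to "undo" contrast uptake.
import numpy as np

def uptake_tolerant_ssd(pre, post, uptake_thresh=0.2):
    diff = post - pre
    # voxels with strong positive enhancement are likely true uptake; exclude
    weight = (diff < uptake_thresh * pre.max()).astype(float)
    return np.sum(weight * diff ** 2) / max(weight.sum(), 1.0)
```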
The purpose of this paper is to present an automated method for the extraction of the pulmonary vessel tree from multi-slice CT data. Furthermore, we investigate a method for the separation of pulmonary arteries from veins. The vessel tree extraction is performed by a seed-point-based front-propagation algorithm. This algorithm is based on a methodology similar to the bronchial tree segmentation and coronary artery tree extraction methods presented at earlier SPIE conferences. Our method for artery/vein separation is based upon the fact that the pulmonary artery tree accompanies the bronchial tree. For each extracted vessel segment, we evaluate a measure of "arterialness". This measure combines two components: on the one hand, a method for identifying candidate positions of a bronchus running in the vicinity of a given vessel, and on the other, a co-orientation measure for the vessel segment and the bronchus candidates. The latter component rewards vessels running parallel to a nearby bronchus. The spatial orientation of vessel segments and bronchi is estimated by applying the structure tensor to the local gray-value neighbourhood. In our experiments we used multi-slice CT datasets of the lung acquired with Philips IDT 16-slice and Philips Brilliance 40-slice scanners. It can be shown that the proposed measure reduces the number of pulmonary veins falsely included in the arterial tree.
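A sketch of the structure-tensor orientation estimation used for the co-orientation component: the dominant local direction of a tubular structure is the eigenvector belonging to the smallest eigenvalue of the smoothed outer product of the gray-value gradient. Smoothing scales are illustrative.

```python
# Local orientation from the 3D structure tensor.
import numpy as np
from scipy import ndimage

def local_orientation(img, point, sigma_grad=1.0, sigma_avg=2.0):
    grads = np.gradient(ndimage.gaussian_filter(img.astype(float), sigma_grad))
    J = np.empty(img.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = ndimage.gaussian_filter(grads[i] * grads[j], sigma_avg)
    w, v = np.linalg.eigh(J[point])   # eigen-decomposition at one voxel
    return v[:, 0]                    # eigenvector of the smallest eigenvalue:
                                      # points along the tubular structure

# co-orientation of a vessel segment and a bronchus candidate:
# abs(np.dot(dir_vessel, dir_bronchus)) close to 1 means parallel courses
```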
Computed Tomography Angiography (CTA) is an emerging modality for assessing cardiac anatomy. The delineation of the cardiac volume of interest (VOI) is a pre-processing step for subsequent visualization or image processing. It serves to suppress anatomic structures that are not in the primary focus of the cardiac application, such as the sternum, ribs, spinal column, descending aorta, and pulmonary vasculature. These structures obliterate standard visualizations such as direct volume renderings or maximum intensity projections. In addition, the outcome and performance of post-processing steps such as ventricle suppression, coronary artery segmentation, or the detection of the short and long axes of the heart can be improved. The structures that form the cardiac VOI (coronary arteries and veins, myocardium, ventricles, and atria) differ tremendously in appearance. In addition, there is no clear image feature associated with the contour (or rather cut-surface) distinguishing between the cardiac VOI and surrounding tissue, making the automatic delineation of the cardiac VOI a difficult task. In a first step, the presented approach locates the chest wall and descending aorta in all image slices, giving a rough estimate of the location of the heart. In a second step, a Fourier-based active contour approach delineates the border of the cardiac VOI slice by slice. The algorithm has been evaluated on 41 multi-slice CT datasets, including cases with coronary stents and venous and arterial bypasses. The typical processing time amounts to 5-10 s on a 1 GHz P3 PC.
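A Fourier-based active contour represents the closed boundary by a small number of Fourier coefficients, which keeps the contour smooth while its coefficients are optimized against the image. A minimal sketch of such a parameterization (the optimization itself and the paper's energy are not reproduced):

```python
# Closed 2D contour from truncated Fourier series coefficients.
import numpy as np

def fourier_contour(coeffs_x, coeffs_y, n_points=200):
    """coeffs_*[k] = (a_k, b_k) for harmonics k = 0..K."""
    t = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    x = sum(a * np.cos(k * t) + b * np.sin(k * t)
            for k, (a, b) in enumerate(coeffs_x))
    y = sum(a * np.cos(k * t) + b * np.sin(k * t)
            for k, (a, b) in enumerate(coeffs_y))
    return np.stack([x, y], axis=1)

# e.g. a circle of radius 50 around (120, 130) as initialization:
contour = fourier_contour([(120, 0), (50, 0)], [(130, 0), (0, 50)])
```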
Multislice CT angiography (MSCTA) is an emerging modality for assessing the coronary arteries. The use of MSCTA for coronary artery disease (CAD) quantification requires an assessment procedure for the coronary arteries that is automated as far as possible. We present an algorithm for the segmentation of the coronary tree with simultaneous extraction of the centerline and the tree structure. Our approach limits the required user interaction to the placement of one landmark in the left and right main coronary artery, respectively. The whole segmentation process takes about 15 s on a mid-sized PC (1 GHz), including a real-time visualization of the segmentation in progress.
The presented method combines a fast region expansion method (fast marching/front propagation) with heuristic reasoning. The spreading front is monitored for front splitting, enabling branch detection and simultaneous tree reconstruction of the segmented object. This allows for the individual treatment of tree branches with respect to, e.g., threshold settings, and for reasoning at the tree and sub-tree level. The approach can be applied quite generally to the segmentation of tree-like structures.
The segmentation results support efficient reporting by enabling the automatic generation of overview visualizations, guidance for virtual endoscopy, generation of curved MPRs along the vessels, and cross-sectional area graphs.
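A toy sketch of front propagation with front-splitting detection: the wave grows layer by layer inside a threshold mask, and whenever the current front decomposes into several connected components a branching event is recorded. This is grossly simplified compared to a fast-marching implementation and is not the paper's algorithm.

```python
# Layer-by-layer front propagation with branch detection (toy version).
import numpy as np
from scipy import ndimage

def propagate_tree(mask, seed):
    front = {tuple(seed)}
    visited = np.zeros(mask.shape, dtype=bool)
    visited[tuple(seed)] = True
    branches = []
    while front:
        # group the current front into connected components
        fimg = np.zeros(mask.shape, dtype=bool)
        for p in front:
            fimg[p] = True
        labels, n = ndimage.label(fimg)
        if n > 1:   # the front has split: record sub-front centroids
            branches.append([tuple(np.mean(np.argwhere(labels == k), axis=0))
                             for k in range(1, n + 1)])
        # advance the front by one voxel layer (6-connectivity)
        nxt = set()
        for p in front:
            for d in np.eye(3, dtype=int):
                for q in (tuple(np.array(p) + d), tuple(np.array(p) - d)):
                    if all(0 <= q[i] < mask.shape[i] for i in range(3)) \
                            and mask[q] and not visited[q]:
                        visited[q] = True
                        nxt.add(q)
        front = nxt
    return branches
```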
During the last couple of years, virtual endoscopic systems (VES) have emerged as standard tools and are nowadays close to being used in daily clinical practice. Such tools render hollow human structures, allowing a clinician to visualize their inside in an endoscopy-like paradigm. It is common practice to attach the camera of a virtual endoscope to the centerline of the structure of interest in order to facilitate navigation. This centerline has to be determined manually or automatically prior to an investigation. While there exist techniques that can straightforwardly handle simple tube-like structures (e.g. colon, aorta), structures like the tracheobronchial tree still represent a challenge due to their complex branching. In these cases it is necessary to determine all branching points within the tree, which, because of this complexity, is impractical to accomplish manually. This paper presents a simultaneous segmentation/skeletonization algorithm that extracts all major airway branches and large parts of the minor distal branches (up to 7th order) using a front-propagation approach. During the segmentation, the algorithm keeps track of the centerline of the segmented structure and detects all branching points. This in turn allows the full reconstruction of the tracheobronchial tree.
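For comparison with the simultaneous approach described above, branching points can also be recovered after the fact from a segmented airway mask: skeletonize the mask and flag skeleton voxels with three or more skeleton neighbors. A minimal sketch, not the paper's method:

```python
# Branch points of an airway tree via post-hoc skeletonization.
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def branch_points(airway_mask):
    skel = skeletonize(airway_mask)                # 1-voxel-wide centerlines
    kernel = np.ones((3, 3, 3), dtype=int)
    kernel[1, 1, 1] = 0
    neighbors = ndimage.convolve(skel.astype(int), kernel, mode="constant")
    return np.argwhere(skel & (neighbors >= 3))    # centerline junctions
```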
Gain processes in connection with exciton localization in wide-gap quantum wells are investigated theoretically. A simulation of uncorrelated composition fluctuations in (Zn,Cd)Se as well as (Ga,In)N QWs yields considerable densities of localization sites. The localization energies calculated for representative site ensembles show that up to two excitons can be localized at every site. The bi-exciton is even more strongly localized than the single exciton. A rate equation model is used for the occupation kinetics of the localized exciton/bi-exciton system. Taking further into account the fourfold spin degeneracy of the exciton state, it is shown that localized bi-excitons provide more gain, and at lower densities, than localized single excitons. These results are in agreement with the experimentally detected low-density gain regime in Zn0.8Cd0.2Se QWs. Conditions for extending this gain regime up to room temperature are presented.
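An illustrative sketch of a rate-equation occupation model of the kind described: localization sites that are empty, hold one exciton (X), or hold a bi-exciton (XX), with capture and radiative decay channels (XX decays radiatively into X). The rate constants and generation term are placeholders, not the paper's fitted values.

```python
# Toy occupation kinetics of the localized exciton/bi-exciton system.
import numpy as np
from scipy.integrate import solve_ivp

def rates(t, y, G=1.0, c_x=1.0, c_xx=0.5, r_x=0.1, r_xx=0.2):
    """y = [empty, X-occupied, XX-occupied] site fractions."""
    n0, nx, nxx = y
    dn0 = -c_x * G * n0 + r_x * nx               # capture vs. X decay
    dnx = c_x * G * n0 - c_xx * G * nx + r_xx * nxx - r_x * nx
    dnxx = c_xx * G * nx - r_xx * nxx            # XX formation vs. XX decay
    return [dn0, dnx, dnxx]

sol = solve_ivp(rates, (0.0, 50.0), [1.0, 0.0, 0.0], dense_output=True)
n0, nx, nxx = sol.y[:, -1]   # quasi-steady-state occupations
```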