Attenuation correction (AC) is important for accurate interpretation and quantitative analysis of SPECT myocardial perfusion imaging (MPI). Dedicated cardiac SPECT systems are of proven value in the evaluation and risk stratification of patients with known or suspected cardiovascular disease. However, most dedicated cardiac SPECT systems are standalone and lack a transmission imaging capability, such as computed tomography (CT), for generating attenuation maps for AC. To address this problem, we propose to apply a conditional generative adversarial network (cGAN) for generating attenuation-corrected SPECT images (SPECTGAN) directly from non-corrected SPECT images (SPECTNC) in the image domain as a one-step process, without requiring an additional intermediate step. The proposed network was trained and tested on 100 cardiac SPECT/CT datasets from a GE Discovery NM 570c SPECT/CT, collected retrospectively at Yale New Haven Hospital. The generated images were evaluated quantitatively through the normalized root mean square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), and statistically through joint histograms and error maps. In comparison to the reference CT-based correction (SPECTCTAC), NRMSEs were 0.2258±0.0777 and 0.1410±0.0768 (37.5% reduction of errors); PSNRs 31.7712±2.9965 and 36.3823±3.7424 (14.5% improvement in signal-to-noise ratio); and SSIMs 0.9877±0.0075 and 0.9949±0.0043 (0.7% improvement in structural similarity) for SPECTNC and SPECTGAN, respectively. This work demonstrates that conditional adversarial training can achieve accurate CT-less attenuation correction for SPECT MPI that is quantitatively comparable to CTAC. Standalone dedicated cardiac SPECT scanners can benefit from the proposed GAN to reduce attenuation artifacts efficiently.
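The evaluation metrics reported above can be computed in a few lines; a minimal sketch with NumPy, assuming NRMSE is normalized by the reference dynamic range and PSNR uses the reference peak (the abstract does not state the exact normalization conventions):

```python
import numpy as np

def nrmse(ref, img):
    # Normalized root mean square error; normalization by the reference
    # dynamic range is an assumption (mean- or peak-normalization also exist).
    return np.sqrt(np.mean((img - ref) ** 2)) / (ref.max() - ref.min())

def psnr(ref, img):
    # Peak signal-to-noise ratio in dB, with the peak taken from the reference.
    mse = np.mean((img - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```

SSIM involves local windowed statistics and is best taken from an established implementation rather than re-derived.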
KEYWORDS: Heart, Data modeling, Positron emission tomography, Reconstruction algorithms, Magnetic resonance imaging, Motion models, Data acquisition, Cardiovascular magnetic resonance imaging, 3D modeling, Mathematical modeling
In several nuclear cardiac imaging applications (SPECT and PET), images are formed by reconstructing tomographic data using an iterative reconstruction algorithm with corrections for the physical factors involved in the imaging detection process and for cardiac and respiratory motion. The physical factors are modeled as coefficients in the matrix of a system of linear equations and include attenuation, scatter, and spatially varying geometric response. Solving the tomographic problem amounts to inverting this system matrix, which requires an iterative reconstruction algorithm with a statistical model that best fits the data acquisition; the most appropriate model is based on a Poisson distribution. Using Bayes' theorem, an iterative reconstruction algorithm is designed to determine the maximum a posteriori estimate of the reconstructed image, subject to constraints, by maximizing the Bayesian likelihood function for the Poisson statistical model. The a priori distribution is formulated as the joint entropy (JE) measuring the similarity between the gated cardiac PET image and the cardiac MRI cine image, modeled as a finite element (FE) mechanical model. The developed algorithm shows the potential of using an FE mechanical model of the heart derived from a cardiac MRI cine scan to constrain solutions of gated cardiac PET images.
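The likelihood-driven core of such a Poisson reconstruction can be illustrated with the classic ML-EM update; this sketch omits the joint-entropy anatomical prior described above and shows only the maximum-likelihood part that a MAP algorithm builds on:

```python
import numpy as np

def mlem_update(x, A, y, n_iter=200):
    # ML-EM iterations for Poisson data y ~ Poisson(A @ x), with A the
    # system matrix (attenuation, scatter, and geometric response would be
    # folded into A). The JE prior from the abstract is intentionally omitted.
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, 1e-12)   # measured / estimated
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

For noise-free, consistent data the iterates converge to the true activity; the multiplicative form keeps the estimate nonnegative, matching the Poisson model's support.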
Systemic hypertension is a causative factor in left ventricular hypertrophy (LVH). This study is motivated by the potential to reverse or manage the dysfunction associated with structural remodeling of the myocardium in this pathology. Using diffusion tensor magnetic resonance imaging, we present an analysis of myocardial fiber and laminar sheet orientation in ex vivo hypertrophic (six spontaneously hypertensive, SHR) and normal (five Wistar-Kyoto, WKY) rat hearts using the covariance of the diffusion tensor. First, an atlas of normal cardiac microstructure was formed from the WKY b0 images. Then, the SHR and WKY b0 hearts were registered to the atlas. The resulting deformation fields were applied to the SHR and WKY tensor fields, followed by the preservation of principal direction (PPD) reorientation strategy. A mean tensor field was then formed from the registered WKY tensor images. Calculating the covariance of the registered tensor images about this mean for each heart, the hypertrophic myocardium exhibited significantly increased myocardial fiber derangement (p=0.017), with a mean dispersion of 38.7 deg, and an increased dispersion of the laminar sheet normal (p=0.030), 54.8 deg, compared with 34.8 deg and 51.8 deg, respectively, in the normal hearts. These results demonstrate significantly altered myocardial fiber and laminar sheet structure in rats with hypertensive LVH.
Factor analysis of dynamic structures (FADS) is a methodology for extracting time-activity curves (TACs) corresponding to different tissue types from noisy dynamic images. The challenges of FADS include long computation time and sensitivity to the initial guess, which can result in convergence to local minima far from the true solution. We propose a method of accelerating and stabilizing the application of FADS to sequences of dynamic PET images by adding a preliminary cluster analysis of the time-activity curves of individual voxels. We treat the temporal variation of individual voxel concentrations as a set of time series and use a partial clustering analysis to identify the voxel TAC types that are most functionally distinct from each other. These TACs provide a good initial guess of the temporal factors for subsequent FADS processing. Applying this approach to single slices of dynamic 11C-PIB images of the brain allows identification of the arterial input function and two different tissue TACs that are likely to correspond to the specific and non-specific tracer-binding tissue types. These results enable direct classification of tissues based on their pharmacokinetic properties in dynamic PET without relying on a compartment-based kinetic model, identifying a reference region, or using any external method of estimating the arterial input function, as some techniques require.
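The clustering-based initialization can be sketched as follows; plain k-means is used here as a simple stand-in for the partial clustering analysis named in the abstract, and the deterministic seeding is an illustrative assumption:

```python
import numpy as np

def cluster_init_factors(tacs, k, n_iter=20):
    # tacs: (n_voxels, n_frames) array of voxel time-activity curves.
    # Cluster the TACs and return the centroids, which serve as initial
    # temporal factors for the subsequent FADS iterations.
    centers = tacs[:k].copy()    # deterministic seeding, for illustration only
    labels = np.zeros(len(tacs), dtype=int)
    for _ in range(n_iter):
        # Distance of every TAC to every current centroid.
        d = np.linalg.norm(tacs[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = tacs[labels == j].mean(axis=0)
    return centers, labels
```

Starting FADS from such functionally distinct centroids, rather than a random guess, is what stabilizes the factor estimation against poor local minima.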
Motion is a serious artifact in cardiac nuclear imaging because the scanning operation takes a long time. Since reconstruction algorithms assume consistent, stationary data, the quality of the resulting image is affected by motion, sometimes significantly. Even after the adoption of the gold-standard MoCo(R) algorithm from Cedars-Sinai by most vendors, heart motion remains a significant challenge. Moreover, any serious quantitative analysis necessitates correction for motion artifacts. It is generally recognized that the human eye is a very sensitive tool for detecting motion. However, two factors prevent such manual correction: (1) it is costly in terms of a specialist's time, and (2) no tool for manual correction is currently available. Previously, at SPIE-MIC'11, we presented a simple tool (SinoCor) that allows sinograms to be corrected manually or automatically. SinoCor corrects sinograms containing inter-frame patient or respiratory motion using rigid-body dynamics. The software detects patient motion and estimates the body-motion vector using scanning-geometry parameters. SinoCor applies the appropriate geometric correction to all frames subsequent to the frame in which the movement occurred, in either manual or automated mode. For respiratory motion, it can automatically smooth small oscillatory (frame-wise local) movements. Lower-order image moments are used to represent each frame, and the required rigid-body compensation is computed from them. Our current focus is on enhancing SinoCor with the capability to automatically detect and compensate for intra-frame motion, which causes motion blur within the affected frame. Intra-frame movements are expected in both patient and respiratory motion. For controlled studies, we have also developed a motion simulator. A stable version of SinoCor is available under license from Lawrence Berkeley National Laboratory.
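The moment-based representation mentioned above can be illustrated with first-order moments: the shift between two frames is estimated from the displacement of their intensity centroids. This is a conceptual sketch, not SinoCor's actual implementation:

```python
import numpy as np

def centroid_shift(frame_ref, frame):
    # Estimate an in-plane translation between two frames from their
    # first-order image moments (centroids). Rotation and higher-order
    # moments, which SinoCor also uses, are not modeled here.
    def centroid(f):
        total = f.sum()
        ys, xs = np.indices(f.shape)
        return np.array([(ys * f).sum(), (xs * f).sum()]) / total
    return centroid(frame) - centroid(frame_ref)
```

For a rigid translation of a well-contained object, the centroid displacement equals the body-motion vector, which is why low-order moments suffice for inter-frame compensation.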
We present a simple method for correcting patient motion in SPECT. The targeted type of motion is a momentary shift in the patient's body position due to coughing, sneezing, or the need to shift weight during a long scan. When detected by the radiologist, such motion sometimes causes the scan data to be discarded and the scan to be repeated, imposing extra cost and unnecessary health risk on the patient. We propose a partial solution to this problem in the form of SinoCor, a graphical-user-interface-based software tool integrated with the sinogram-viewing software that allows instant correction of the simplest types of motion. When used during the initial check of the scan data, the tool allows technologists to interactively detect instances of motion and determine the motion parameters by achieving a consistent picture of the sinogram. Two types of motion are corrected by the algorithms: translational motion of the patient and small-angle rotation about in-plane axes. All motion corrections are performed at the sinogram level, after which the images may be reconstructed using the hospital's or organization's standard reconstruction software. SinoCor is platform independent, requires no modification of the acquisition protocol or other processing software, and requires minimal personnel training. In this article we describe the principal architecture of the SinoCor software and illustrate its performance using both phantom and patient scan data.
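The core of the sinogram-level translational correction can be sketched in a few lines: once the frame at which the shift occurred and its magnitude are known, the compensation is applied to that frame and all subsequent ones. The integer-bin shift here is an illustrative simplification (a real tool would interpolate sub-bin shifts):

```python
import numpy as np

def correct_from_frame(sinogram, motion_frame, shift_bins):
    # sinogram: (n_frames, n_bins) array of projection rows.
    # Apply an integer detector-bin shift to every frame from the one in
    # which the translation was detected onward, mirroring the sinogram-level
    # correction described above. Sub-bin interpolation is not modeled.
    out = sinogram.copy()
    out[motion_frame:] = np.roll(out[motion_frame:], shift_bins, axis=1)
    return out
```

After this correction the sinogram is consistent again and can be fed unchanged into the site's standard reconstruction software.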
This work studies the dual formulation of a penalized maximum likelihood reconstruction problem in x-ray CT. The primal objective function is a Poisson log-likelihood combined with a weighted cross-entropy penalty term. The dual formulation of the primal optimization problem is then derived and the optimization procedure outlined. The dual formulation better exploits the structure of the problem, which translates to faster convergence of iterative reconstruction algorithms. A gradient descent algorithm is implemented for solving the dual problem, and its performance is compared with the filtered back-projection algorithm and with the primal formulation optimized by using surrogate functions. The 3D XCAT phantom and an analytical x-ray CT simulator are used to generate noise-free and noisy CT projection data sets with monochromatic and polychromatic x-ray spectra. The reconstructed images from the dual formulation delineate the internal structures at early iterations better than those from the primal formulation using surrogate functions. However, the body contour is slower to converge in the dual than in the primal formulation. The dual formulation demonstrates a better noise-resolution tradeoff near the internal organs than the primal formulation. Since surrogate functions can in general provide a diagonal approximation of the Hessian matrix of the objective function, further convergence speed-up may be achieved by deriving the surrogate function of the dual objective function.
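The gradient machinery underlying such an approach can be illustrated on the primal transmission model, y_i ~ Poisson(b · exp(-[Ax]_i)); this sketch omits the cross-entropy penalty and the dual derivation, and shows only plain gradient descent on the negative Poisson log-likelihood:

```python
import numpy as np

def transmission_nll_grad_descent(A, y, b, step=5e-4, n_iter=500):
    # Gradient descent on the negative Poisson log-likelihood for a
    # transmission CT model y_i ~ Poisson(b * exp(-[A x]_i)), where A holds
    # intersection lengths and b is the blank-scan intensity. The penalty
    # term and dual formulation from the abstract are not reproduced here.
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        expected = b * np.exp(-A @ x)
        grad = A.T @ (y - expected)   # gradient of the negative log-likelihood
        x -= step * grad
    return x
```

The fixed step size stands in for the surrogate-based or dual updates discussed above, whose whole point is to adapt the step to the curvature of the objective.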
KEYWORDS: 3D modeling, Computed tomography, Heart, Arteries, Image segmentation, Data modeling, Ischemia, Medical imaging, Instrument modeling, 3D image processing
A realistic 3D coronary arterial tree (CAT) has been developed for the heart model of the computer generated 3D
XCAT phantom. The CAT allows generation of a realistic model of the location, size and shape of the associated
regional ischemia or infarction for a given coronary arterial stenosis or occlusion. This in turn can be used in medical
imaging applications. An iterative rule-based generation method that systematically utilized anatomic, morphometric
and physiologic knowledge was used to construct a detailed realistic 3D model of the CAT in the XCAT phantom. The
anatomic details of the myocardial surfaces and large coronary arterial vessel segments were first extracted from cardiac
CT images of a normal patient with right coronary dominance. Morphometric information derived from porcine data
from the literature, after being adjusted by scaling laws, provided statistically nominal diameters, lengths, and
connectivity probabilities of the generated coronary arterial segments in modeling the CAT of an average human. The
largest six orders of the CAT were generated based on the physiologic constraints defined in the coronary generation
algorithms. When combined with the heart model of the XCAT phantom, the realistic CAT provides a unique
simulation tool for the generation of realistic regional myocardial ischemia and infarction. Together with the existing
heart model, the new CAT provides an important improvement over the current 3D XCAT phantom in providing a more
realistic model of the normal heart and the potential to simulate myocardial diseases in evaluation of medical imaging
instrumentation, image reconstruction, and data processing methods.
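Rule-based generation of a vascular tree typically relates parent and child vessel diameters through a power-law bifurcation rule; a Murray-type law is sketched below as an illustrative assumption (the CAT's actual morphometry is drawn from adjusted porcine data, and its exponent is not stated in the abstract):

```python
def parent_diameter(child_diameters, exponent=3.0):
    # Murray-type bifurcation law: d_parent^k = sum over children of d_child^k.
    # Both the functional form and the default exponent k=3 are assumptions
    # for illustration, not the CAT generation rule itself.
    return sum(d ** exponent for d in child_diameters) ** (1.0 / exponent)
```

Such scaling rules, combined with connectivity probabilities, let an iterative generator assign statistically nominal diameters order by order down the tree.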
Enhancements are described for an approach that unifies edge preserving smoothing with segmentation of time sequences of volumetric images, based on differential edge detection at multiple spatial and temporal scales. Potential applications of these 4-D methods include segmentation of respiratory gated positron emission tomography (PET) transmission images to improve accuracy of attenuation correction for imaging heart and lung lesions, and segmentation of dynamic cardiac single photon emission computed tomography (SPECT) images to facilitate unbiased estimation of time-activity curves and kinetic parameters for left ventricular volumes of interest. Improved segmentation of lung surfaces in simulated respiratory gated cardiac PET transmission images is achieved with a 4-D edge detection operator composed of edge preserving 1-D operators applied in various spatial and temporal directions. Smoothing along the axis of a 1-D operator is driven by structure separation seen in the scale-space fingerprint, rather than by image contrast. Spurious noise structures are reduced with use of small-scale isotropic smoothing in directions transverse to the 1-D operator axis. Analytic expressions are obtained for directional derivatives of the smoothed, edge preserved image, and the expressions are used to compose a 4-D operator that detects edges as zero-crossings in the second derivative in the direction of the image intensity gradient. Additional improvement in segmentation is anticipated with use of multiscale transversely isotropic smoothing and a novel interpolation method that improves the behavior of the directional derivatives. The interpolation method is demonstrated on a simulated 1-D edge and incorporation of the method into the 4-D algorithm is described.
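The zero-crossing criterion at the heart of the 4-D operator can be shown in its 1-D analogue: edges are detected where the second derivative of the smoothed signal changes sign. Plain Gaussian smoothing is used here in place of the edge preserving, scale-space-driven smoothing described above:

```python
import numpy as np

def edge_zero_crossings(signal, sigma=2.0):
    # Detect edges in a 1-D signal as zero-crossings of the second derivative
    # of a Gaussian-smoothed copy (the 1-D analogue of the 4-D operator;
    # the smoothing here is plain Gaussian, not edge preserving).
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    # Second derivative of a Gaussian, used directly as a convolution kernel.
    g2 = (t**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-t**2 / (2 * sigma**2))
    d2 = np.convolve(signal, g2, mode="same")
    # Indices i where the sign of d2 changes between samples i and i+1.
    return np.where(np.diff(np.sign(d2)) != 0)[0]
```

In the full 4-D method the same test is applied to the second directional derivative along the image intensity gradient rather than along a fixed axis.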
All existing exact cone-beam reconstruction algorithms for the so-called long-object problem use equi-spaced rather than equi-angular sampling. However, cylindrical detectors (equi-angular sampling in xy and equi-spaced sampling in z) have the advantage of a compact design. Therefore, as a step toward the long-object problem with equi-angular sampling, the purpose of this study is to develop a cone-beam reconstruction algorithm using equi-angular sampling for the short-object problem. A novel implementation of Grangeat's algorithm using equi-angular sampling has been developed for the short-object problem with and without detector truncation. First, both the cone-beam projection g_Ψ(θ, φ) and the first derivative of the plane integral (3D Radon transform) p_Ψ(θ, φ) are described using spherical harmonics with equi-angular sampling. Then, using Grangeat's formula, the relationship between the spherical-harmonic coefficients of g_Ψ(θ, φ) and p_Ψ(θ, φ) is found. Finally, a method has been developed to obtain p_Ψ(θ, φ) from cone-beam projection data in which the object is partially scanned. Images are reconstructed using 3D Radon backprojection with rebinning. Computer simulations were performed to verify this approach: isolated (axially bounded) objects were scanned with both circular and helical orbits. When the orbit of the cone vertex does not satisfy Tuy's data-sufficiency condition, strong oblique shadows and blurring in the axial direction appear in the reconstructed coronal images. On the other hand, when the trajectory satisfies Tuy's data-sufficiency condition, the proposed algorithm provides an exact reconstruction. In conclusion, a novel implementation of Grangeat's algorithm for cone-beam image reconstruction using equi-angular sampling has been developed.
One of the most commonly performed imaging procedures in nuclear medicine is the lung scan for suspected pulmonary embolism. The purpose of this research was to develop an expert system that interprets lung scans and gives a probability of pulmonary embolism. Three standard ventilation and eight standard perfusion images are first outlined manually. Then the images are normalized. Because lung size varies from patient to patient, each image undergoes a two-dimensional stretch onto a standard-size mask. To determine the presence of regional defects in ventilation or perfusion, the images are then compared on a pixel-by-pixel basis with a normal database. This database consists of 21 normal studies that represent the variation in activity between subjects. Any pixel that falls more than 2.2 standard deviations below the normal database is flagged as possibly abnormal. To reduce statistical fluctuations, a clustering criterion is applied: a pixel must have at least two contiguous neighbors that are abnormal to be flagged abnormal.
KEYWORDS: Fluctuations and noise, Reconstruction algorithms, Sensors, Detection and tracking algorithms, Single photon emission computed tomography, Image processing, Signal attenuation, Image quality, Convolution, Data conversion
The purpose of this paper is to present a fan-beam short-scan reconstruction technique for non-circular detector orbits. It applies a smoothing window to the projection data, and then a modified filtered backprojection is performed. The technique is exact and the smoothing window is orbit independent; however, it requires more data than current circular-orbit short-scan algorithms.
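The classic instance of such a short-scan smoothing window is Parker's weighting for a circular fan-beam orbit, shown below as a point of reference; whether the paper's orbit-independent window reduces to it is not stated in the abstract:

```python
import numpy as np

def parker_weight(beta, gamma, delta):
    # Parker's smoothing window for a fan-beam short scan over
    # beta in [0, pi + 2*delta], with fan angle gamma in [-delta, delta].
    # Redundantly measured ray pairs (beta, gamma) and
    # (beta + pi + 2*gamma, -gamma) receive weights summing to one.
    if 0.0 <= beta < 2.0 * (delta - gamma):
        return np.sin(np.pi / 4.0 * beta / (delta - gamma)) ** 2
    if beta < np.pi - 2.0 * gamma:
        return 1.0
    if beta <= np.pi + 2.0 * delta:
        return np.sin(np.pi / 4.0 * (np.pi + 2.0 * delta - beta)
                      / (delta + gamma)) ** 2
    return 0.0
```

The sin² transitions taper the weights smoothly to zero at the ends of the scan, which avoids the streaks that a hard cutoff of the redundant data would produce.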
The attenuated Radon transform mathematically represents the measured projections in single photon emission computed tomography (SPECT) for an ideal detector with a delta geometric response function and no detected scattered photons. As a special case of the attenuated Radon transform, the exponential Radon transform is defined for a constant attenuator by modifying the measured projections through a transformation which places the detector at the center of rotation. Several papers have presented analytical spectral decompositions of the Radon transform; however, no analytical decomposition of the exponential or the attenuated Radon transform has been derived. Here an eigenanalysis of the exponential Radon transform is compared with that of the Radon transform using the Galerkin approximation to estimate the spectral decomposition. The condition number of the spectrum increases with increased attenuation coefficient which correlates with the increase in statistical error propagation seen in clinical images obtained with low energy radionuclides.
The purpose of this paper is to present a new cone-beam short-scan reconstruction technique for circular and non-circular detector orbits. The reconstruction is performed by a filtered backprojection method. The short-scan reconstruction technique is first investigated for fan-beam circular orbits and is then extended to fan-beam non-circular orbits. Finally, a straightforward approach is applied to cone-beam geometries and a modified Feldkamp algorithm is used. Since the projection data are incomplete, cone-beam reconstructions are only approximations.