KEYWORDS: Denoising, Education and training, 3D modeling, Single photon emission computed tomography, 3D image processing, Perfusion imaging, Network architectures, 3D acquisition, Mathematical optimization, Image processing
The purpose of this research is to address the critical challenge of improving the detectability of small perfusion defects in deep learning (DL) denoising for low-dose Myocardial Perfusion Imaging (MPI) with Single-Photon Emission Computed Tomography (SPECT). By developing a 3D convolutional auto-encoder (CAE) incorporating an edge-preservation mechanism, the study aims to mitigate the blurring effects associated with DL-based denoising methods. The CAE is optimized to enhance noise reduction on low-dose SPECT-MPI scans while maintaining the integrity of image-edge features, which are vital for preserving subtle myocardial perfusion defects after denoising.
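The abstract does not spell out the edge-preservation mechanism, but one common way to realize such a term is to add a gradient-matching penalty to the training loss. A minimal PyTorch sketch under that assumption (the `edge_weight` value and the finite-difference form are illustrative, not the authors' design):

```python
import torch
import torch.nn.functional as F

def edge_preserving_loss(denoised, target, edge_weight=0.1):
    """MSE plus a penalty on mismatched spatial gradients.

    denoised, target: 5-D tensors (batch, channel, depth, height, width).
    """
    mse = F.mse_loss(denoised, target)
    grad_terms = []
    for dim in (2, 3, 4):                 # the three spatial axes
        gd = torch.diff(denoised, dim=dim)  # finite-difference gradient
        gt = torch.diff(target, dim=dim)
        grad_terms.append(F.l1_loss(gd, gt))
    return mse + edge_weight * sum(grad_terms)
```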
Ongoing developments in the field of molecular imaging have increased the need for gamma-ray detectors with better spatial resolution, while maintaining a large detection area. One approach to improve spatial resolution is to utilize smaller light sensors for finer sampling of scintillation light distribution. However, the number of required sensors per camera must increase significantly, which in turn increases the complexity of the imaging system. Examples of challenges that arise are the analog-to-digital conversion of large numbers of channels, and a bottleneck effect that results from transferring large amounts of raw list-mode data to an acquisition computer. Here we present the design of a read-out electronics system that addresses these challenges. The read-out system, which is designed for a 10” × 10” SiPM-based scintillation gamma-ray camera, can process up to 162 light-sensor signals per event. This is achieved by implementing 1-bit and non-uniform 2-bit sigma-delta modulation analog-to-digital conversion, and an on-board processing system with a large number of input/output user pins and relatively high processing power. The processor is a system-on-a-module that also has SDRAM, which allows us to buffer raw list-mode data on board. The bottleneck effect is avoided by buffering event data on the camera module, and only transferring it when the main acquisition computer requests it. This design can be adapted for other crystal/sensor configurations, and can be scaled for a different number of channels.
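As an illustration of the 1-bit sigma-delta principle used for the ADC stage, here is a minimal first-order modulator in Python. This is a behavioral sketch only; the actual front end is implemented in hardware, and the non-uniform 2-bit variant is not modeled:

```python
import numpy as np

def sigma_delta_1bit(signal):
    """First-order 1-bit sigma-delta modulation of a sampled signal
    (values assumed normalized to [-1, 1])."""
    integrator = 0.0
    feedback = 0.0
    bits = np.empty(len(signal), dtype=np.int8)
    for i, x in enumerate(signal):
        integrator += x - feedback          # accumulate quantization error
        bit = 1 if integrator >= 0.0 else -1
        bits[i] = bit
        feedback = float(bit)               # 1-bit DAC feedback
    return bits

# Demo: a slow sinusoid, heavily oversampled; a moving-average low-pass
# filter over the bitstream recovers the input to good approximation.
t = np.linspace(0, 1, 10000)
x = 0.5 * np.sin(2 * np.pi * 5 * t)
bits = sigma_delta_1bit(x)
recovered = np.convolve(bits, np.ones(200) / 200, mode="same")
```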
Single-photon emission computed tomography (SPECT) performed with a pinhole collimator often suffers from parallax error due to depth-of-interaction uncertainty. One possible way to reduce the parallax error for a new generation of SPECT pinhole cameras would be to incorporate fiber optics to control the spread of light and improve 3D position estimation. In this work, we have developed a Monte Carlo simulation for an SiPM-based modular gamma camera that incorporates a fiber-optic plate as a light guide. We have created a custom photon transport code written in Swift, and we perform the computationally taxing components on a GPU using Metal. This code includes refraction according to Snell's law as well as reflection according to Fresnel's laws at material boundaries. The plate is modeled as a hexagonally-packed array of individual fibers. We also include the scintillation statistics of NaI(Tl) and the detection efficiency of the silicon photomultipliers. We use the simulation code to create mean-detector-response functions (MDRFs) from which Fisher information on event positioning can be assessed. We compare planar detectors with different light guides to determine the effects of the fiber optics. We model three geometries: one that only uses a monolithic light guide, one that only has a fiber-optic plate, and one that has a monolithic light guide and a fiber-optic plate in combination. The spatial resolutions are compared by using Fisher Information Matrices to calculate the Cramér-Rao Lower Bounds on position estimate variances.
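The boundary physics named here (Snell refraction, Fresnel reflectance) condenses to a few lines. The paper's code is written in Swift/Metal; this NumPy version is only a sketch of the same calculation, under the convention that the surface normal points against the incident direction:

```python
import numpy as np

def refract_and_fresnel(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    against d), going from index n1 into n2.  Returns (refracted_dir,
    reflectance); total internal reflection returns (None, 1.0)."""
    cos_i = -np.dot(d, n)
    eta = n1 / n2
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:                      # total internal reflection
        return None, 1.0
    cos_t = np.sqrt(1.0 - sin2_t)
    t = eta * d + (eta * cos_i - cos_t) * n   # Snell's law, vector form
    # Fresnel equations, averaged over polarizations (unpolarized light)
    rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
    rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
    return t, 0.5 * (rs + rp)
```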
Accurate system response modeling has been shown to be essential to SPECT image reconstruction, with its use leading to overall improvement of image quality. The aim of this work was to investigate, using an XCAT brain perfusion phantom, the imaging performance of two modeling strategies: one based on analytic techniques and the other based on GATE Monte-Carlo simulation. In addition, an efficient forced detection approach to improve the overall simulation efficiency was implemented and its performance was evaluated. We demonstrated that accurate modeling of the system matrix generated by Monte-Carlo simulation for iterative reconstruction leads to superior performance compared to analytic modeling in the case of clinical 123I brain imaging. It was also shown that the use of the forced detection approach provided a quantitative and qualitative enhancement of the reconstruction.
KEYWORDS: Monte Carlo methods, Systems modeling, Single photon emission computed tomography, Reconstruction algorithms, Data acquisition, Data modeling, Computer simulations
We introduce a generic analytic simulation and image reconstruction software platform for multi-pinhole (MPH) SPECT systems. The platform is capable of modeling common or sophisticated MPH designs as well as complex data acquisition schemes. Graphics processing unit (GPU) acceleration was utilized to achieve high-performance computing. Herein, we describe the software platform and provide verification studies of the simulation and image reconstruction software.
We report our investigation of system designs and 3D reconstruction for a dedicated brain-imaging SPECT system using multiple square, or square and hexagonal, detector modules. The system employs shuttering to vary which of multiple pinhole apertures are enabled to pass photons through to irradiate the detectors. Both multiplexed and non-multiplexed irradiation by the pinholes are investigated. Sampling is assessed by simulated imaging of a uniform activity concentration in a spherical tub filling the VOI, and of a tailored Defrise phantom consisting of a series of activity-containing slabs aligned axially. Potential image quality for clinical imaging is assessed through simulated imaging of an XCAT brain phantom with an activity distribution simulating perfusion imaging.
Millimeter-wave technologies for the automotive industry are driving inexpensive source/receiver hardware solutions for a wide variety of applications. In order to accurately assess signature characteristics of various scenes, we tested the appropriateness of using an artificial torso in controlled environments and compared the results to data from live subjects. High-range resolution (HRR) backscatter Radar Cross Section (RCS) data from targets and in-scene calibration objects were obtained using a 75GHz transceiver with 8GHz bandwidth. Data was collected for both the artificial torso and live subjects at varying aspects in controlled environments – this included studying the RCS response at different illumination angles while calibrating the response using in-scene calibration targets. Comparing the HRR profiles has allowed UML/UMMS researchers to accurately assess and demonstrate the utilization of artificial constructs in scenes for testing the system response characteristics.
Transmission through 11 garments composed of different materials and thicknesses was measured under different conditions. The setup consisted of a 100 GHz camera system which used an IMPATT diode (66 mW power output) as the source, and a 32×32 image sensor array (1.5 × 1.5 mm pixels, 1 nW/√Hz noise-equivalent power) focused with a PTFE lens (50 mm focal length). The camera system was configured for reflection imaging by placing the source emitter and imaging array at an off-axis angle and focused on a large flat mirrored surface. To simulate reflection of the emitted signal off human skin after transmission through the garments, we placed the garments over the mirrored surface. We then calculated the transmission loss, in terms of signal strength (amplitude), as the ratio of the recorded images with and without the garments. The materials and make-up of the garments were recorded, such as colors, accents, and thickness. To increase the realism of the data, we added several conditions for each garment transmission recording that included overlapping wrinkles and multiple garment layers. We were able to confirm transmission results reported from other research groups, but found that variations such as wrinkles and multiple layers can change the transmission ratios significantly.
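The transmission-loss computation described is a simple per-pixel ratio; a sketch (the array names and the illuminated-region mask are illustrative assumptions):

```python
import numpy as np

def transmission(img_with_garment, img_without_garment, mask, eps=1e-12):
    """Amplitude transmission: ratio of the recorded images with and
    without the garment, averaged over the illuminated region."""
    ratio = img_with_garment / (img_without_garment + eps)
    return ratio[mask].mean()
```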
We introduce a new approach for designing deep learning algorithms for computed tomography applications. Rather than training generically-structured neural network architectures to equivalently perform imaging tasks, we show how to leverage classical iterative-reconstruction algorithms such as Newton-Raphson and expectation-maximization (EM) to bootstrap network performance to a good initialization point, with a well-understood baseline of performance. Specifically, we demonstrate a natural and systematic way to design these networks for both transmission-mode x-ray computed tomography (XRCT) and emission-mode single-photon emission computed tomography (SPECT), highlighting that our method is capable of preserving many of the nice properties, such as convergence and understandability, that are featured in classical approaches. The key contribution of this work is a formulation of the reconstruction task that enables data-driven improvements in image clarity and artifact reduction without sacrificing understandability. In this early work, we evaluate our method on a number of synthetic phantoms, highlighting some of the benefits and difficulties of this machine-learning approach.
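One way to read "leveraging classical iterative-reconstruction algorithms to bootstrap network performance" is to unroll the EM update as network layers and interleave small learned corrections. The sketch below is a hypothetical PyTorch rendering of that idea for a dense system matrix `A`, not the authors' architecture:

```python
import torch
import torch.nn as nn

class UnrolledMLEM(nn.Module):
    """Sketch: MLEM iterations unrolled as layers, each followed by a
    small learned residual correction (hypothetical architecture)."""

    def __init__(self, A, n_iters=10):
        super().__init__()
        self.A = A                                    # (bins, pixels)
        self.n_iters = n_iters
        self.sens = (A.t() @ torch.ones(A.shape[0])).clamp(min=1e-8)
        n = A.shape[1]
        self.refine = nn.ModuleList([
            nn.Sequential(nn.Linear(n, n), nn.ReLU(), nn.Linear(n, n))
            for _ in range(n_iters)])

    def forward(self, y):
        x = torch.ones(self.A.shape[1])
        for k in range(self.n_iters):
            proj = (self.A @ x).clamp(min=1e-8)
            x = x / self.sens * (self.A.t() @ (y / proj))  # classical EM update
            x = (x + self.refine[k](x)).clamp(min=0.0)     # learned correction
        return x
```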
The diversity in the patient population necessitates a more refined dose reduction approach in cardiac perfusion imaging. We have recently formulated a strategy to better calculate individual and personalized injected doses using the body mass index. The purpose of this study is to present a practical method to evaluate the efficacy of personalizing the injected dose employing the polar map methodology. Two hundred and fifty-two normally-read patient studies were used either to determine the personalized dose or to test the dose reduction strategy. Fifty of the test patient studies were altered by inserting perfusion defects in the LV wall. The original or full dose as well as the personalized dose data were reconstructed using OSEM (ordered-subsets expectation-maximization) with attenuation, scatter, and spatial resolution compensation. The ROC results show that the personalized dose strategy does not adversely affect the detection of perfusion defects.
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but also that precise timing can be employed to determine only the necessary frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
Compressed sensing (CS) [1] is a novel sensing (acquisition) paradigm that applies to discrete-to-discrete system models and asserts exact recovery of a sparse signal from far fewer measurements than the number of unknowns [1, 2]. Successful applications of CS may be found in MRI [3, 4] and optical imaging [5]. Sparse reconstruction methods exploiting CS principles have been investigated for CT [6-8] to reduce radiation dose, and to gain imaging speed and image quality in optical imaging [9]. In this work the objective is to investigate the applicability of compressed sensing principles for a faster brain imaging protocol on a hybrid collimator SPECT system. As a proof-of-principle we study the null space of the fan-beam collimator component of our system with regard to a particular imaging object. We illustrate the impact of object sparsity on the null space using pixel and Haar wavelet basis functions to represent a piecewise smooth phantom chosen as our object of interest.
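As a toy illustration of studying a system's null space, the sketch below applies an SVD to a random underdetermined matrix standing in for the fan-beam collimator component and measures how much of a piecewise-constant object falls in the null space (the paper's Haar-wavelet analysis would replace the pixel basis used here):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((64, 256))        # fewer measurements than unknowns

U, s, Vt = np.linalg.svd(H, full_matrices=True)
rank = np.sum(s > s.max() * 1e-10)
null_basis = Vt[rank:]                    # rows span the null space of H

f = np.zeros(256)
f[100:110] = 1.0                          # piecewise-constant "object"
f_null = null_basis.T @ (null_basis @ f)  # component invisible to H
print("fraction of object energy in null space:",
      np.sum(f_null**2) / np.sum(f**2))
```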
In SPECT imaging, motion from respiration and body motion can reduce image quality by introducing motion-related artifacts. A minimally-invasive way to track patient motion is to attach external markers to the patient's body and record their locations throughout the imaging study. If a patient exhibits multiple movements simultaneously, such as respiration and body movement, each marker's location data will contain a mixture of these motions. Decomposing this complex compound motion into separate simplified motions has the benefit of allowing a more robust motion correction tailored to each specific type of motion. Most motion tracking and correction techniques target a single type of motion and either ignore compound motion or treat it as noise. A few methods that account for compound motion exist, but they fail to disambiguate superposition in the compound motion (i.e., inspiration in addition to body movement in the positive anterior/posterior direction). We propose a new method for decomposing complex compound patient motion using an unsupervised learning technique called Independent Component Analysis (ICA). Our method can automatically detect and separate different motions while preserving nuanced features of the motion, without the drawbacks of previous methods. Our main contributions are the development of a method for addressing multiple compound motions, the novel use of ICA in detecting and separating mixed independent motions, and the generation of motion transforms with 12 degrees of freedom (DOF) to account for twisting and shearing. We show that our method works with clinical datasets and can be employed to improve motion correction in single photon emission computed tomography (SPECT) images.
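A minimal sketch of the ICA idea on synthetic marker traces, using scikit-learn's FastICA (the mixing matrix, sampling rate, and motion waveforms are illustrative assumptions, not the clinical data):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 3000)                 # 60 s at 50 Hz
resp = np.sin(2 * np.pi * 0.25 * t)          # ~15 breaths per minute
body = np.where(t > 30, 1.0, 0.0)            # abrupt body shift at t = 30 s

# Each of three markers records a different mixture of the two motions.
mixing = np.array([[1.0, 0.3], [0.8, 1.2], [0.5, 0.9]])   # (markers, sources)
markers = (np.stack([resp, body], axis=1) @ mixing.T
           + 0.05 * rng.standard_normal((3000, 3)))

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(markers)         # estimated independent motions
```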
Novel methods of reconstructing the tracer distribution in myocardial perfusion images are being considered for low-count and sparse-sampling scenarios. Examples of low-count scenarios are when the amount of radioisotope administered or the acquisition time is lowered, and gated studies where individual gates are reconstructed. Examples of sparse angular sampling scenarios are patient motion correction in traditional SPECT, where few angles are acquired at any given pose, and multi-pinhole SPECT, where the geometry is sparse and truncated by design. The reconstruction method is based on the assumption that the tracer distribution is sparse in the transform domain, which is enforced by a sparsity-promoting penalty on the transform coefficients. In this work we investigated the curvelet transform as the sparse basis for myocardial perfusion SPECT. The objective is to determine if myocardial perfusion images can be efficiently represented in this transform domain, which can then be exploited in a penalized maximum likelihood (PML) reconstruction scheme for improving defect detection in low-count/sparse-sampling scenarios. The performance of this algorithm is compared to standard OSEM with 3D Gaussian post-filtering using bias-variance plots and numerical observer studies. The channelized non-prewhitening observer (CNPW) was used for the defect detection task in a "signal-known-statistically" LROC study. Preliminary investigations indicate better bias-variance characteristics and superior CNPW performance with the proposed curvelet basis. However, further assessment using more defect locations and human observer evaluation is needed to establish clinical significance.
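The structure of such a sparsity-penalized reconstruction can be sketched with iterative soft-thresholding. For self-containedness, this stand-in uses a least-squares data term and an orthonormal wavelet (PyWavelets) in place of the Poisson likelihood and curvelet transform used in the paper; it assumes a dyadic image size and a dense system matrix `A`:

```python
import numpy as np
import pywt

def ista(y, A, shape, lam=0.01, n_iters=100, wavelet="db4"):
    """Minimize (1/2)||A x - y||^2 + lam ||W x||_1 by iterative
    soft-thresholding of the wavelet coefficients of x."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L for the data term
    x = np.zeros(shape)
    for _ in range(n_iters):
        # Gradient step on the data term.
        x = x - step * (A.T @ (A @ x.ravel() - y)).reshape(shape)
        # Soft-threshold detail coefficients; keep the approximation band.
        coeffs = pywt.wavedec2(x, wavelet)
        coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(d, lam * step, mode="soft") for d in lvl)
            for lvl in coeffs[1:]]
        x = pywt.waverec2(coeffs, wavelet)
    return x
```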
In SPECT imaging, motion from patient respiration and body motion can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT Phantoms, from which one calculates the transformation and deformation between MRI study and patient model. Both manual and semi-automated methods of XCAT Phantom generation require expertise in human anatomy, with the semi-automated method requiring up to 30 minutes and the manual method requiring up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBS to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBS to a voxel-based representation that is amenable to automatic non-rigid registration. Then registration is used to transform and deform the NURBS to match the anatomy in the MR image. We show that our automated method generates XCAT Phantoms more robustly and significantly faster than the previous semi-automated method.
KEYWORDS: Magnetic resonance imaging, Image segmentation, Motion models, Data modeling, Single photon emission computed tomography, Image registration, Heart, 3D modeling, Affine motion model, Algorithm development
In SPECT imaging, patient respiratory and body motion can cause artifacts that degrade image quality. Developing and evaluating motion correction algorithms are facilitated by simulation studies where a numerical phantom and its motion are precisely known, from which image data can be produced. Previous techniques to test motion correction methods generated XCAT phantoms modeled from MRI studies and motion tracking, but required manually segmenting the major structures within the whole upper torso, which can take 8 hours to perform. Additionally, segmenting two-dimensional MRI slices and interpolating them into three-dimensional shapes can lead to appreciable interpolation artifacts, as well as requiring expert knowledge of human anatomy in order to identify the regions to be segmented within each slice. We propose a new method that mitigates the long manual segmentation times for segmenting the upper torso. Our interactive method requires that a user provide only an approximate alignment of the base anatomical shapes from the XCAT model with the MRI data. Organ boundaries from aligned XCAT models are warped with displacement fields generated by registering a baseline MR image to MR images acquired during pre-determined motions, which amounts to automated segmentation of each organ of interest. We show that the quality of segmentation with our method equals that of expert manual segmentation, does not require a user who is an expert in anatomy, and can be completed in minutes rather than hours. In some instances, due to interpolation artifacts, our method can generate higher quality models than manual segmentation.
Polar maps have been used to assist clinicians in diagnosing coronary artery disease (CAD) in single photon emission computed tomography (SPECT) myocardial perfusion imaging. Herein, we investigate the optimization of collimator design for perfusion-defect detection in SPECT imaging when reconstruction includes modeling of the collimator. The optimization employs an LROC clinical model observer (CMO), which emulates the clinical task of polar-map detection of CAD. By utilizing a CMO, which better mimics the clinical perfusion-defect detection task than previous SKE-based observers, our objective is to optimize collimator design for SPECT myocardial perfusion imaging when reconstruction includes compensation for collimator spatial resolution. Comparison of lesion-detection accuracy will then be employed to determine whether a lower-spatial-resolution, hence higher-sensitivity, collimator design than currently recommended could be utilized to reduce the radiation dose to the patient, the imaging time, or a combination of both. As the first step in this investigation, we report herein on the optimization of the three-dimensional (3D) post-reconstruction Gaussian filtering of, and the number of iterations used to reconstruct, the SPECT slices from projections acquired with a low-energy general-purpose (LEGP) collimator. The optimization was in terms of detection accuracy as determined by our CMO and four human observers. Both the human observers and all four CMO variants agreed that the optimal post-filtering was with a sigma of the Gaussian in the range of 0.75 to 1.0 pixels. In terms of the number of iterations, the human observers showed a preference for 5 iterations; however, only one of the variants of the CMO agreed with this selection. The others showed a preference for 15 iterations. We shall thus proceed to optimize the reconstruction parameters for even higher-sensitivity collimators using this CMO, and then do the final comparison between collimators using their individually optimized parameters with human observers and three times the test images, to reduce the statistical variation seen in our present results.
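For reference, the 3D Gaussian post-filter found optimal above corresponds to a single SciPy call with sigma expressed in pixels (the toy volume here is only to make the snippet runnable):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def postfilter(volume, sigma_pixels=0.75):
    """3D post-reconstruction Gaussian filter; sigma in pixels,
    with 0.75-1.0 the optimal range reported above."""
    return gaussian_filter(volume, sigma=sigma_pixels)

smoothed = postfilter(np.random.rand(64, 64, 64))
```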
KEYWORDS: Motion estimation, 3D modeling, Heart, Motion models, Single photon emission computed tomography, Monte Carlo methods, Signal attenuation, Image quality, Error analysis, Motion measurement
In myocardial perfusion SPECT imaging patient motion during acquisition causes severe artifacts in about 5% of studies.
Motion estimation strategies commonly used are a) data-driven, where the motion may be determined by registration and
checking consistency with the SPECT acquisition data, and b) external surrogate-based, where the motion is obtained
from a dedicated motion-tracking system. In this paper a data-driven strategy similar to a 2D-3D registration scheme
with multiple views is investigated, using a partially reconstructed heart for the 3D model. The partially-reconstructed
heart has inaccuracies due to limited angle artifacts resulting from using only a part of the SPECT projections acquired
while the patient maintained the same pose. The goal of this paper is to compare the performance of different cost-functions
in quantifying consistency with the SPECT projection data in a registration-based scheme for motion
estimation as the image quality of the 3D model degrades. Six intensity-based metrics were studied: mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC), and entropy of the difference image (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter, and collimator blurring. Further, the image quality of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in acquisitions of anthropomorphic phantoms and patient studies in a real clinical setting. The pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations and anthropomorphic phantom acquisitions. In patient studies, NMI-based data-driven estimates yielded comparable image quality to that obtained using external motion tracking.
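Two of the six consistency metrics compared above have compact definitions; NumPy sketches (the histogram bin count for NMI is an assumption):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.mean(a * b)

def nmi(a, b, bins=64):
    """Normalized mutual information, NMI = (H(A) + H(B)) / H(A, B)."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))   # Shannon entropy
    return (h(px) + h(py)) / h(p)
```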
KEYWORDS: Tumors, Lung, Single photon emission computed tomography, Visualization, Data modeling, Medical imaging, Monte Carlo methods, Tomography, Signal attenuation, Reliability
Reliable human-model observers for clinically realistic detection studies are of considerable interest in medical
imaging research, but current model observers require frequent revalidation with human data. A visual-search
(VS) observer framework may improve reliability by better simulating realistic detection-localization tasks. Under
this framework, model observers execute a holistic search to identify tumor-like candidates and then perform
careful analysis of these candidates. With emission tomography, anatomical noise in the form of elevated uptake
in neighboring tissue often complicates the task. Some scanning model observers simulate the human ability to
read around such noise by presubtracting the mean normal background from the test image, but this background-known-exactly (BKE) assumption has several drawbacks. The extent to which the VS observer can overcome
these drawbacks was investigated by comparing it against humans and a scanning observer for detection of
solitary pulmonary nodules in a simulated SPECT lung study. Our results indicate that the VS observer offers
a robust alternative to the scanning observer for modeling humans.
KEYWORDS: Magnetic resonance imaging, Monte Carlo methods, Single photon emission computed tomography, 3D modeling, Motion models, Signal attenuation, Electrocardiography, Chest, Heart, Motion estimation
Patient motion can cause artifacts, which can lead to difficulty in interpretation. The purpose of this study is to create 3D digital anthropomorphic phantoms which model the location of the structures of the chest and upper abdomen of human volunteers undergoing a series of clinically relevant motions. The 3D anatomy is modeled using the XCAT phantom and based on MRI studies. The NURBS surfaces of the XCAT are interactively adapted to fit the MRI studies. A detailed XCAT phantom is first developed from an EKG-triggered Navigator acquisition composed of sagittal slices with a 3 × 3 × 3 mm voxel dimension. Rigid-body motion states are then acquired at breath-hold as sagittal slices partially covering the thorax, centered on the heart, with 9 mm gaps between them. For non-rigid-body motion requiring greater sampling, modified Navigator sequences covering the entire thorax with 3 mm gaps between slices are obtained. The structures of the initial XCAT are then adapted to fit these different motion states. Simultaneously with MRI imaging, the positions of multiple reflective markers on stretchy bands about the volunteer's chest and abdomen are optically tracked in 3D via stereo imaging. These phantoms with combined position tracking will be used to investigate both imaging-data-driven and motion-tracking strategies to estimate and correct for patient motion. Our initial application will be to cardiac-perfusion SPECT imaging, where the XCAT phantoms will be used to create patient activity and attenuation distributions for each volunteer with corresponding motion-tracking data from the markers on the body surface. Monte Carlo methods will then be used to simulate SPECT acquisitions, which will be used to evaluate various motion estimation and correction strategies.
We use scanning model observers to predict human performance in a lesion search/detection study. The observer's task is to locate gallium-avid tumors in simulated SPECT images of a digital phantom. The goal of our model is to predict, for human observers, the optimal strength β of smoothing priors incorporated into the reconstruction algorithm. These priors use varying amounts of anatomical knowledge. We present results from a scanning channelized non-prewhitening matched filter, and compare them with results from a human-observer study. Including a step to mimic the greyscale perceptual linearization used during the human-observer study improves the accuracy of the model. However, we find that for lesions close to an organ boundary even the improved model does not accurately predict human performance.
We investigate the use of linear model observers to predict human performance in a localization ROC (LROC)
study. The task is to locate gallium-avid tumors in simulated SPECT images of a digital phantom. Our study is
intended to find the optimal strength of smoothing priors incorporating various degrees of anatomical knowledge.
Although humans reading the images must perform a search task, our models ignore search by assuming the lesion
location is known. We use area under the model ROC curve to predict human area under the LROC curve. We
used three models: the non-prewhitening matched filter (NPWMF), the channelized non-prewhitening (CNPW) observer, and the channelized Hotelling observer (CHO). All models have access to noise-free reconstructions, which are used to compute the signal template. The NPWMF model does a poor job of predicting human performance. The CNPW and CHO models do a somewhat better job, but still do not qualitatively capture the human results.
None of the models accurately predicts the smoothing strength which maximizes human performance.
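For reference, the channelized Hotelling observer reduces to a small linear computation once images are passed through channels; a NumPy sketch (channel-matrix construction is omitted, and a signal-known-exactly evaluation is assumed):

```python
import numpy as np

def cho_snr(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability (SNR).

    signal_imgs, noise_imgs: (n_images, n_pixels) lesion-present and
    lesion-absent image sets; channels: (n_pixels, n_channels)."""
    vs = signal_imgs @ channels                 # channel outputs
    vn = noise_imgs @ channels
    dv = vs.mean(axis=0) - vn.mean(axis=0)      # mean channel-output difference
    K = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
    w = np.linalg.solve(K, dv)                  # Hotelling template (channel space)
    return (dv @ w) / np.sqrt(w @ K @ w)        # observer SNR
```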
The purpose of this work was to test procedures for applying scanning model observers in order to predict human-observer
lesion-detection performance with hybrid images. Hybrid images consist of clinical backgrounds with
simulated abnormalities. The basis for this investigation was detection and localization of solitary pulmonary
nodules (SPN) in SPECT lung images, and our overall goal has been to determine the extent to which detection
of SPN could be improved by proper modeling of the acquisition physics during the iterative reconstruction
process. Towards this end, we conducted human-observer localization ROC (LROC) studies to optimize the
number of iterations and the postfiltering of four rescaled block-iterative (RBI) reconstruction strategies with
various combinations of attenuation correction (AC), scatter correction (SC), and system-resolution correction
(RC). This observer data was then used to evaluate a scanning channelized nonprewhitening model observer.
A standard "background-known-exactly" (BKE) task formulation overstated the prior knowledge and training
that human observers had about the hybrid images. Results from a quasi-BKE task that preserved some degree
of structural noise in the detection task demonstrated better agreement with the humans.
Patient motion during single photon emission computed tomographic (SPECT) acquisition causes inconsistent
projection data and reconstruction artifacts which can significantly affect diagnostic accuracy. We have investigated use
of the Polaris stereo infrared motion-tracking system to track 6-Degrees-of-Freedom (6-DOF) motion of spherical
reflectors (markers) on stretchy bands about the patient's chest and abdomen during cardiac SPECT imaging. The
marker position information, obtained by opposed stereo infrared-camera systems, requires processing to correctly
record tracked markers, and map Polaris co-ordinate data into the SPECT co-ordinate system. One stereo camera views
the markers from the patient's head direction, and the other from the patient's foot direction. The need for opposed
cameras is to overcome anatomical and geometrical limitations which sometimes prevent all markers from being seen
by a single stereo camera. Both sets of marker data are required to compute rotational and translational 6-DOF motion
of the patient, which ultimately will be used for SPECT patient-motion corrections. The processing utilizes an algorithm that least-squares fits two 3-D point sets to each other using singular value decomposition (SVD), yielding the rotation matrix and the translation of the rigid-body centroid. We have previously demonstrated the ability to monitor
multiple markers for twelve patients viewing from the foot end, and employed a neural network to separate the periodic
respiratory motion component of marker motion from aperiodic body motion. We plan to initiate routine 6-DOF
tracking of patient motion during SPECT imaging in the future, and are herein evaluating the feasibility of employing
opposed stereo cameras.
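The SVD-based least-squares fit of two 3-D point sets mentioned here is the standard rigid-body (Kabsch-style) solution; a NumPy sketch under the assumption of one-to-one matched markers:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid-body fit of 3-D point set P onto Q via SVD
    (marker positions as rows).  Returns rotation R and translation t
    such that Q ~ P @ R.T + t."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```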
Patient motion during SPECT acquisition causes inconsistent projection data and reconstruction artifacts which can significantly affect the diagnostic accuracy of SPECT. Tracking motion by infrared monitoring of spherical reflectors (markers) on the patient's surface can provide 6-Degrees-of-Freedom (6-DOF) motion information capable of supporting clinically robust correction. Object rigid-body motion can be described by 3 translational DOF and 3 rotational DOF. Polaris marker position information obtained by stereo infrared cameras requires algorithmic processing to correctly record the tracked markers, and to calibrate and map Polaris co-ordinate data into the SPECT co-ordinate system. The marker data then require processing to determine the rotational and translational 6-DOF motion, to ultimately be used for SPECT image corrections. This processing utilizes an algorithm that least-squares fits two 3-D point sets to each other using singular value decomposition (SVD), yielding the rotation matrix and the translation of the rigid-body centroid. We have demonstrated the ability to monitor 12 clinical patients as well as 7 markers on 2 elastic belts worn by a volunteer while intentionally moving, and determined the 3-axis Euclidean rotation angles and centroid translations. An anthropomorphic phantom with Tc-99m added to the heart, liver, and body was simultaneously SPECT imaged and motion tracked using 4 rigidly mounted markers. The determined rotation matrix and translation information was used to correct the image, resulting in virtually identical "no motion" and "corrected" images. We plan to initiate routine 6-DOF tracking of patient motion during SPECT imaging in the future.
Respiratory motion degrades image quality in PET and SPECT imaging. Patient-specific information on the motion of structures such as the heart, if obtained from CT slices from a dual-modality imaging system, can be employed to compensate for motion during emission reconstruction. The CT datasets may not be contrast enhanced. Since each patient may have 100-120 coronal slices covering the heart, an automated but accurate segmentation of the heart is important. We developed and implemented an algorithm to segment the heart in non-contrast CT datasets. The algorithm has two steps. In the first step we place a truncated-ellipse curve on a mid-slice of the heart, optimize its pose, and then track the contour through the other slices of the same dataset. During the second step the contour points are drawn to the local edge points by minimizing a distance measure. The segmentation algorithm was tested on 10 patients, and the boundaries were determined to be accurate to within 2 mm of the visually ascertained locations of the borders of the heart. The segmentation was automatic except for initial placement of the first truncated ellipse and for having to re-initialize the contour for 3 patients for less than 3% (1-3 slices) of the coronal slices of the heart. These end-slices constituted less than 0.3% of the heart volume.
KEYWORDS: Data modeling, Performance modeling, Signal detection, Tumors, Mathematical modeling, Single photon emission computed tomography, 3D modeling, Monte Carlo methods, Image resolution, Device simulation
We have investigated whether extensions of linear model observers
can predict human performance in a localization ROC (LROC) study.
The specific task was detection of gallium-avid tumors in SPECT
images of a mathematical phantom, and the study was intended to
quantify the effect of improved detector energy resolution on
scatter-corrected images. The basis for our model observers is the
latent perception measurement postulated for the LROC model. This
measurement is obtained by cross-correlating the image with a
kernel, and the LROC rating and localization data are the max and
argmax, respectively, of this measurement made at all relevant
search locations. The particular model observers tested were the
nonprewhitening (NPW), channelized NPW (CNPW), and channelized
Hotelling (CH) observers. Specification of the observer's search
region was also part of the task definition, and several variations
were considered that could approximate the training of human
observers. The best agreement with the human observers was found
with the CNPW observer, suggesting that the ability of human observers
to prewhiten images may be degraded when the detection task requires
signal localization.
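The latent perception measurement described above maps directly to code: cross-correlate the image with the observer kernel, then take the max and argmax over the search region. A NumPy/SciPy sketch (the kernel and search mask would come from the specific observer model and task definition):

```python
import numpy as np
from scipy.ndimage import correlate

def scan_observer(image, kernel, search_mask):
    """Scanning model observer: the LROC rating is the max of the
    perception measurement over the search region; the localization
    is its argmax."""
    z = correlate(image, kernel, mode="constant")   # perception measurement
    z = np.where(search_mask, z, -np.inf)           # restrict to search region
    loc = np.unravel_index(np.argmax(z), z.shape)
    return z[loc], loc                              # (rating, localization)
```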
We developed an iterative reconstruction method for SPECT which uses list-mode data instead of binned data. It uses a more accurate model of the collimator structure. The purpose of the study was to evaluate the resolution recovery and to compare its performance to other iterative resolution-recovery methods in the case of high noise levels. The source distribution is projected onto an intermediate layer. In doing so, we obtain the complete emission radiance distribution as an angular sinogram. This step is independent of the acquisition system. To incorporate the resolution of the system, we project the individual list-mode events over the collimator wells to the intermediate layer. This projection onto the angular sinogram defines the probability that a photon from the source distribution will reach this specific location on the surface of the crystal, and thus be accepted by the collimator hole. We compared the SPECT list-mode reconstruction to MLEM, OSEM, and RBI. We used Gaussian-shaped point sources with different FWHM at different noise levels. For these distributions we calculated the reconstructed images at different numbers of iterations. The modeling of the resolution in this algorithm leads to better resolution recovery compared to the other methods, which tend to overcorrect.
KEYWORDS: Monte Carlo methods, Data modeling, Statistical analysis, Lithium, Data acquisition, Diagnostics, Target detection, Medical imaging modalities, Receivers, Reconstruction algorithms
We conducted a series of Monte Carlo simulations to investigate how hypothesis testing for modality effects in multireader localization ROC (LROC) studies is influenced by case effects. One specific goal was to evaluate for LROC studies the Dorfman-Berbaum-Metz (DBM) method of analyzing case effects in reader data acquired from a single case-set. Previous evaluations with ROC study simulations found the DBM method to be moderately conservative. Our simulations, using procedures adapted from those earlier works, showed the DBM method to be a conservative test of modality effect in LROC studies as well. The degree of conservatism was greater for a critical value of α = 0.05 than for α = 0.01, and was not moderated by increased numbers of readers or cases. Other simulations investigated the tradeoff between power and empirical type-I error rate for the DBM method and two standard hypothesis tests. Besides the DBM method, a two-way analysis of variance (ANOVA) was applied to performance indices based on the LROC curve under an assumption of negligible case effects. The third test was a three-way ANOVA applied to performance indices, which required two sets of images per modality. With α = 0.01, the DBM method outperformed the other tests for studies with low numbers of readers and cases. In most other situations, its performance lagged behind that of the other tests.
Previous investigations into time-of-flight positron emission tomography (TOFPET) have shown that stochastic noise in images can be reduced when the reconstruction process accounts for the differences in detection times of coincidence photons. Among the factors that influence this reduction are the sensitivity and the spatial and temporal resolutions of the TOFPET detectors. Within the framework of a simplified time-of-flight imaging model, we have considered the effect of these factors on task performance for human observers. The task was detection of mediastinal 'hot' tumors in simulated images of the chest. There were 14 simulated TOFPET systems and 2 simulated PET systems considered. Image reconstruction was performed using filtered backprojection (FBP) for PET and a modified FBP for TOFPET. Localization receiver operating characteristic (LROC) methodology, in which the observers must detect and locate the tumors, was used. The LROC study gives insight into how TOFPET detector characteristics might improve in order to make possible observer task performance on a par with PET. A comparison of our results to a theoretical result from the literature was also conducted.
KEYWORDS: Heart, Signal attenuation, Motion models, Single photon emission computed tomography, Mathematical modeling, Blood, Imaging systems, 3D modeling, Monte Carlo methods, Sensors
This manuscript documents the alteration of the heart model of the MCAT phantom to better represent cardiac motion. The objective of the inclusion of motion was to develop a digital simulation of the heart such that the impact of cardiac motion on single photon emission computed tomography (SPECT) imaging could be assessed and methods of quantitating cardiac function could be investigated. The motion of the dynamic MCAT's heart is modeled by a 128-time-frame volume curve. Eight time frames are averaged together to obtain a gated perfusion acquisition of 16 time frames and ensure motion within every time frame. The position of the MCAT heart was changed during contraction to rotate back and forth around the long axis through the center of the left ventricle (LV), using the end-systolic time frame as the turning point. Simple respiratory motion was also introduced by changing the orientation of the heart model in a 2-dimensional (2D) plane with every time frame. The averaging effect of respiratory motion in a specific time frame was modeled by randomly selecting multiple heart locations between two extreme orientations. Non-gated perfusion phantoms were also generated by averaging over all time frames. Maximal chamber volumes were selected to fit the profile of a normal healthy person. These volumes were changed during contraction of the ventricles such that the increase in volume in the atria compensated for the decrease in volume in the ventricles. The myocardium was modeled to represent shortening of muscle fibers during contraction, with the base of the ventricles moving towards a static apex. The apical region was modeled with moderate wall thinning present while myocardial mass was conserved. To test the applicability of the dynamic heart model, myocardial wall thickening was measured using maximum counts and full-width-at-half-maximum (FWHM) measurements, and compared with published trends. An analytical 3D projector, with attenuation and detector response included, was used to generate radionuclide projection data sets. After reconstruction, a linear relationship was obtained between maximum myocardial counts and myocardium thickness, similar to published results. A numeric difference in values from different locations exists due to the different amounts of attenuation present. Similar results were obtained for the FWHM measurements. Also, a hot apical region on the polar maps without attenuation compensation turns into an apical defect with attenuation compensation. The apical decrease was more prominent at end-diastole (ED) than at end-systole (ES) due to the change in the partial volume effect. Both of these agree with clinical trends. It is concluded that the dynamic MCAT (dMCAT) phantom can be used to study the influence of various physical parameters on radionuclide perfusion imaging.
We have analytically derived expressions which, for high signal-to-noise ratio (SNR), approximate the population mean images and covariance matrices of both ordered-subset expectation-maximization (OS-EM) and rescaled block-iterative expectation-maximization (RBI-EM) reconstructed images, using a theoretical-formulation strategy similar to that previously outlined for maximum-likelihood expectation-maximization (ML-EM). The approximate population mean images and approximate population covariance matrices were calculated at various iteration numbers for the two reconstruction methods. The theoretical formulations were verified by calculating the sample mean images and sample covariance matrices for the two reconstruction methods, at the same iteration numbers, using over 8000 noisy images per method. Subsequently, we compared the approximate population and sample mean images, the approximate population and sample variance images, as well as the approximate population and sample local covariance images for a pixel near the center of a uniformly emitting disk object, for each iteration number and reconstruction method, respectively. The results demonstrated that for each method and iteration number, the image produced by reconstructing from noise-free data would be equal to the population mean image to a very close approximation. In addition, the theoretically calculated variance and local covariance images closely matched their respective sample counterparts. Thus the theoretical formulation is an accurate way to predict the population first- and second-order statistics of both OS-EM and RBI-EM reconstructed images, for high SNR.
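For orientation, the OS-EM iteration whose statistics are analyzed above has this basic form (a dense-matrix sketch; subset choice and system modeling are application-specific):

```python
import numpy as np

def osem(y, A, subsets, n_iters=5, eps=1e-10):
    """Basic OS-EM: cycle the EM update over subsets of projection bins.

    y: measured counts; A: system matrix (bins x pixels);
    subsets: list of index arrays partitioning the bins."""
    x = np.ones(A.shape[1])
    for _ in range(n_iters):
        for s in subsets:
            As = A[s]                       # rows for this subset
            proj = As @ x + eps             # forward projection
            x *= (As.T @ (y[s] / proj)) / (As.T @ np.ones(len(s)) + eps)
    return x
```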
KEYWORDS: Tumors, Signal to noise ratio, Single photon emission computed tomography, Photons, Monte Carlo methods, Liver, Statistical modeling, Medicine, Imaging systems, Image resolution
The relative rankings of the channelized Hotelling model observer were compared to those of the human observers for the task of detecting 'hot' tumors in simulated hepatic SPECT slices. The signal-to-noise ratios (SNRs) were determined using eighty images for each of three slice locations. The acquisition and processing strategies investigated were: (1) imaging solely primary photons, (2) imaging primary plus scatter within a 20% symmetric energy window for Tc-99m, (3) imaging with primary plus an elevated amount of scatter, (4) energy-spectrum-based scatter compensation of the primary plus scatter acquisitions, and (5) energy-spectrum-based scatter compensation of the acquisitions with an elevated amount of scatter. Both square non-overlapping channels (SQR), and overlapping difference-of-Gaussian channels (DOG) were incorporated into the Hotelling model observer. When the scatter compensation results were excluded, both channelized Hotelling model observers exhibited a strong correlation with the rankings of the human observers. With the inclusion of the scatter compensation results, only with the DOG model observer was the null hypothesis of no correlation rejected at the p = 0.05 level. It is concluded that further investigation of the channel model used with the Hotelling observer is indicated to determine if better correlation can be obtained.
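One common construction of overlapping difference-of-Gaussian channels, here sampled in the spatial domain (the octave spacing `alpha` and base width `sigma0` are illustrative; published DOG channel sets are usually specified in the frequency domain):

```python
import numpy as np

def dog_channels(n_pixels, n_channels=3, sigma0=2.0, alpha=2.0):
    """Radially symmetric difference-of-Gaussian channel profiles,
    sampled on an n_pixels x n_pixels grid and returned as columns."""
    yy, xx = np.indices((n_pixels, n_pixels)) - (n_pixels - 1) / 2.0
    r2 = xx**2 + yy**2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * alpha**j, sigma0 * alpha**(j + 1)
        c = np.exp(-r2 / (2 * s1**2)) - np.exp(-r2 / (2 * s2**2))
        chans.append(c.ravel() / np.linalg.norm(c))   # unit-norm channel
    return np.stack(chans, axis=1)                    # (n_pixels^2, n_channels)
```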
This study investigates the differences in both image texture and numerical performance among various filtering methods used in filtered back-projection reconstructions, including Butterworth, variable conductance diffusion (VCD), and a combination approach of the two. A χ² Butterworth method is proposed to be used for 2-D prefiltering on the projection data, and a 3-D VCD method is then applied post-reconstruction. The combination approach has the smooth boundaries a VCD method cannot normally obtain, while minimizing the severe ringing normally incurred by a Butterworth with a low cut-off. Use of the χ² criterion in the Butterworth provides a reasonably good starting point for VCD, which then only needs a few iterations to reach a desirable degree of smoothness. The use of Butterworth filtering before VCD also results in less noisy estimates of the parameters which control VCD. Thus, the combination of global smoothing with the Butterworth and local smoothing with VCD is much faster than VCD alone and preserves the desirable properties of each.
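The Butterworth prefiltering stage can be sketched as a radially symmetric frequency-domain filter (the χ²-based choice of cut-off is not reproduced here; `cutoff` and `order` are free parameters):

```python
import numpy as np

def butterworth_lowpass_2d(img, cutoff=0.2, order=4):
    """Frequency-domain Butterworth low-pass filter for a 2-D projection.
    cutoff is in cycles/pixel (Nyquist = 0.5)."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```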
By accurately modeling the physics of photon transport in the projection and backprojection operations, iterative SPECT reconstruction methods can reduce the degrading effects of scatter, attenuation, and the non-stationary spatial resolution of the camera. Unfortunately, iterative reconstruction methods have required very long computation times, predominantly due to the complexity involved in modeling these degrading effects in the projection and backprojection operations. In this study, we describe an approach which allows SPECT iterative reconstruction algorithms to be implemented with a reduction in the number of computations needed. The idea is to pre-process the measured projection data to compensate for scatter and attenuation, as well as to transform the projection data to those which would have been obtained with a stationary system resolution. Results of simulation studies indicate that preprocessing the measured projection data reduces the number of computations needed to perform the projection and backprojection operations, and yields reconstructions which differ minimally from those obtained using the slower standard iterative approach of modeling both photon attenuation and nonstationary blurring in the projection and backprojection steps.