In regularized PET image reconstruction, the performance of a reconstruction algorithm depends crucially on the smoothing parameter that controls the balance between the likelihood term and the regularization term. In this work, we propose a new method of tuning the smoothing parameter using deep learning. Unlike the traditional estimation-theoretic approach, in which the smoothing parameter is estimated directly from the observed noisy data, the deep learning-based method requires large amounts of prior training pairs, which are usually unavailable in routine clinical practice. To overcome this problem, we extend our previous work on the training-set approach, which provides a mathematical formula to calculate the smoothing parameter for the simple quadratic spline regularizer. We note that this training-set approach is effective only when noiseless representative exemplars are available. For deep learning, however, collecting large amounts of such noiseless exemplars for a training dataset is impractical. Therefore, instead of collecting them, we generate diverse images similar to the exemplars of the ground-truth radiotracer distribution using Gibbs sampling followed by post-processing, and calculate the smoothing-parameter value for each image. For our deep learning model, we use a residual network architecture, which accommodates deeper layers more efficiently than a typical convolutional neural network. The experimental results show that our deep learning method provides optimal values of the smoothing parameter, comparable to the accurate values calculated from noiseless exemplars, for a wide range of underlying images.
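As an illustration of the learning setup described above, the following is a minimal PyTorch sketch, not the implementation used in this work, of a small residual network that regresses a smoothing-parameter value from a synthesized image. The target values are assumed to come from the closed-form training-set formula mentioned above (not reproduced here), and the layer sizes, the log-domain target, and the class names are illustrative choices.

```python
# Minimal sketch (not the authors' implementation): a small residual CNN that
# regresses the smoothing parameter from an image.  The target beta values are
# assumed to come from the closed-form training-set formula, not shown here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1, self.bn2 = nn.BatchNorm2d(ch), nn.BatchNorm2d(ch)

    def forward(self, x):
        y = F.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return F.relu(x + y)                    # identity shortcut

class BetaRegressor(nn.Module):
    def __init__(self, ch=32, depth=4):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(depth)])
        self.head = nn.Linear(ch, 1)

    def forward(self, x):
        y = self.blocks(F.relu(self.stem(x)))
        y = F.adaptive_avg_pool2d(y, 1).flatten(1)
        return self.head(y).squeeze(1)          # predicted log-smoothing parameter

def train_step(model, opt, images, beta_targets):
    """One supervised step on (synthesized image, beta) pairs."""
    opt.zero_grad()
    loss = F.mse_loss(model(images), torch.log(beta_targets))
    loss.backward()
    opt.step()
    return loss.item()
```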
Patch-based regularization methods, which have proven useful not only for image denoising but also for tomographic reconstruction, penalize image roughness based on the intensity differences between two nearby patches. However, two patches that are not similar in the usual intensity sense may still share similar features in a scaled domain obtained by normalizing the two patches; in that domain their difference is smaller than the intensity difference measured in the standard way. Standard patch-based methods tend to ignore such similarities because of the large intensity differences between the two patches. In this work, for patch-based penalized-likelihood tomographic reconstruction, we propose a new approach to the similarity measure that uses normalized patch differences as well as intensity-based patch differences. A normalized patch difference is obtained by normalizing and scaling the intensity-based patch difference. To selectively take advantage of the standard patch (SP) and normalized patch (NP) differences, we use switching schemes that select either SP or NP based on the gradient of the reconstructed image: SP is selected for restoring large-scale piecewise-smooth regions, while NP is selected for preserving the contrast of fine details. Numerical experiments using a software phantom demonstrate that our proposed methods not only improve overall reconstruction accuracy in terms of the percentage error, but also reveal better recovery of fine details in terms of the contrast recovery coefficient.
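As a rough illustration of the two similarity measures and the switching rule, the NumPy sketch below normalizes the patches themselves before differencing, which is one plausible reading of the normalized patch difference; the patch radius, the mean/standard-deviation normalization, and the gradient threshold tau are illustrative assumptions rather than the settings used in the paper.

```python
# Illustrative sketch only (NumPy); patch size, normalization, and the gradient
# threshold used for switching are assumptions, not the paper's exact choices.
import numpy as np

def extract_patch(img, i, j, r=1):
    return img[i - r:i + r + 1, j - r:j + r + 1]

def sp_difference(p, q):
    """Standard patch (SP) difference: plain intensity-based distance."""
    return np.linalg.norm(p - q)

def np_difference(p, q, eps=1e-8):
    """Normalized patch (NP) difference: compare patches after removing their
    mean and scale, so similarly shaped patches with different intensity
    levels still count as similar."""
    pn = (p - p.mean()) / (p.std() + eps)
    qn = (q - q.mean()) / (q.std() + eps)
    return np.linalg.norm(pn - qn)

def patch_difference(img, i, j, k, l, grad_mag, tau=0.1, r=1):
    """Switching scheme: SP in smooth regions (small local gradient),
    NP near fine details (large local gradient)."""
    p, q = extract_patch(img, i, j, r), extract_patch(img, k, l, r)
    if grad_mag[i, j] < tau:
        return sp_difference(p, q)
    return np_difference(p, q)
```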
We present an adaptive method of selecting the center weight in the weighted-median prior for penalized-likelihood (PL)
transmission tomography reconstruction. While the well-known median filter, which is closely related to the median
prior, preserves edges, it is known to have an unfortunate effect of removing fine details because it tends to eliminate
any structure that occupies less than half of the window elements. On the other hand, center-weighted median filters can
preserve fine details by using relatively large center weights. But the large center weights can degrade monotonic
regions due to insufficient noise suppression. In this work, to adaptively select the center weight, we first calculate pixelwise
standard deviation over 3×3 neighbors of each pixel at every PL iteration and measure its cumulative histogram,
which is a monotonically non-decreasing 1-D function. We then rescale the resulting function so that its range
spans [1, 9]. The domain of the normalized function represents the standard deviation at each pixel, and the
range can be used for the center weight of a 3×3 median window. We implemented the median prior within the PL
framework and used an alternating joint minimization algorithm based on a separable paraboloidal surrogates algorithm.
The experimental results demonstrate that our proposed method not only provides a good compromise between the two
extreme cases (the largest and smallest center weights), yielding a good reconstruction over the entire image in terms of the percentage error, but
also outperforms the standard method in terms of the contrast recovery coefficient measured in several regions of interest.
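The following NumPy/SciPy sketch illustrates the adaptive selection rule described above: compute the 3×3 local standard deviation at every pixel, form its cumulative histogram (empirical CDF), and rescale the result to [1, 9] to obtain per-pixel center weights. The rounding of the weights and the exact normalization details are illustrative assumptions.

```python
# Rough sketch of the adaptive center-weight selection described above,
# using NumPy/SciPy; the exact normalization details are assumptions.
import numpy as np
from scipy.ndimage import generic_filter

def adaptive_center_weights(image):
    """Map each pixel's local 3x3 standard deviation to a center weight in
    [1, 9] through the cumulative histogram of those values."""
    local_std = generic_filter(image, np.std, size=3)
    # Empirical CDF of the local standard deviations (monotone non-decreasing).
    order = np.argsort(local_std, axis=None)
    cdf = np.empty(local_std.size)
    cdf[order] = np.arange(1, local_std.size + 1) / local_std.size
    # Rescale the CDF values so the range spans [1, 9]; use them as center
    # weights for the 3x3 weighted-median window.
    weights = 1.0 + 8.0 * cdf.reshape(local_std.shape)
    return np.rint(weights).astype(int)
```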
We propose a new method to acquire three-dimensional tomographic images of a large object from a dental panoramic
X-ray scanner, which was originally designed to produce a panoramic image of the teeth and jaws on a single frame. The
method consists of two processes: (i) a new acquisition scheme to acquire the tomographic projection data using a
narrow detector, and (ii) a dedicated model-based iterative technique to reconstruct images from the acquired projection
data. In conventional panoramic X-ray scanners, the suspension arm that holds the X-ray source and the narrow detector
has two axes of motion, angular and linear. To acquire the projection data of a large object,
we develop a new data acquisition scheme that can emulate an acquisition of the projectional view in a large detector by
stitching narrow projection images, each of which is formed by a narrow detector, and design a trajectory to move the
suspension arm accordingly. To reconstruct images from the acquired projection data, an accelerated model-based
iterative reconstruction method derived from the ordered subset convex maximum-likelihood expectation-maximization
algorithm is used. In this method each subset of the projection data is constructed by collecting narrow projection images
to form emulated tomographic projectional views in a large detector. To validate the performance of the proposed
method, we tested it with a real dental panoramic X-ray system. The experimental results demonstrate that the new method
has great potential to enable existing panoramic X-ray scanners to additionally serve as a CT scanner, providing useful
tomographic images.
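A conceptual sketch of the stitching idea follows; the grouping of narrow frames by nominal view angle and the simple column-wise concatenation (with no overlap blending) are simplifying assumptions, not the scanner's actual acquisition geometry.

```python
# Conceptual sketch (NumPy) of forming an emulated wide-detector view by
# stitching narrow projection frames, and of grouping such views into
# ordered subsets; overlap handling is omitted for simplicity.
import numpy as np

def emulate_wide_view(narrow_frames):
    """Concatenate narrow detector frames (rows x cols each) acquired at
    laterally shifted arm positions for one nominal view angle."""
    return np.concatenate(narrow_frames, axis=1)

def build_subsets(frames_by_angle, num_subsets):
    """Group the emulated wide views into ordered subsets for an
    OS-type iterative reconstruction loop."""
    views = [emulate_wide_view(f) for _, f in sorted(frames_by_angle.items())]
    return [views[s::num_subsets] for s in range(num_subsets)]
```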
We propose a new nonlocal regularization method for PET image reconstruction with the aid of high-resolution
anatomical images. Unlike conventional reconstruction methods using prior anatomical information, our method using
nonlocal regularization does not require additional processes to extract anatomical boundaries or segmented regions. The
nonlocal regularization method applied to anatomy-based PET image reconstruction is expected to effectively reduce the
error that often occurs due to signal mismatch between the PET image and the anatomical image. We also show that our
method can be useful for enhancing the image resolution. To reconstruct the high-resolution image that represents the
original underlying source distribution effectively sampled at a higher spatial sampling rate, we model the underlying
PET image on a higher-resolution grid and perform our nonlocal regularization method with the aid of the side
information obtained from high-resolution anatomical images. Our experimental results demonstrate that, compared to
the conventional method based on local smoothing, our nonlocal regularization method enhances the resolution as well
as the reconstruction accuracy, even with imperfect prior anatomical information or in the presence of signal
mismatch between the PET image and the anatomical image.
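The sketch below (NumPy) illustrates one common way to build anatomy-driven nonlocal weights and the corresponding nonlocal quadratic penalty on the high-resolution grid; the search-window size, patch size, and bandwidth h are illustrative choices, not the settings used in this work.

```python
# Simplified sketch (NumPy) of anatomy-guided nonlocal regularization on the
# high-resolution grid; window sizes and the bandwidth h are illustrative.
import numpy as np

def nonlocal_weights(anat, i, j, search=5, patch=1, h=0.1):
    """Weights between voxel (i, j) and its neighbours in a search window,
    driven by patch similarity in the high-resolution anatomical image."""
    pi = anat[i - patch:i + patch + 1, j - patch:j + patch + 1]
    weights = {}
    for k in range(i - search, i + search + 1):
        for l in range(j - search, j + search + 1):
            if (k, l) == (i, j):
                continue
            pk = anat[k - patch:k + patch + 1, l - patch:l + patch + 1]
            d2 = np.mean((pi - pk) ** 2)
            weights[(k, l)] = np.exp(-d2 / (h * h))
    total = sum(weights.values())
    return {kl: w / total for kl, w in weights.items()}

def nonlocal_penalty(x, anat, search=5, patch=1, h=0.1):
    """Nonlocal quadratic roughness sum_j sum_k w_jk (x_j - x_k)^2 on the
    interior of the PET image x, with weights taken from the anatomy."""
    m = search + patch
    val = 0.0
    for i in range(m, x.shape[0] - m):
        for j in range(m, x.shape[1] - m):
            for (k, l), w in nonlocal_weights(anat, i, j, search, patch, h).items():
                val += w * (x[i, j] - x[k, l]) ** 2
    return val
```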
This paper describes the development of rapid 3-D regularized EM (expectation maximization) reconstruction methods
for Compton cameras using commodity graphics hardware. Since the size of the system matrix for a typical Compton
camera is extremely large, it is impractical to use a caching scheme that reads pre-stored values of the elements of the
system matrix instead of repeatedly calculating conical projection and backprojection, which are the most
time-consuming operations. In this paper we propose GPU (graphics processing unit) accelerated methods that can rapidly
perform conical projection and backprojection on the fly. Since the conventional ray-based backprojection method is
inefficient for GPU, we develop fully voxel-based conical backprojection methods using two different approaches. In the
first approach, we approximate the intersecting chord length of the ray passing through a voxel with the normal distance
from the center of the voxel to the ray. In the second approach, each voxel is regarded as a dimensionless point, and the
backprojection is performed without the need for calculating intersecting chord lengths. Our experimental studies with
the M-BSREM (modified block sequential regularized EM) algorithm show that GPU-based methods significantly
outperform the conventional CPU-based method in computation time without a considerable loss of reconstruction
accuracy.
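The following NumPy sketch conveys the voxel-driven idea behind the second approach: each voxel is treated as a point and weighted by how close it lies, in angle, to the event's Compton cone, so no chord lengths are computed. The Gaussian angular window sigma and the assumption that voxel_centers is ordered to match volume.ravel() are illustrative choices; the actual implementation runs on the GPU.

```python
# Simplified CPU sketch (NumPy) of voxel-driven conical backprojection where
# each voxel is a dimensionless point; the angular window sigma is illustrative.
import numpy as np

def backproject_event(volume, voxel_centers, apex, axis, scatter_angle, sigma=0.02):
    """Accumulate one Compton event into `volume`.

    voxel_centers : (N, 3) voxel-center coordinates, ordered as volume.ravel()
    apex          : 3-vector, scatter position (cone apex)
    axis          : 3-vector, unit vector along the cone axis
    scatter_angle : Compton scattering angle (cone half-angle, radians)
    """
    v = voxel_centers - apex                        # apex-to-voxel vectors
    r = np.linalg.norm(v, axis=1) + 1e-12
    cos_theta = (v @ axis) / r                      # angle to the cone axis
    delta = np.arccos(np.clip(cos_theta, -1.0, 1.0)) - scatter_angle
    # Voxels whose angular offset from the cone surface is small receive the
    # largest contribution; this replaces explicit chord-length computation.
    volume += np.exp(-0.5 * (delta / sigma) ** 2).reshape(volume.shape)
    return volume
```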
We investigate performance of a convex nonquadratic (CNQ) spline regularization method applied to limited-angle tomography reconstruction. Since limited-angle data lack projections over a certain range of view angles, they produce poor reconstructions with streak artifacts and geometric distortions. To obtain a good solution, a feasible prior that can eliminate or reduce artifacts and distortions is necessary. The CNQ prior used in this paper is expressed as a linear combination of the first- and the second-order spatial derivatives and applied to a CNQ penalty function. To determine a solution efficiently, we use the fast globally convergent block sequential regularized expectation maximization algorithm. Our experimental results demonstrate that the hybrid CNQ spline prior outperforms conventional nonquadratic priors in eliminating limited-angle artifacts.
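One plausible concrete form of the hybrid prior, sketched in NumPy below, applies a convex nonquadratic potential (a Huber function here) separately to first- and second-order finite differences and combines the two terms with weights w1 and w2; the particular potential, the weights, and this additive combination are assumptions for illustration, not necessarily the exact penalty used in the paper.

```python
# Sketch (NumPy) of a hybrid convex-nonquadratic (CNQ) spline penalty built
# from first- and second-order differences; potential and weights are assumed.
import numpy as np

def huber(t, delta=1.0):
    """Convex nonquadratic potential: quadratic near 0, linear for large |t|."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * (a - 0.5 * delta))

def hybrid_spline_penalty(x, w1=1.0, w2=0.5, delta=1.0):
    d1h = np.diff(x, n=1, axis=1)   # first-order differences (horizontal)
    d1v = np.diff(x, n=1, axis=0)   # first-order differences (vertical)
    d2h = np.diff(x, n=2, axis=1)   # second-order differences (horizontal)
    d2v = np.diff(x, n=2, axis=0)   # second-order differences (vertical)
    first = huber(d1h, delta).sum() + huber(d1v, delta).sum()
    second = huber(d2h, delta).sum() + huber(d2v, delta).sum()
    return w1 * first + w2 * second
```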
Since algorithms based on Bayesian approaches contain smoothing parameters associated with the mathematical model for the prior probability, the performance of algorithms usually depends crucially on the values of these parameters. We consider an approach to smoothing parameter estimation for Bayesian methods used in the medical imaging application of emission computed tomography (ECT). We address spline models as Gibbs smoothing priors for our own application to ECT reconstruction. The problem of smoothing parameter estimation can be stated as follows. Given a likelihood and prior model, and given a realization of noisy projection data, compute some optimal estimate of the smoothing parameter. We focus on the estimation of the smoothing parameter for mathematical phantom studies. Among the variety of approaches used to attack this problem, we base our maximum-likelihood (ML) estimates of smoothing parameters on observed training data. To validate our ML approach, we first perform closed-loop numerical experiments using the images created by Gibbs sampling from the given prior probability with the smoothing parameter known. We then evaluate the performance of our method using mathematical phantoms and show that the optimal estimates obtained from training data yield good reconstructions in terms of a percentage error metric.
Since algorithms based on Bayesian approaches contain hyperparameters associated with the mathematical
model for the prior probability, the performance of algorithms usually depends crucially on the values of these
parameters. In this work we consider an approach to hyperparameter estimation for Bayesian methods used in
the medical imaging application of emission computed tomography (ECT). We address spline models as Gibbs
smoothing priors for our own application to ECT reconstruction. The problem of hyperparameter (or smoothing
parameter in our case) estimation can be stated as follows: Given a likelihood and prior model, and given a
realization of noisy projection data from a patient, compute some optimal estimate of the smoothing parameter.
Among the variety of approaches used to attack this problem in ECT, we base our maximum-likelihood (ML)
estimates of smoothing parameters on observed training data, and argue the motivation for this approach. To
validate our ML approach, we first perform closed-loop numerical experiments using the images created by Gibbs
sampling from the given prior probability with the smoothing parameter known. We then evaluate performance
of our method using mathematical phantoms and show that the optimal estimates yield good reconstructions.
Our initial results indicate that the hyperparameters obtained from training data perform well with regard to
the percentage error metric.
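For the special case of a purely quadratic (Gaussian) Gibbs prior, the training-data ML estimate has a simple closed form that illustrates the idea; the derivation below assumes a roughness energy U(x) = (1/2) x^T R x with R of rank r, which is a simplification of the spline priors considered here.

```latex
\[
p(\mathbf{x}\mid\beta) \;\propto\; \beta^{r/2}\exp\{-\beta\,U(\mathbf{x})\},
\qquad U(\mathbf{x}) = \tfrac{1}{2}\,\mathbf{x}^{\mathsf{T}}\mathbf{R}\,\mathbf{x},
\]
\[
\frac{\partial}{\partial\beta}\log p(\mathbf{x}^{*}\mid\beta)
= \frac{r}{2\beta} - U(\mathbf{x}^{*}) = 0
\quad\Longrightarrow\quad
\hat{\beta}_{\mathrm{ML}} = \frac{r}{2\,U(\mathbf{x}^{*})}.
\]
```

Here x* denotes a training (exemplar) image; for nonquadratic or more elaborate spline priors the partition function is no longer available in closed form, and the ML estimate must be computed numerically.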
We introduce fast image reconstruction algorithms for emission tomography, which provide not only edge-preserved reconstructions, but also their edge maps simultaneously. To explicitly model the existence of edges, we use the binary line-process model, which is incorporated as a Gibbs prior in the context of a Bayesian maximum a posteriori framework. To efficiently handle the problem of mixed continuous and binary variable objectives, we use a deterministic annealing (DA) algorithm. Since the DA algorithm is computer-intensive and requires many iterations to converge, we apply a block-iterative method derived from the well-known ordered-subset principle. The block-iterative DA algorithm processes the data in blocks within each iteration, thereby accelerating the convergence speed of the standard DA algorithm by a factor proportional to the number of blocks. Our experimental results indicate that, with moderate numbers of blocks and properly chosen hyperparameters, the accelerated DA algorithm provides good reconstructions as well as edge maps with only a few iterations.
We introduce a block-iterative method to accelerate edge-preserving Bayesian reconstruction algorithms for emission tomography. Most common Bayesian approaches to tomographic reconstruction involve assumptions on the local spatial characteristics of the underlying source. To explicitly model the existence of anatomical boundaries, the line-process model has often been used. The unobservable binary line processes in this case act to suspend smoothness constraints at sites where they are turned on. Deterministic annealing (DA) algorithms are known to provide an efficient means of handling the problems associated with mixed continuous and binary variable objectives. However, they are still computer-intensive and require many iterations to converge. In this work, to further improve the DA algorithm by accelerating its convergence speed, we use a block-iterative (BI) method, which is derived from the ordered subset algorithm. The BI-DA algorithm processes the data in blocks within each iteration, thereby accelerating the convergence speed of a standard DA algorithm by a factor proportional to the number of blocks. The net conclusion is that, with moderate numbers of blocks and properly chosen hyperparameters, the BI-DA algorithm provides good reconstructions as well as a significant acceleration.
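A structural sketch of such a block-iterative deterministic annealing loop is given below (NumPy). The mean-field expression for the binary line variable corresponds to a weak-membrane style energy and is one standard choice; update_block is a user-supplied placeholder for the EM-type image update on one subset, and the hyperparameter names alpha and lam, the cooling factor, and the single-direction difference used here are illustrative assumptions.

```python
# Structural sketch of a block-iterative deterministic annealing (BI-DA) loop;
# update_block() is a user-supplied placeholder for the per-subset EM update.
import numpy as np

def mean_field_line(d, alpha, lam, T):
    """Expected value of the binary line variable at temperature T for a
    weak-membrane style energy: lam * d^2 * (1 - l) + alpha * l."""
    return 1.0 / (1.0 + np.exp((alpha - lam * d ** 2) / T))

def bi_da_reconstruct(x0, subsets, update_block, alpha, lam,
                      T0=1.0, cooling=0.9, n_iter=20):
    """update_block(x, subset, w) -> x performs one EM-style image update on
    one subset of projections, given pixelwise smoothing weights w."""
    x, T = x0.copy(), T0
    for _ in range(n_iter):
        for subset in subsets:                 # block-iterative pass
            d = np.diff(x, axis=1)             # nearest-neighbour differences
            l_bar = mean_field_line(d, alpha, lam, T)
            # Effective smoothing weights: edges (l_bar ~ 1) suspend smoothing.
            w = lam * (1.0 - l_bar)
            x = update_block(x, subset, w)
        T *= cooling                           # anneal toward T -> 0
    return x
```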
Iterative reconstruction methods, such as the expectation maximization (EM) algorithm and its extended approaches, have played a prominent role in emission computed tomography due to their remarkable advantages over the conventional filtered backprojection method. However, since iterative reconstructions typically consist of repeatedly projecting and backprojecting the data, the computational load required for reconstructing an image depends highly
on the performance of the projector-backprojector pair used in the algorithm. In this work we compare the quantitative performance of representative methods for implementing projector-backprojector pairs: ray-tracing methods, rotation-based methods, and pixel-driven methods. To reduce the overall cost of the projection-backprojection
operations for each method, we investigate how previously computed results can be reused so that the number of redundant calculations can be minimized. Our experimental results demonstrate that, while the rotation-based methods can be useful for simplifying the correction of important physical factors, the computational cost to achieve good accuracy is considerably higher than that of the ray-tracing methods.
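As a point of reference for the rotation-based family, the sketch below implements a simple rotate-and-sum projector and its matched backprojector with scipy.ndimage.rotate; the interpolation order and angle sign convention are illustrative and not tuned for the accuracy comparison reported above.

```python
# Minimal sketch of a rotation-based projector/backprojector pair using
# scipy.ndimage.rotate; interpolation order and angle conventions are assumed.
import numpy as np
from scipy.ndimage import rotate

def project(image, angles_deg):
    """Rotation-based forward projection: rotate the image so each view's
    rays become vertical, then sum along columns."""
    return np.stack([rotate(image, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sinogram, angles_deg, shape):
    """Matched backprojection: smear each view across the grid and rotate
    the smeared image back to the view's orientation."""
    out = np.zeros(shape)
    for view, a in zip(sinogram, angles_deg):
        smeared = np.tile(view, (shape[0], 1))
        out += rotate(smeared, a, reshape=False, order=1)
    return out
```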
This paper presents a Bayesian method for reconstructing transmission images, which provide attenuation correction factors for emission scans. In order to preserve the edges that bound anatomical regions, which are important especially for areas of non-uniform attenuation, we use the line-process model as a prior. Our prior model provides edge maps containing the anatomical boundary information as well as edge preserved reconstructions. To optimize our nonconvex objective function, we use our previously developed deterministic annealing algorithm, in which the energy function is approximated by a sequence of smooth functions that converges uniformly to the original energy function. To accelerate the convergence speed of our algorithm, we apply the ordered subsets principle to the deterministic annealing algorithm. We also show how the smoothing parameter can be adjusted to account for the effects of using ordered subsets so that the degree of smoothness can be retained for variations of the number of subsets. To validate the quantitative performance of our algorithm, we use the quantitation of bias/variance over noise trials. Our preliminary results indicate that, in some circumstances, our methods have advantages over conventional methods.
The ordered subsets (OS) algorithm [1] has enjoyed considerable interest for accelerating the well-known EM reconstruction algorithm for emission tomography and has recently found widespread use in clinical practice. This is primarily due to the fact that, while retaining the advantages of EM, the OS-EM algorithm can be easily implemented by slightly modifying the existing EM algorithm. The OS algorithm has also been applied [1] with the one-step-late (OSL) algorithm [2], which provides maximum a posteriori estimation based on Gibbs priors. Unfortunately, however, the OSL approach is known to be unstable when the smoothing parameter that weights the prior relative to the likelihood is relatively large. In this work, we note that the OS principle can be applied to any algorithm that involves calculation of a sum over projection indices, and show that it can also be applied to a generalized EM algorithm with useful quadratic priors. In this case, the algorithm is given in the form of iterated conditional modes (ICM), which is essentially a coordinate-wise descent method, and provides a number of important advantages. We also show that, by scaling the smoothing parameter in a principled way, the degree of smoothness in reconstructed images, which appears to vary depending on the number of subsets, can be efficiently matched for different numbers of subsets. Our experimental results indicate that the OS-ICM algorithm, along with the method of scaling the smoothing parameter, provides robust results as well as a substantial acceleration.
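One common way to keep the per-subset objective consistent with the full objective, shown below as an illustrative rule rather than necessarily the scaling derived in this work, is to weight the prior by beta/M within each subset update, since each subset sub-objective contains only about 1/M of the likelihood:

```latex
\[
\Phi_m(\mathbf{x}) \;=\; L_m(\mathbf{x}) \;-\; \frac{\beta}{M}\,R(\mathbf{x}),
\qquad m = 1,\dots,M,
\]
```

where L_m is the log-likelihood restricted to subset m, R is the quadratic penalty, and M is the number of subsets, so the likelihood-to-prior balance stays roughly independent of M.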
Penalized-likelihood methods using Bayesian smoothing priors have formed the core of the development of reconstruction algorithms for emission tomography. In particular, there has been considerable interest in edge-preserving prior models, which are associated with smoothing penalty functions that are nonquadratic functions of nearby pixel differences. Our early work used a higher-order nonconvex prior that imposed piecewise smoothness on the first derivative of the solution to achieve results superior to those obtained using a conventional nonconvex prior that imposed piecewise smoothness on the zeroth derivative. In spite of several advantages of the higher-order model (the weak plate), its use in routine applications has been hindered by several factors, such as the computational expense due to the nonconvexity of its penalty function and the difficulty in the selection of the hyperparameters involved in the model. We note that, by choosing a penalty function which is nonquadratic but still convex, both the problem of nonconvexity involved in some nonquadratic priors and the oversmoothing of edge regions in quadratic priors may be avoided. In this paper, we use a class of 2D smoothing splines with first and second spatial derivatives applied to convex-nonquadratic penalty functions. To evaluate edge-preserving ability, we use the quantitation of bias/variance and total squared error over noise trials using the Monte Carlo method. Our experimental results show that a linear combination of low and high orders of spatial derivatives applied to convex-nonquadratic penalty functions improves the reconstruction in terms of total squared error.
In Fourier magnetic resonance imaging (MRI), signals from different positions in space are phase-encoded by the application of a gradient before the total signal from the imaged subject is acquired. In practice, a limited number of the phase-encoded signals are often acquired in order to minimize the duration of the studies and maintain an adequate signal-to-noise ratio. However, this results in incomplete sampling in spatial frequency, or truncation of the k-space data. The truncated data, when Fourier transformed for reconstruction, give rise to images degraded by limited resolution and ringing near sharp edges, which is known as the truncation artifact. A variety of methods have been proposed to reconstruct images with reduced truncation artifact. In this work, we use a regularization method in the context of a Bayesian framework. Unlike approaches that operate on the raw data, the regularization approach is applied directly to the reconstructed image. In this framework, the 2D image is modeled as a random field whose posterior probability conditioned on the observed image is represented by the product of the likelihood of the observed data with the prior based on the local spatial structure of the underlying image. Since the truncation artifact appears in only one of the two spatial directions, the use of conventional piecewise-constant constraints may degrade soft edge regions in the other direction, which are less affected by the truncation artifact. Here, we consider more elaborate forms of constraints than the conventional piecewise-smoothness constraints, which can capture actual spatial information about the MR images. In order to reduce the computational cost of optimizing non-convex objective functions, we use a deterministic annealing method. Our experimental results indicate that the proposed method not only reduces the truncation artifact, but also improves tissue regularity and boundary definition without degrading soft edge regions.
Maximum a posteriori (MAP) approaches in the context of a Bayesian framework have played an important role in SPECT reconstruction. The major advantages of these approaches include not only the capability of modeling the character of the data in a natural way but also the allowance of the incorporation of a priori information. Here, we show that a simple modification of the conventional smoothing prior, such as the membrane prior, to one less sensitive to variations in first spatial derivatives - the thin plate (TP) prior - yields improved reconstructions in the sense of low bias at little change in variance. Although nonquadratic priors, such as the weak membrane and the weak plate, can exhibit good performance, they suffer difficulties in optimization and hyperparameter estimation. On the other hand, the thin plate, which is a quadratic prior, leads to easier optimization and hyperparameter estimation. In this work, we evaluate and compare the quantitative performance of the membrane (MM), TP, and FBP algorithms in an ensemble sense to validate the advantages of the thin plate model. We also observe and characterize the behavior of the associated hyperparameters of the prior distributions in a systematic way. To incorporate our new prior in a MAP approach, we model the prior as a Gibbs distribution and embed the optimization within a generalized expectation-maximization algorithm. For optimization of the corresponding M-step objective function, we use a version of iterated conditional modes. We show that the use of second derivatives yields robustness in both bias and variance by demonstrating that TP leads to very low bias error over a large range of the smoothing parameter, while keeping a reasonable variance.
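The two quadratic roughness measures being compared can be written down compactly; the NumPy sketch below contrasts a membrane (first-difference) energy with a thin-plate (second-difference) energy using standard finite-difference stencils, which are stated here as illustrative choices rather than the exact discretization used in this work.

```python
# Small NumPy sketch contrasting the membrane and thin plate quadratic
# penalties; the discrete stencils are the usual finite-difference choices.
import numpy as np

def membrane_energy(x):
    """Quadratic penalty on first differences."""
    return (np.diff(x, axis=0) ** 2).sum() + (np.diff(x, axis=1) ** 2).sum()

def thin_plate_energy(x):
    """Quadratic penalty on second differences, including the cross term:
    x_rr^2 + 2 * x_rc^2 + x_cc^2 summed over the grid."""
    x_rr = np.diff(x, n=2, axis=0)
    x_cc = np.diff(x, n=2, axis=1)
    x_rc = np.diff(np.diff(x, axis=0), axis=1)
    return (x_rr ** 2).sum() + 2.0 * (x_rc ** 2).sum() + (x_cc ** 2).sum()
```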