Coronary computed tomography angiography (CCTA) is a major clinical imaging technique used to diagnose cardiovascular disease. To improve diagnostic accuracy, it is necessary to determine the optimal reconstruction phase, i.e., the phase with the best image quality and little or no motion artifact. The end-systolic and end-diastolic phases are the two phases most commonly used for image reconstruction, but they are not always optimal in terms of motion-related image quality. In this paper we propose a deep learning method to automatically select an optimal phase from a set of 2D axial phase image reconstructions. We select the right coronary artery (RCA) as the main vessel of interest for analyzing reconstruction quality. Two deep convolutional neural networks are developed to perform efficient heart region segmentation and RCA localization without manual, patient-specific image segmentation. We also demonstrate how to calculate image entropy as a figure of merit for evaluating RCA reconstruction quality. Results based on real clinical data with a heart rate of 74 beats per minute (bpm) show that the proposed algorithm can efficiently localize the RCA at arbitrary cardiac phases and accurately determine the optimal RCA reconstruction with minimal motion artifacts.
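As an illustration of the entropy figure of merit, the sketch below computes the Shannon entropy of the gray-level histogram inside an RCA region of interest; the HU window, bin count, and the minimum-entropy selection rule are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def image_entropy(roi_hu, hu_range=(-200.0, 800.0), n_bins=128):
    """Shannon entropy of the gray-level histogram inside an RCA ROI.

    A sharper, less motion-blurred reconstruction tends to concentrate the
    histogram, so a low-entropy phase is a candidate for the optimal phase
    (assumed selection rule; the paper defines its own criterion).
    """
    # Clip to a fixed HU window so the histogram support is comparable
    # across cardiac phases.
    roi = np.clip(roi_hu, *hu_range)
    hist, _ = np.histogram(roi, bins=n_bins, range=hu_range)
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

# Hypothetical usage: pick the phase whose RCA ROI has the lowest entropy.
# rois = {phase_percent: 2D array cropped around the localized RCA}
# best_phase = min(rois, key=lambda ph: image_entropy(rois[ph]))
```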
Wide-coverage detector CT and ultra-high-resolution (UHR) detector CT are two important features of current cardiac imaging modalities. The former enables the scanner to cover the whole heart in a single bed position; the latter provides superior resolution of fine structures such as stenoses, calcifications, implanted stents, and small vessel boundaries. However, no commercially available scanner currently combines both features. Herein, we propose to use existing UHR-CT data to train a super-resolution (SR) neural network and to apply the network on a wide-coverage detector CT system. The purpose of the network is to enhance the system's resolution and reduce noise while maintaining wide coverage, without additional hardware changes. Thirteen UHR-CT patient datasets and their simulated normal-resolution pairs were used to train a 3D residual-block U-Net. The modulation transfer function (MTF) measured from Catphan phantom scans showed that the proposed super-resolution aided deep learning-based reconstruction (SR-DLR) improved the MTF resolution by approximately 30% and 10% relative to filtered back-projection and model-based iterative reconstruction, respectively. In real patient cases, the SR-DLR images show better noise texture and enhanced spatial resolution, with improved depiction of the aortic valve, stents, calcifications, and soft tissue, compared with other reconstruction approaches.
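The abstract only names the network architecture; as a rough illustration, a 3D residual block of the kind typically used inside such a U-Net might look like the following PyTorch sketch, where the channel count and kernel size are assumptions rather than the trained model's configuration.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """A plain 3D residual block: two 3x3x3 convolutions plus a skip path."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # residual connection

# Toy usage on a (batch, channels, depth, height, width) volume patch:
x = torch.randn(1, 32, 16, 64, 64)
print(ResidualBlock3D(32)(x).shape)          # torch.Size([1, 32, 16, 64, 64])
```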
The temporal resolution of x-ray computed tomography (CT) is limited by the scanner rotation speed and the detector readout time. One way to reduce the detector readout time is to acquire fewer projections. However, reconstruction from sparse-view data can result in spatial resolution loss and reconstruction artifacts that may negatively affect clinical diagnosis. Therefore, improving the spatial resolution of sparse-view CT (SVCT) is of great practical value. In this study, we propose a deep learning-based approach for SVCT spatial resolution enhancement. The proposed method utilizes a densely connected convolutional neural network (CNN) that is further aided by a radial location map to recover the radially dependent blurring caused by the continuous rotation of the x-ray source. The proposed method was evaluated using sparse-view data synthesized from full-view projection data of real patients. The results showed that the proposed CNN was able to recover the resolution loss and improve image quality. Compared with a network using the same main structure but without the radial location map, the proposed method achieved better image quality in terms of mean absolute error and structural similarity.
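One simple way to realize a radial location map is to append, as an extra input channel, each pixel's normalized distance from the isocenter so the CNN can account for the radially dependent blur; the sketch below (with an assumed grid size and field of view) shows how such a channel could be built and stacked with the sparse-view reconstruction.

```python
import numpy as np

def radial_location_map(n_pixels: int, fov_mm: float) -> np.ndarray:
    """Per-pixel distance from the isocenter, normalized to [0, 1]."""
    half = fov_mm / 2.0
    coords = np.linspace(-half, half, n_pixels)
    xx, yy = np.meshgrid(coords, coords)
    r = np.sqrt(xx**2 + yy**2)
    return (r / r.max()).astype(np.float32)

# Stack the map as a second channel of the network input (assumed layout).
recon = np.zeros((512, 512), dtype=np.float32)   # sparse-view reconstruction
rmap = radial_location_map(512, fov_mm=500.0)
net_input = np.stack([recon, rmap], axis=0)      # shape (2, 512, 512)
```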
Dual-energy CT (DECT) has become increasingly popular in practice due to its unique capability for material differentiation. One typical implementation of DECT is the fast kV-switching acquisition technique, which rapidly alternates the x-ray tube voltage between two predetermined kV settings. However, the use of this technique may be limited in practice, as it typically requires sophisticated, costly hardware and lacks dose efficiency because tube current modulation is difficult. One possible solution is to reduce the frequency of voltage switching during acquisition. However, this alternative may compromise image quality, as it results in sparse measurements for both kV settings. In this paper, we propose a cascaded deep-learning reconstruction framework for sparse-view kV-switching DECT in which two deep convolutional neural networks are employed: one completes the missing views in the sinogram domain, and the other improves image quality in the image domain. We demonstrate the feasibility of the proposed method using sparse-view kV-switching data simulated from rotate-rotate DECT scans of phantoms and patients. Experimental results show that the proposed method applied to sparse-view kV-switching data achieves image quality and quantitative accuracy comparable to the traditional method applied to fully sampled rotate-rotate data.
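The evaluation data were simulated from fully sampled rotate-rotate scans; a minimal sketch of how a reduced-frequency kV-switching pattern could be emulated, by keeping alternating blocks of views for each kV and leaving the complementary views missing, is shown below (the block length and sinogram sizes are assumed, not the authors' settings).

```python
import numpy as np

def simulate_kv_switching(sino_low, sino_high, views_per_block=8):
    """Emulate slow kV switching: keep alternating view blocks per kV,
    zeroing the complementary (missing) views for each energy."""
    n_views = sino_low.shape[0]
    block_id = (np.arange(n_views) // views_per_block) % 2
    sparse_low = np.where(block_id[:, None] == 0, sino_low, 0.0)
    sparse_high = np.where(block_id[:, None] == 1, sino_high, 0.0)
    return sparse_low, sparse_high

# Toy sinograms: 984 views x 888 detector channels (placeholder sizes).
low = np.random.rand(984, 888)
high = np.random.rand(984, 888)
sparse_low, sparse_high = simulate_kv_switching(low, high)
```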
In conventional CT, it is difficult to achieve consistent, organ-specific noise and resolution with a single reconstruction kernel. Therefore, it is in principle necessary to reconstruct a single scan multiple times with different kernels in order to obtain diagnostic information for different anatomies. In this paper, we provide a deep learning solution that achieves an organ-specific noise-resolution balance with a single reconstruction. We propose image reconstruction using a deep convolutional neural network (DCNN) trained with a feature-aware reconstruction target, which integrates desirable features from multiple reconstructions, each providing an optimal noise-resolution tradeoff for one specific anatomy. The performance of the proposed method has been verified with actual clinical data. The results show that our method can outperform standard model-based iterative reconstruction (MBIR) by offering consistent noise and resolution properties across different organs using only a single image reconstruction.
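One way to picture the feature-aware reconstruction target is as a mask-weighted blend of several reconstructions, each tuned for one anatomy; the sketch below assumes soft organ masks that sum to one at every voxel, which is an illustrative construction rather than the method's exact target definition.

```python
import numpy as np

def feature_aware_target(recons, masks):
    """Blend per-organ-optimized reconstructions into one training target.

    recons : list of arrays, each reconstructed with a kernel tuned for one organ
    masks  : list of soft masks (same shape), assumed to sum to 1 voxel-wise
    """
    target = np.zeros_like(recons[0])
    for img, mask in zip(recons, masks):
        target += mask * img
    return target

# Toy example with two anatomies (e.g., a sharp and a smooth reconstruction).
sharp, smooth = np.random.rand(256, 256), np.random.rand(256, 256)
m = np.random.rand(256, 256)                 # soft mask for anatomy 1
target = feature_aware_target([sharp, smooth], [m, 1.0 - m])
```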
In conventional x-ray CT imaging, noise reduction is often applied to the raw data to remove noise and improve reconstruction quality. Adaptive data filtering is one such noise reduction method; it suppresses data noise using a local smoothing kernel. The design of the local kernel is important and can greatly affect reconstruction quality. In this report we develop a deep convolutional neural network that predicts the local kernel automatically and adaptively to the data statistics. The proposed network is trained to directly generate kernel parameters and hence allows fast data filtering. We compare our method with an existing filtering method. The results show that our deep learning-based method is more efficient and robust across a variety of scan conditions.
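As an illustration of kernel-based data filtering (not the authors' network), the sketch below applies a locally varying Gaussian smoothing kernel to a detector row; in the proposed method the per-sample kernel parameters would be predicted by the trained CNN, whereas here they are simply derived from the local count level.

```python
import numpy as np

def local_gaussian_filter(data, sigmas, half_width=3):
    """Smooth a 1D detector row with a per-sample Gaussian kernel width."""
    out = np.empty_like(data, dtype=np.float64)
    offsets = np.arange(-half_width, half_width + 1)
    for i in range(len(data)):
        idx = np.clip(i + offsets, 0, len(data) - 1)     # clamp at row edges
        w = np.exp(-0.5 * (offsets / max(sigmas[i], 1e-3)) ** 2)
        w /= w.sum()                                     # normalize the kernel
        out[i] = np.dot(w, data[idx])
    return out

row = np.random.poisson(50.0, size=888).astype(float)    # noisy detector row
sigmas = 1.0 + 2.0 / np.sqrt(np.maximum(row, 1.0))       # wider kernel at low counts
filtered = local_gaussian_filter(row, sigmas)
```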
KEYWORDS: Denoising, X-ray computed tomography, Computed tomography, Data modeling, Neural networks, Signal to noise ratio, Image denoising, Medical research
Reducing the radiation dose of computed tomography (CT), and thereby the potential risk to patients, is desirable in CT imaging. Deep neural networks have been proposed to reduce noise in low-dose CT images. However, the conventional way to train a neural network requires high-dose CT images as the reference. Recently, a noise-to-noise (N2N) training method was proposed, which showed that a neural network can be trained with only noisy images. In this work, we applied N2N training to low-dose CT denoising. Our results show that N2N training works in both the count domain and the image domain without using any high-dose reference images.
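The essence of N2N training is that the loss is computed between the network output for one noisy realization and a second, independent noisy realization of the same object, with no clean reference; a minimal PyTorch sketch of one training step is given below, where the single convolution stands in for a real denoising network and the random tensors stand in for paired low-dose data.

```python
import torch
import torch.nn as nn

denoiser = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # placeholder network
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def n2n_step(noisy_a, noisy_b):
    """One noise-to-noise update: the target is another noisy copy,
    not a high-dose (clean) image."""
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy_a), noisy_b)
    loss.backward()
    optimizer.step()
    return loss.item()

# Two independent noisy realizations of the same slice (toy data).
a = torch.randn(1, 1, 64, 64)
b = torch.randn(1, 1, 64, 64)
print(n2n_step(a, b))
```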
Reducing the radiation dose of computed tomography (CT) and thereby decreasing the potential risk to patients is desirable in CT imaging. However, lower dose often results in additional noise and artifacts in the reconstructed images that may negatively affect clinical diagnosis. Recently, many image-domain denoising approaches based on deep learning have been proposed and have obtained promising results. However, since reconstructed CT image values are not directly related to the noise level, estimating the noise level from CT images is not an easy task. In this work, we propose a count-domain denoising approach using a convolutional neural network (CNN) and a filter loss function. Compared with image-domain denoising methods, the proposed count-domain method can easily estimate the noise level in the projections from the measurement in each detector bin. Moreover, because each projection is ramp-filtered before being back-projected to the image domain, we propose a filter loss function in which the training loss is computed on the ramp-filtered projection rather than on the original projection. Since the filter loss is closely related to differences in the image domain, it further improves the quality of the reconstructed CT images.
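A rough sketch of the filter loss idea: apply a ramp filter along the detector-channel direction to both the predicted and the reference projections before taking the mean-squared error. The frequency-domain ramp used here is a standard discretization and an illustrative choice, not necessarily the authors' implementation.

```python
import numpy as np

def ramp_filter(projection):
    """Apply a frequency-domain ramp filter along the detector axis (last dim)."""
    n = projection.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))          # |f| response of the ramp filter
    spectrum = np.fft.fft(projection, axis=-1) * ramp
    return np.real(np.fft.ifft(spectrum, axis=-1))

def filter_loss(pred_projection, target_projection):
    """MSE between ramp-filtered projections, emphasizing errors that
    propagate most strongly into the reconstructed image."""
    diff = ramp_filter(pred_projection) - ramp_filter(target_projection)
    return float(np.mean(diff ** 2))

# Toy projections: 360 views x 729 detector channels (placeholder sizes).
pred = np.random.rand(360, 729)
target = np.random.rand(360, 729)
print(filter_loss(pred, target))
```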
Statistically based iterative approaches for image reconstruction have gained much attention in medical imaging. An accurate system matrix that defines the mapping from the image space to the data space is the key to high-resolution image reconstruction. However, an accurate system matrix is often associated with high computational cost and huge storage requirements. Here we present a method to address this problem using sparse matrix factorization and parallel computing on a graphics processing unit (GPU). We factor the accurate system matrix into three sparse matrices: a sinogram blurring matrix, a geometric projection matrix, and an image blurring matrix. The sinogram blurring matrix models the detector response. The geometric projection matrix is based on a simple line-integral model. The image blurring matrix compensates for the line-of-response (LOR) degradation caused by the simplified geometric projection matrix. The geometric projection matrix is precomputed, while the sinogram and image blurring matrices are estimated by minimizing the difference between the factored system matrix and the original system matrix. The resulting factored system matrix has far fewer nonzero elements than the original system matrix and thus substantially reduces the storage and computation cost. The smaller size also allows an efficient implementation of the forward and back projectors on GPUs, which have a limited amount of memory. Our simulation studies show that the proposed method can dramatically reduce the computation cost of high-resolution iterative image reconstruction. The proposed technique is applicable to image reconstruction for different imaging modalities, including x-ray CT, PET, and SPECT.
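As an illustration of the factorization described above, the sketch below applies the factored forward and back projectors as three sparse matrix-vector products using SciPy; the matrix sizes and sparsity patterns are placeholders, since the real blurring and geometric matrices are precomputed or estimated as described.

```python
import numpy as np
import scipy.sparse as sp

# Factored system matrix  P ~= B_sino @ G @ B_img, where
#   B_sino : sinogram blurring (detector response),
#   G      : geometric line-integral projection,
#   B_img  : image blurring (compensates the simplified geometry).
n_pix, n_bins = 128 * 128, 180 * 128                  # toy image and sinogram sizes
B_img = sp.identity(n_pix, format="csr")              # placeholder blurring matrices
B_sino = sp.identity(n_bins, format="csr")
G = sp.random(n_bins, n_pix, density=1e-3, format="csr")

def forward_project(x):
    """y = B_sino @ G @ B_img @ x, applied as three sparse products."""
    return B_sino @ (G @ (B_img @ x))

def back_project(y):
    """x = B_img^T @ G^T @ B_sino^T @ y (adjoint of the factored projector)."""
    return B_img.T @ (G.T @ (B_sino.T @ y))

x = np.random.rand(n_pix)
y = forward_project(x)
bp = back_project(y)
```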
A method is proposed for the 3D reconstruction of coronary networks from rotational projections that departs from motion-compensated approaches. It deals with multiple views extracted from a time-stamped image sequence through ECG gating. This statistics-based vessel reconstruction method relies on a new imaging model that considers both the effect of background tissues and an image representation using spherically symmetric basis functions, also called 'blobs'. These blobs have a closed analytical expression for the X-ray transform, which makes computing a cone-beam projection easier than with a voxel-based description. Bayesian maximum a posteriori (MAP) estimation is used with Poisson-distributed projection data instead of the Gaussian approximation often used in tomographic reconstruction. A heavy-tailed distribution is proposed as the image prior to account for the sparse nature of the object of interest. The optimization is performed by an expectation-maximization (EM)-like block-iterative algorithm, which offers fast convergence and a sound way to enforce the non-negativity constraint on vessel attenuation coefficients. Simulations are performed using a coronary tree model extracted from a multidetector CT scan, and a performance study is conducted. They show that, even with severe angular undersampling (e.g., 6 projections over 110 degrees) and without introducing a prior model of the object, significant results can be achieved.
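For background on the EM-like block-iterative optimization mentioned above, the following sketch shows the classical Poisson ML-EM multiplicative update, which naturally preserves non-negativity; the heavy-tailed prior of the MAP formulation is omitted here, and the dense placeholder matrix stands in for the blob-based cone-beam projector.

```python
import numpy as np

def mlem_update(x, A, y, n_iter=20, eps=1e-8):
    """Poisson ML-EM: multiplicative updates keep the attenuation
    coefficients non-negative (prior term omitted in this sketch).
    x : current non-negative image estimate, A : system matrix, y : counts."""
    sens = A.T @ np.ones(A.shape[0])             # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)       # measured / estimated projections
        x = x * (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Toy example: 100 projection bins, 64-pixel image (placeholder projector).
A = np.abs(np.random.rand(100, 64))
x_true = np.abs(np.random.rand(64)) * 50.0
y = np.random.poisson(A @ x_true).astype(float)
x_rec = mlem_update(np.ones(64), A, y)
```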