Based on the X-ray physics in computed tomography (CT) imaging, the linear attenuation coefficient (LAC) of each human tissue is described as a function of the X-ray photon energy. Different tissue types (e.g., muscle, fat, bone, and lung tissue) have distinct energy responses, which carry additional tissue-contrast information along the energy axis; we call this the tissue-energy response (TER). In this study, we propose to use TER to generate virtual monoenergetic images (VMIs) from conventional CT for computer-aided diagnosis (CADx) of lesions. Specifically, for a conventional CT image, the tissue fractions of each pixel can be identified from the TER curves at the effective energy of the selected tube voltage. A series of VMIs can then be generated by multiplying the tissue fractions by the corresponding TER values. Moreover, a machine learning (ML) model based on a data-driven deep learning (DL) convolutional neural network (CNN) is developed to exploit the energy-enhanced tissue material features for differentiating malignant from benign lesions. Experimental results on three sets of pathologically proven lesion datasets demonstrate that DL-CADx models using the proposed method achieve better classification performance than the conventional CT-based CADx method.
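As a rough illustration of the idea, the following is a minimal sketch of TER-based VMI generation: per-pixel tissue fractions are read off the TER curves at the effective energy, then recombined at other energies. The TER values, the piecewise-linear two-tissue inversion, and all array shapes are assumptions for illustration, not the paper's calibrated curves or exact decomposition.

```python
import numpy as np

# Hypothetical TER lookup: linear attenuation coefficient (1/cm) of each
# tissue type as a function of photon energy (keV). Values are placeholders;
# real curves would come from NIST tables or calibration measurements.
ENERGIES = np.arange(40, 141, 10)                      # keV grid
TER = {                                                # tissue-energy response
    "fat":    np.linspace(0.21, 0.17, len(ENERGIES)),
    "muscle": np.linspace(0.27, 0.19, len(ENERGIES)),
    "bone":   np.linspace(0.60, 0.30, len(ENERGIES)),
}

def tissue_fractions(mu_img, e_eff):
    """Decompose each pixel's LAC at the effective energy e_eff into
    fractions of the two bracketing tissues (piecewise-linear inversion)."""
    mus = np.array([np.interp(e_eff, ENERGIES, c) for c in TER.values()])
    order = np.argsort(mus)                  # tissues sorted by LAC at e_eff
    fracs = np.zeros(mu_img.shape + (len(mus),))
    for lo, hi in zip(order[:-1], order[1:]):
        mask = (mu_img >= mus[lo]) & (mu_img <= mus[hi])
        w = (mu_img[mask] - mus[lo]) / (mus[hi] - mus[lo])
        fracs[mask, hi] = w
        fracs[mask, lo] = 1.0 - w
    return fracs

def virtual_monoenergetic(fracs, energy):
    """VMI at a given energy: sum of tissue fractions times their TER."""
    mus = np.array([np.interp(energy, ENERGIES, c) for c in TER.values()])
    return fracs @ mus

mu_ct = np.random.uniform(0.20, 0.50, (64, 64))        # stand-in CT slice (LAC)
fracs = tissue_fractions(mu_ct, e_eff=70.0)            # 70 keV effective energy
vmis = np.stack([virtual_monoenergetic(fracs, e) for e in ENERGIES])
```

The stack of VMIs along the energy axis is what would be fed to the DL-CADx classifier as energy-enhanced input channels.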
Photon-counting spectral CT (PCCT) can produce reconstructed attenuation maps in different energy channels, reflecting the energy-dependent properties of the scanned object. Due to the limited photon counts and the non-ideal detector response of each energy channel, the reconstructed images usually contain substantial noise. With the development of deep learning (DL) techniques, various DL-based models have been proposed for noise reduction. However, most of these models require clean data as training labels, which are not always available in medical imaging. Inspired by the similarities among the reconstructed images of the individual channels, we propose a self-supervised-learning-based PCCT image enhancement framework via multi-spectral channels (S2MS). In the S2MS framework, both the inputs and the training labels are noisy images. Specifically, the image of one channel serves as the label, while the images of the remaining channels together with the channel-sum image serve as the network input; this fully uses the spectral information without extra cost. Simulation results based on the AAPM Low-Dose CT Challenge database show that the proposed S2MS model suppresses noise and preserves details more effectively than traditional DL models, and thus has the potential to improve PCCT image quality in clinical applications.
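The training scheme can be sketched as follows, assuming a toy CNN, four energy bins, and an MSE loss; the actual S2MS architecture and hyperparameters are not specified here, so everything below is illustrative.

```python
import torch
import torch.nn as nn

# Minimal sketch of the S2MS training scheme: for each energy channel k,
# the noisy images of the remaining channels plus the channel-sum image
# form the network input, and the noisy channel-k image itself is the
# training label (no clean data needed).
n_channels = 4                                  # number of PCCT energy bins

net = nn.Sequential(                            # toy denoiser (stand-in CNN)
    nn.Conv2d(n_channels, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

def s2ms_step(noisy):                           # noisy: (B, n_channels, H, W)
    loss_total = 0.0
    channel_sum = noisy.sum(dim=1, keepdim=True)
    for k in range(n_channels):
        others = torch.cat([noisy[:, :k], noisy[:, k + 1:]], dim=1)
        inputs = torch.cat([others, channel_sum], dim=1)  # (B, n_channels, H, W)
        target = noisy[:, k:k + 1]              # noisy label for channel k
        loss = nn.functional.mse_loss(net(inputs), target)
        opt.zero_grad(); loss.backward(); opt.step()
        loss_total += loss.item()
    return loss_total / n_channels

batch = torch.rand(2, n_channels, 64, 64)       # stand-in noisy reconstructions
print(s2ms_step(batch))
```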
Based on well-established X-ray physics in computed tomography (CT) imaging, the spectral responses of the materials contained in lesions differ, which provides richer contrast information across energy bins. Hence, obtaining a material decomposition of the different tissue types and exploiting its spectral information for lesion diagnosis is extremely valuable. The lungs are housed within the torso and consist of three natural materials: soft tissue, bone, and lung tissue. To benefit lung nodule differentiation, this study proposes to use lung tissue as a basis material alongside soft tissue and bone. This set of basis materials yields a more accurate composition analysis of lung nodules and benefits the subsequent differentiation. Moreover, a corresponding machine learning (ML)-based computer-aided diagnosis framework for lung nodule classification is proposed and used for evaluation. Experimental results show the advantages of virtual monoenergetic images (VMIs) generated with the lung tissue basis material over VMIs generated without it and over conventional CT images in differentiating malignant from benign lung nodules. A gain of 9.63% in the area under the receiver operating characteristic curve (AUC) indicates that the energy-enhanced tissue features derived from lung tissue have great potential to improve lung nodule diagnosis.
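A minimal sketch of a three-material decomposition with lung tissue as a basis material is given below. The basis LAC values, the two measurement energies, and the volume-conservation constraint are assumptions chosen for illustration; the paper's calibration and solver may differ.

```python
import numpy as np

# Illustrative three-material decomposition: soft tissue, bone, and lung
# tissue as basis materials. Basis LAC values (1/cm) at the measurement
# energies are placeholders, not calibrated data.
energies = np.array([60.0, 100.0])              # two effective energies (keV)
basis = np.array([                              # rows: materials, cols: energies
    [0.226, 0.171],                             # soft tissue
    [0.573, 0.298],                             # bone
    [0.060, 0.045],                             # lung tissue (air-filled)
])

def decompose(mu_low, mu_high):
    """Solve per-pixel fractions f = (soft, bone, lung) from LAC images at
    two energies, with the volume-conservation constraint sum(f) = 1."""
    h, w = mu_low.shape
    A = np.vstack([basis.T, np.ones((1, 3))])   # 2 energy rows + constraint row
    fracs = np.zeros((h, w, 3))
    for i in range(h):
        for j in range(w):
            b = np.array([mu_low[i, j], mu_high[i, j], 1.0])
            f, *_ = np.linalg.lstsq(A, b, rcond=None)
            fracs[i, j] = np.clip(f, 0.0, 1.0)  # enforce physical range
    return fracs

mu_low = np.random.uniform(0.05, 0.6, (32, 32))    # stand-in LAC maps
mu_high = np.random.uniform(0.04, 0.3, (32, 32))
fracs = decompose(mu_low, mu_high)
vmi_80kev = fracs @ np.array([0.20, 0.42, 0.052])  # VMI from basis LACs at 80 keV
```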
The tissue-specific MRF-type texture prior (MRFt) proposed in our previous work has been demonstrated to be advantageous in various clinical tasks. However, the MRFt model requires a previous full-dose CT (FdCT) scan of the same patient from which to extract texture information for low-dose CT (LdCT) reconstruction, a requirement that may not be met in practice. To alleviate this limitation, we propose to build an MRFt generator by internalizing a database of paired FdCT and LdCT scans using a conditional encoder-decoder model, denoted MRFtG-ConED. Because this generation model depends only on physiological features, it is robust even for ultra-low-dose CT scans (i.e., dosage < 10 mAs). When the dosage is not extremely low (i.e., dosage > 10 mAs), texture information from LdCT images reconstructed by filtered back-projection (FBP) can also be used to provide additional information.
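To make the structure concrete, here is a minimal sketch of a conditional encoder-decoder generator in the spirit of MRFtG-ConED. The feature dimensions, layer sizes, 7x7 MRF neighborhood, and the simple blending with FBP-derived texture are all assumptions; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

# Sketch: an encoder-decoder maps a conditioning vector of physiological
# features (e.g., age, BMI, tissue label) to MRF texture-prior coefficients.
class MRFtGenerator(nn.Module):
    def __init__(self, n_cond=8, n_coef=49, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(            # condition -> latent code
            nn.Linear(n_cond, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # latent code -> 7x7 MRF weights
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_coef),
        )

    def forward(self, cond, ldct_texture=None):
        coef = self.decoder(self.encoder(cond))
        if ldct_texture is not None:             # above ~10 mAs, blend in FBP-
            coef = 0.5 * coef + 0.5 * ldct_texture  # derived texture (assumed rule)
        return coef

gen = MRFtGenerator()
cond = torch.rand(1, 8)                          # stand-in physiological features
mrft = gen(cond).reshape(7, 7)                   # MRF neighborhood weights
```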
There are growing concerns about the effects of X-ray radiation, whose dose can be decreased by reducing the tube current. However, doing so degrades the image due to quantum noise. To alleviate this problem, multiple methods have been explored, both during reconstruction and in post-processing. Recently, the denoising auto-encoder (DAE), which can generate clean images from corrupted input, has drawn much attention. Inspired by the idea of the DAE, the noisy projections of a low-dose acquisition can be regarded as corrupted images. In this paper, we propose a projection-domain denoising method. First, the DAE is trained on simulated noisy data paired with the original data. The trained DAE is then used to correct the noisy projections, and the denoised image is obtained by statistical iterative reconstruction. With the DAE applied in the projection domain, the reconstructions show clearer soft-tissue details and achieve a higher structural similarity index (SSIM) than other image-domain denoising methods.
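A compact sketch of this training setup follows, assuming a toy convolutional DAE and a Poisson noise model on the transmitted counts; the architecture, incident photon count, and noise simulation are illustrative choices, not the paper's exact ones.

```python
import torch
import torch.nn as nn

# Projection-domain DAE sketch: trained on simulated noisy sinograms paired
# with the original sinograms, then applied to measured low-dose projections
# before statistical iterative reconstruction.
dae = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # encoder
    nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),  # bottleneck
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # decoder
    nn.Conv2d(32, 1, 3, padding=1),
)
opt = torch.optim.Adam(dae.parameters(), lr=1e-3)

def train_step(clean_sino):
    # Simulated low-dose corruption: Poisson noise on transmitted counts.
    I0 = 1e4                                      # incident photons (assumed)
    counts = torch.poisson(I0 * torch.exp(-clean_sino))
    noisy_sino = -torch.log(torch.clamp(counts, min=1.0) / I0)
    loss = nn.functional.mse_loss(dae(noisy_sino), clean_sino)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

clean = torch.rand(4, 1, 64, 64) * 3.0            # stand-in line integrals
for _ in range(3):
    print(train_step(clean))
# At inference: corrected = dae(measured_low_dose_sinogram), then feed the
# corrected projections into statistical iterative reconstruction.
```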
Dual-energy cone-beam computed tomography (DE-CBCT) can provide more accurate material characterization than conventional CT by taking advantage of two sets of projections acquired at high and low energies. X-ray scatter, however, leads to erroneous values in the DE-CBCT reconstructed images, and the reconstructed dual-energy images are extremely sensitive to noise. Iterative reconstruction methods using regularization can suppress noise and hence improve image quality. In this paper, we develop an algorithmic scatter correction based on a physical model and statistical iterative reconstruction for DE-CBCT. Under the assumptions that the attenuation coefficients of soft tissues are relatively stable and uniform and that the scatter component is dominated by low-frequency signal, the scatter components are calculated while the reconstructed images are updated in each iteration. Finally, the CBCT image is reconstructed from the scatter-corrected projections using a statistical iterative reconstruction algorithm. Experiments show that the proposed method effectively removes the artifacts caused by X-ray scatter and improves the CT value accuracy of the reconstructed images.
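The alternating structure of such a scheme can be sketched generically as below: scatter is treated as a smooth additive component and re-estimated from the low-frequency residual in every iteration. The Gaussian low-pass, the identity "projector" placeholders, and all sizes are assumptions standing in for a real forward model and statistical update.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_scatter(measured, primary_est, sigma=15.0):
    """Low-frequency residual as the scatter estimate (non-negative)."""
    residual = measured - primary_est
    return np.clip(gaussian_filter(residual, sigma), 0.0, None)

def reconstruct_with_scatter_correction(measured, forward_project,
                                        sirt_update, n_iter=10):
    image = np.zeros((128, 128))                  # initial image (assumed size)
    for _ in range(n_iter):
        primary_est = forward_project(image)      # scatter-free prediction
        scatter = estimate_scatter(measured, primary_est)
        corrected = measured - scatter            # remove scatter, keep primary
        image = sirt_update(image, corrected)     # one statistical update
    return image

# Trivial placeholders so the sketch runs end-to-end (identity "projector").
fp = lambda img: img
upd = lambda img, proj: img + 0.5 * (proj - fp(img))
meas = np.random.rand(128, 128) + gaussian_filter(np.random.rand(128, 128), 20)
rec = reconstruct_with_scatter_correction(meas, fp, upd)
```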
The X-ray computed tomography (CT) scanner has been extensively used in medical diagnosis. How to reduce the radiation dose while maintaining high reconstruction quality has become a major concern in the CT field. In this paper, we propose a statistical iterative reconstruction framework based on structure tensor total variation (STV) regularization for low-dose CT imaging. An accelerated proximal forward-backward splitting (APFBS) algorithm is developed to optimize the associated cost function. Experiments on two physical phantoms demonstrate that our proposed algorithm outperforms existing algorithms such as statistical iterative reconstruction with a total variation regularizer and filtered back-projection (FBP).
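For orientation, here is a generic sketch of accelerated proximal forward-backward splitting for min_x f(x) + g(x): a gradient step on the smooth data fidelity f followed by the proximal map of the regularizer g, with Nesterov-style momentum. The quadratic fidelity and the soft-threshold prox below are simple placeholders, not the paper's weighted least-squares term or the structure tensor total variation prox.

```python
import numpy as np

def apfbs(grad_f, prox_g, x0, step, n_iter=100):
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox_g(y - step * grad_f(y), step)    # forward-backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x) # momentum extrapolation
        x, t = x_new, t_new
    return x

# Toy instance: least-squares fidelity 0.5 * ||Ax - b||^2 with an l1 prox.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ (rng.standard_normal(100) * (rng.random(100) < 0.1))
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - 0.1 * s, 0.0)
x_hat = apfbs(grad_f, prox_g, np.zeros(100), step=1.0 / np.linalg.norm(A, 2) ** 2)
```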
The high utility and wide applicability of X-ray imaging have led to a rapidly increasing number of CT scans over the past years and, at the same time, elevated public concern about the potential risk of X-ray radiation to patients. Hence, a hot topic is how to minimize the X-ray dose while maintaining image quality. Low-dose CT strategies include modulation of the X-ray flux and minimization of the dataset size. However, these methods produce noisy and insufficient projection data, which presents a great challenge to image reconstruction. Our team has been working to combine statistical iterative methods with advanced image processing techniques, especially dictionary learning, and has produced excellent preliminary results. In this paper, we report recent progress in dictionary-learning-based low-dose CT reconstruction and discuss the selection of the regularization parameters that are crucial for the algorithmic optimization. The key idea is to use a "balancing principle" based on a model function to choose the regularization parameters during the iterative process, and to determine a weight factor empirically to address the noise level in the projection domain. Numerical and experimental results demonstrate the merits of our proposed reconstruction approach.
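One common realization of a balancing-principle update is sketched below: the regularization parameter is re-chosen during the iterations so that the data-fidelity and regularization terms stay in a fixed ratio. The fixed-point form, the ratio gamma, and the toy denoising problem are assumptions for illustration; the paper's exact model function may differ.

```python
import numpy as np

def balance_lambda(fidelity_val, regularizer_val, gamma=1.0, eps=1e-12):
    """Choose lambda so that lambda * R(x) ~= gamma * F(x)."""
    return gamma * fidelity_val / (regularizer_val + eps)

def reconstruct(update_x, fidelity, regularizer, x0, n_iter=20, gamma=1.0):
    x, lam = x0.copy(), 1.0
    for _ in range(n_iter):
        x = update_x(x, lam)                        # one reconstruction update
        lam = balance_lambda(fidelity(x), regularizer(x), gamma)
    return x, lam

# Toy stand-in so the sketch runs: 1-D denoising with a quadratic penalty,
# where argmin 0.5*||x - y||^2 + lam*0.5*||x||^2 has the closed form y/(1+lam).
y = np.sin(np.linspace(0, 6, 200)) \
    + 0.3 * np.random.default_rng(1).standard_normal(200)
update_x = lambda x, lam: y / (1.0 + lam)
fidelity = lambda x: 0.5 * np.sum((x - y) ** 2)
regularizer = lambda x: 0.5 * np.sum(x ** 2)
x_hat, lam = reconstruct(update_x, fidelity, regularizer, np.zeros(200))
```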
Statistical CT reconstruction using the penalized weighted least-squares (PWLS) criterion can improve image quality in low-dose CT, and a suitable design of the regularization term benefits it greatly. Recently, sparse representation based on dictionary learning has been used as the regularization term, resulting in high-quality reconstructions. In this paper, we incorporate a multiscale dictionary into statistical CT reconstruction, which preserves more details than reconstruction based on a single-scale dictionary. Furthermore, we exploit reweighted l1-norm minimization for sparse coding, which performs better than l1-norm minimization in locating the sparse solution of underdetermined linear systems of equations. To mitigate the time-consuming computation of the gradient of the regularization term, we adopt the so-called double-surrogates method to accelerate ordered-subsets image reconstruction. Experiments show that combining a multiscale dictionary with reweighted l1-norm minimization yields reconstructions superior to those based on a single-scale dictionary and l1-norm minimization.
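The reweighted l1 sparse-coding step can be sketched as follows, in the spirit of the Candes-Wakin-Boyd reweighting scheme: each outer pass solves a weighted l1 problem (here via ISTA with weighted soft-thresholding) and then sets the weights to w_i = 1 / (|a_i| + eps). The random dictionary, step sizes, and iteration counts are stand-ins; the paper pairs this with a learned multiscale dictionary.

```python
import numpy as np

def reweighted_l1_coding(D, y, lam=0.05, n_outer=4, n_inner=100, eps=0.1):
    m, k = D.shape
    a = np.zeros(k)
    w = np.ones(k)                                   # initial (plain l1) weights
    step = 1.0 / np.linalg.norm(D, 2) ** 2           # ISTA step size
    for _ in range(n_outer):
        for _ in range(n_inner):                     # weighted-l1 inner solve
            g = D.T @ (D @ a - y)                    # gradient of 0.5*||Da - y||^2
            z = a - step * g
            a = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
        w = 1.0 / (np.abs(a) + eps)                  # reweight: small coefficients
    return a                                         # are penalized more heavily

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
a_true = np.zeros(256); a_true[rng.choice(256, 8, replace=False)] = 1.0
y = D @ a_true                                       # synthetic sparse signal
a_hat = reweighted_l1_coding(D, y)
```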