Dual-source CT can provide high-temporal-resolution single-energy imaging, but this is sacrificed in dual-energy mode (DS-DE). The current study evaluates a dual-source photon-counting-detector (DS-PCD) CT capable of high-temporal-resolution (66 ms) dual-energy imaging. A rod simulating coronary artery motion was scanned on both DS-DE and DS-PCD CT scanners, and image quality was evaluated. Spatial registration between the high- and low-energy images—quantified by the Dice coefficient—was better with DS-PCD than with DS-DE. Furthermore, the rod had a more circular appearance and its diameter was more accurate in iodine maps generated by DS-PCD, whereas DS-DE suffered from motion artifacts.
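The Dice coefficient used above to quantify spatial registration can be sketched as follows. This is a minimal illustration, not the study's actual analysis pipeline; the mask geometry (a circular rod cross-section, shifted by 2 pixels to mimic misregistration between energy images) is hypothetical.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return float(2.0 * intersection / (mask_a.sum() + mask_b.sum()))

# Hypothetical example: circular rod masks from high- and low-energy images,
# with the low-energy mask offset by 2 pixels to simulate misregistration.
yy, xx = np.mgrid[:64, :64]
high_kv = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
low_kv = (xx - 34) ** 2 + (yy - 32) ** 2 < 10 ** 2

dice = dice_coefficient(high_kv, low_kv)
```

A Dice value of 1.0 indicates perfect overlap between the two energy images; the 2-pixel shift here yields a value below 1, as motion-induced misregistration would.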
Motion artifact is a major challenge in cardiac CT that hampers accurate delineation of key anatomic features (e.g., coronary lumen) and pathological features (e.g., stenosis). Conventional motion correction techniques are limited in patients with high or irregular heart rates, due to simplified modeling of CT systems and cardiac motion. Emerging deep-learning-based cardiac motion correction techniques have demonstrated the potential for further quality improvement. Yet many methods require CT projection data or advanced motion simulation tools that are not readily available. We aim to develop an image-domain motion-correction method using a convolutional neural network (CNN) integrated with customized attention and spatial transformer techniques. Forty cardiac CT exams acquired from a clinical dual-source CT system were retrospectively collected to generate training (n=26) and testing (n=14) sets. Dual-source data uniquely allow image reconstruction with different temporal resolutions from the same patient scan. Slow-temporal-resolution (140 ms; equivalent to a single-source CT (SSCT) half scan) and fast-temporal-resolution (75 ms; dual-source) images were reconstructed to generate paired samples of motion-corrupted and reference images. Combinations of two training/inference strategies and three CNNs were evaluated: strategy #1 – whole-heart images in training/inference; strategy #2 – vessel patches in training/inference; CNN #1 – attention only; CNN #2 – spatial transformer (STN) only; CNN #3 – attention and STN in synergy. Testing data showed that CNN #3 with strategy #2 provided relatively better performance: improving vessel delineation, increasing the structural similarity index from 0.85 to 0.91, and reducing the mean CT number error of the lumen by 71.0%. Our method could improve image quality in cardiac exams with SSCT.
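The two evaluation metrics reported above can be sketched in a simplified form. The global (single-window) SSIM formula below is a simplification of the usual locally windowed index, and the lumen-ROI error metric and all numeric inputs are hypothetical illustrations, not the study's implementation.

```python
import numpy as np

def global_ssim(x: np.ndarray, y: np.ndarray, data_range: float = 1.0) -> float:
    """Single-window structural similarity index (simplified global form)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

def mean_ct_error(img: np.ndarray, ref: np.ndarray, lumen: np.ndarray) -> float:
    """Mean absolute CT-number (HU) error within a lumen ROI mask."""
    return float(np.mean(np.abs(img[lumen] - ref[lumen])))

# Hypothetical demo: a reference image and a noisy, biased "motion-corrupted" copy.
rng = np.random.default_rng(0)
ref = rng.random((32, 32))
corrupted = ref + 0.1 + rng.normal(0, 0.05, ref.shape)
lumen_roi = np.zeros_like(ref, dtype=bool)
lumen_roi[12:20, 12:20] = True

ssim_val = global_ssim(ref, corrupted)
err_val = mean_ct_error(corrupted, ref, lumen_roi)
```

Comparing these metrics before and after correction (relative to the fast-temporal-resolution reference) is how improvements such as the reported 0.85 → 0.91 SSIM gain would be quantified.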
Our study addresses the in-vivo transcranial visualization of blood flow without removal of the skull. The strong attenuation, scattering, and distortion caused by the skull bones (or other tissues) make currently existing methods difficult to use. However, blood flow can still be detected using the ultrasonic speckle reflections from blood cells and platelets (or contrast agents) moving with the blood. The methodology specifically targets these random temporal changes, imaging the flowing region and eliminating static components. Analyzing this process over multiple exposures allows an image of the blood flow to be obtained, even in the presence of the skull's adverse acoustic effects. Experimental results show this methodology is able to produce both 2D and 3D images of the flowing region and suppresses regions of static acoustic sources as predicted. Images produced of the flowing region are found to agree with the physical size of the vessel analogues, and also to provide a qualitative measure of the amount of flow through the vessels.
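The core idea described above—isolating random temporal speckle changes from static echoes over multiple exposures—can be sketched as a temporal-fluctuation map. This is a conceptual illustration with simulated data, not the study's acquisition or processing chain; the frame geometry, vessel placement, and noise levels are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, h, w = 32, 64, 64

# Static echoes (skull, tissue) are identical across exposures.
frames = np.full((n_frames, h, w), 100.0)

# A horizontal "vessel": moving scatterers produce randomly fluctuating speckle.
flow = np.zeros((h, w), dtype=bool)
flow[28:36, :] = True
frames[:, flow] += rng.normal(0.0, 10.0, (n_frames, int(flow.sum())))

# Remove static components (temporal mean), then map temporal fluctuation.
residual = frames - frames.mean(axis=0)
flow_image = residual.std(axis=0)  # large only where speckle changes over time
```

Pixels dominated by static sources cancel to near zero in `flow_image`, while the flowing region remains bright, mirroring how the method eliminates static acoustic sources while imaging flow.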