Purpose: Non-contrast CT (NCCT) is used for early evaluation of patients with suspected stroke, primarily to rule out hemorrhage. Artificial intelligence tools have recently been developed to assist in determining eligibility for reperfusion therapies by automating measurement of the Alberta Stroke Program Early CT Score (ASPECTS), a 10-point scale on which scores ≤ 7 versus > 7 mark a threshold for change in predicted functional outcome and in the risk of symptomatic hemorrhage, and of hypodense volume. The purpose of this work was to investigate the effects of CT reconstruction kernel and slice thickness on ASPECTS and hypodense volume. Methods: NCCT series image data from 87 patients imaged with a CT stroke protocol at our institution were reconstructed with 3 kernels (H10s-smooth, H40s-medium, H70h-sharp) and 2 slice thicknesses (1.5 mm and 5 mm) to create a reference condition (H40s/5 mm) and 5 non-reference conditions. Each reconstruction for each patient was analyzed with the Brainomix e-Stroke software (Brainomix, Oxford, England), which yields an ASPECTS value and a measure of total hypodense volume (mL). Results: An ASPECTS value was returned for 74 of 87 cases in the reference condition (13 failures). ASPECTS in non-reference conditions differed from the reference condition in 59 cases, 7 of which crossed the clinical threshold of 7 in 3 of the non-reference conditions. ANOVA was used to compare protocols, followed by Dunnett's post-hoc tests, with significance defined as p < 0.05. For ASPECTS, there was no significant effect of kernel (p = 0.91), a significant effect of slice thickness (p < 0.01), and no significant interaction between these factors (p = 0.91); post-hoc tests indicated no significant difference between ASPECTS estimated in the reference condition and any non-reference condition. For hypodense volume, there were significant effects of kernel (p < 0.01) and slice thickness (p < 0.01) but no significant interaction between these factors (p = 0.79). Post-hoc tests indicated significantly different hypodense volume measurements for H10s/1.5 mm (p = 0.03), H40s/1.5 mm (p < 0.01), and H70h/5 mm (p < 0.01); no significant difference was found for the H10s/5 mm condition (p = 0.96). Conclusion: Automated ASPECTS and hypodense volume measurements can be significantly affected by reconstruction kernel and slice thickness.
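As a rough illustration of the statistical comparison described above, the sketch below runs an omnibus one-way ANOVA and Dunnett's comparisons against the reference condition using SciPy (≥ 1.11 for `scipy.stats.dunnett`). This is a minimal sketch with hypothetical, unpaired per-condition values, not the study's full two-factor design:

```python
import numpy as np
from scipy import stats  # scipy >= 1.11 for stats.dunnett

rng = np.random.default_rng(0)
# Hypothetical hypodense volumes (mL), one array per reconstruction condition.
reference = rng.normal(20.0, 5.0, 74)               # H40s/5 mm (control)
non_reference = {
    "H10s/1.5mm": rng.normal(22.0, 5.0, 74),
    "H40s/1.5mm": rng.normal(23.0, 5.0, 74),
    "H70h/5mm":   rng.normal(24.0, 5.0, 74),
}

# Omnibus one-way ANOVA across all conditions.
f_stat, p_anova = stats.f_oneway(reference, *non_reference.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Dunnett's post-hoc test: each non-reference condition vs. the reference.
result = stats.dunnett(*non_reference.values(), control=reference)
for name, p in zip(non_reference, result.pvalue):
    flag = " (significant)" if p < 0.05 else ""
    print(f"{name}: p = {p:.3g}{flag}")
```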
This study is an initial investigation into methods to harmonize quantitative imaging (QI) feature values across CT scanners based on image quality metrics. To assess the impact of harmonization on QI features, we: (1) scanned an image quality assessment phantom on three scanners over a wide range of acquisition and reconstruction conditions; (2) from those scans, assessed image quality for each scanner at each acquisition and reconstruction condition; (3) from these assessments, identified a set of parameters for each scanner that yielded similar image quality values (“harmonized condition”); (4) scanned a second phantom with texture (i.e., local variations in attenuation) under the same set of conditions; and (5) extracted QI features and compared values between non-harmonized and harmonized image quality conditions. The quantitative image quality assessments provided contrast-to-noise ratio (CNR) and modulation transfer function frequency at 50% (MTF f50) values for each scanner at each condition used. A set of harmonized conditions was identified across the three CT scanners based on the similarity of CNR and MTF f50; for comparison, several non-harmonized condition sets were also identified. For the texture phantom, the standard deviation of the QI feature values (intensity mean and variance, GLCM autocorrelation and cluster tendency, GLDM high and low gray level emphasis) across the three CT systems decreased by between 72.8% and 81.1% from the non-harmonized to the harmonized condition sets (with the exception of the intensity mean, which showed little difference across scanners). These initial results suggest that selecting protocols that produce similar quantitative image quality metric values across different CT systems can reduce the variance of QI feature values across those systems.
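A minimal sketch of step (3), selecting one protocol per scanner whose (CNR, MTF f50) pair is most similar across systems. The scanner names, condition names, metric values, and the relative-spread criterion below are all illustrative assumptions, not the study's actual selection rule:

```python
import itertools
import numpy as np

# {scanner: {condition name: (CNR, MTF f50 in lp/cm)}}, hypothetical values.
metrics = {
    "scanner_A": {"A1": (4.1, 3.4), "A2": (5.0, 4.2), "A3": (6.2, 5.1)},
    "scanner_B": {"B1": (4.3, 3.5), "B2": (6.1, 4.8)},
    "scanner_C": {"C1": (4.0, 3.3), "C2": (5.5, 4.5)},
}
scanners = list(metrics)

def spread(conditions):
    """Summed relative range of each metric for one condition per scanner."""
    vals = np.array([metrics[s][c] for s, c in zip(scanners, conditions)])
    return float(np.sum(np.ptp(vals, axis=0) / np.mean(vals, axis=0)))

# Exhaustive search over one-condition-per-scanner combinations.
combos = itertools.product(*(metrics[s] for s in scanners))
best = min(combos, key=spread)
print("harmonized condition set:", dict(zip(scanners, best)))
```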
SF-CT-PD is a single-file derivative of the DICOM-CT-PD file format for CT projection data: it stores all projections within a single DICOM file, stores pixel data in detector-row-major order, and stores projection-specific parameters as ordered tables within the DICOM header. We compared the performance of SF-CT-PD against DICOM-CT-PD in read speed, disk usage, and network transfer. Cases were sampled from TCIA's “LDCT-and-Projection-data” dataset and encoded into both DICOM-CT-PD and SF-CT-PD representations. Read tests were conducted in four programming languages on hard-disk and solid-state drives, and an rsync-based network transfer analysis measured the Ethernet throughput for each format. Accuracy of the implementation was confirmed by analyzing reconstructions and checksums of transferred files for each format. SF-CT-PD was generally more performant in read operations and disk usage; network throughput was equivalent between the formats, with checksums confirming file integrity, and reconstruction accuracy was supported by difference-image agreement. SF-CT-PD represents a viable extension of DICOM-CT-PD where a single file is preferred.
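A hedged sketch of how a single-file projection set along these lines might be read with pydicom, assuming 16-bit unsigned pixels; the file name and the per-projection sequence keyword are illustrative stand-ins, not the published SF-CT-PD definition:

```python
import numpy as np
import pydicom

ds = pydicom.dcmread("scan.sfctpd.dcm")   # hypothetical file name
n_proj = int(ds.NumberOfFrames)           # one frame per projection view
rows, cols = int(ds.Rows), int(ds.Columns)

# One contiguous read of all projections (assuming 16-bit unsigned pixels),
# reshaped according to the detector-row-major layout.
proj = np.frombuffer(ds.PixelData, dtype=np.uint16).reshape(n_proj, rows, cols)

# Projection-specific parameters would come from the ordered header tables;
# the sequence keyword below is illustrative only:
# angles = [float(item.DetectorFocalCenterAngularPosition)
#           for item in ds.PerProjectionParameterSequence]
```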
We introduce a simple physics-based model of RA-950 emphysema scoring. Our model assumes that the lung is composed strictly of healthy tissue and emphysematous tissue, each described by a single attenuation value and contaminated with Gaussian noise. We show that, when combined with curve fitting, the model accurately captures the change in RA-950 score with respect to image noise and subject breath hold, and can accurately compute “true” RA-950 scores (relative to a clinical reference scan) in a cohort of 16 patients. To validate the model, noise realizations of 10 lung screening subjects and 6 COPD patients were created using various combinations of reconstruction parameters and simulated reduced-dose acquisitions. Least-squares curve fitting was used to determine the amount of emphysema and the attenuation value of healthy lung tissue for each subject under the model. The derived model provided accurate emphysema scores (difference between the model value and the clinical reference of < 0.02) in all cases except one; upon radiologist review of this case, the score derived from our model was deemed more appropriate than the RA-950 from the clinical reference scan. R² values were > 0.9 in all cases except one, and > 0.95 in 12 of 16 cases. The case with a low R² value was also reviewed by a radiologist and found to have substantial other disease that violated key model assumptions. The model appears to be robust to breath hold, image noise, and the amount of emphysema present, factors that have been found to confound other approaches (such as denoising) to improving emphysema scoring.
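The two-tissue assumption lends itself to a compact sketch: with emphysematous tissue at an assumed attenuation MU_E and healthy tissue at mu_h, each blurred by Gaussian noise of width sigma, the measured RA-950 is a mixture of two normal CDFs evaluated at -950 HU, and the emphysema fraction falls out of a least-squares fit. As sigma approaches zero the model tends to the noise-free ("true") score. All numbers below are hypothetical, and fixing MU_E is our simplifying assumption:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

MU_E = -1000.0  # assumed attenuation of emphysematous tissue (HU)

def ra950_model(sigma, f_e, mu_h):
    """Predicted RA-950 vs. image noise sigma, for emphysema fraction f_e
    and healthy-tissue attenuation mu_h (both to be fitted)."""
    p_emph = norm.cdf(-950.0, loc=MU_E, scale=sigma)
    p_healthy = norm.cdf(-950.0, loc=mu_h, scale=sigma)
    return f_e * p_emph + (1.0 - f_e) * p_healthy

# Hypothetical RA-950 measurements at several reconstructed noise levels.
sigma = np.array([20.0, 40.0, 60.0, 80.0])   # HU
ra950 = np.array([0.10, 0.10, 0.14, 0.19])

(f_e, mu_h), _ = curve_fit(ra950_model, sigma, ra950, p0=[0.05, -850.0])
print(f"'true' emphysema fraction ~ {f_e:.3f}, healthy lung ~ {mu_h:.0f} HU")
```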
Concerns over the risks of radiation dose from diagnostic CT have motivated the use of low dose CT (LdCT). However, due to the extremely low X-ray photon statistics in LdCT, the reconstruction problem is ill-posed and noise-contaminated. Conventional compressed sensing (CS) methods have been investigated to enhance the signal-to-noise ratio of LdCT, at the cost of image resolution and low-contrast object visibility. In this work, we adapted a flexible, iterative reconstruction framework, termed Plug-and-Play (PnP) alternating direction method of multipliers (ADMM), that incorporates state-of-the-art denoising algorithms into model-based image reconstruction. The framework combines a least-squares data fidelity term with a regularization term for image smoothness and is solved through ADMM; an off-the-shelf image denoiser, the Block-Matching 3D-transform shrinkage (BM3D) filter, is plugged in to replace the corresponding ADMM module. The PnP ADMM was evaluated on low dose scans of an ACR 464 phantom and two lung screening data sets, and compared with filtered back projection (FBP), total variation (TV) regularization, the BM3D post-processing method, and the BM3D regularization method. The proposed framework distinguished the line pairs at 9 lp/cm resolution on the ACR phantom and the fissure line in the left lung, resolving the same or better image detail than FBP reconstructions of scans acquired with up to 18 times more dose. Compared with conventional iterative reconstruction methods at comparable image noise, the proposed method is significantly better at recovering image details and improving low-contrast conspicuity.
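A schematic of a PnP ADMM loop of the kind described above, assuming a conjugate-gradient solver for the data-fidelity subproblem and the pip-installable `bm3d` package standing in for the regularizer's proximal step. The operators passed in (`At`, `cg_solve`) are placeholders, and the parameter choices are not the paper's:

```python
import numpy as np
import bm3d  # pip-installable BM3D implementation

def pnp_admm(y, At, cg_solve, sigma, rho=1.0, n_iter=30):
    """y: measured sinogram; At: backprojection operator; cg_solve(y, z, rho):
    CG solver for argmin_x ||Ax - y||^2 + rho*||x - z||^2 (placeholders)."""
    x = At(y)                 # crude initialization (FBP would also work)
    v = x.copy()
    u = np.zeros_like(x)
    for _ in range(n_iter):
        x = cg_solve(y, v - u, rho)                           # data-fidelity step
        v = bm3d.bm3d(x + u, sigma_psd=sigma / np.sqrt(rho))  # prox -> denoise
        u = u + x - v                                         # dual update
    return v
```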
Iterative coordinate descent (ICD) is an optimization strategy for iterative reconstruction that is sometimes considered incompatible with parallel compute architectures such as graphics processing units (GPUs). We present a series of modifications that render ICD compatible with GPUs and demonstrate the code on a diagnostic, helical CT dataset. Our reference code is an open-source package, FreeCT ICD, which requires several hours for convergence. Three modifications are used. First, as in the reference code, the reconstruction is performed on a rotating coordinate grid, enabling the use of a stored system matrix. Second, every other voxel in the z-direction is updated simultaneously, and the sinogram data is shuffled to coalesce memory access; this increases the parallelism available to the GPU. Third, NS voxels in the xy-plane are updated simultaneously. This introduces possible crosstalk between updated voxels, but because the interaction between non-adjacent voxels is small, small values of NS still converge effectively. We find that NS = 16 enables faster reconstruction via greater parallelism, while NS = 256 remains stable but offers no additional computational benefit. When tested on a pediatric dataset of size 736x16x14000 reconstructed to a matrix size of 512x512x128 on a single GPU, our implementation of ICD converges to within 10 HU RMS in less than 5 minutes. This suggests that ICD could be competitive with simultaneous-update algorithms on modern, parallel compute architectures.
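The update grouping (modifications two and three) can be sketched in a few lines. This is a schematic of the update ordering only, not the GPU kernel or the rotating stored-system-matrix machinery, and the per-voxel update is a placeholder:

```python
import numpy as np

NS = 16  # simultaneous in-plane updates (NS = 256 stable, no extra benefit)

def icd_pass(volume, update_voxel):
    """One pass over the volume; update_voxel(vol, z, y, x) is a placeholder
    for the per-voxel ICD update against the stored system matrix."""
    nz, ny, nx = volume.shape
    # Modification 2: interleave in z so that simultaneously updated voxels
    # never share a z-neighbor (even-indexed slices, then odd).
    for z_start in (0, 1):
        for z in range(z_start, nz, 2):
            # Modification 3: update NS in-plane voxels at a time; crosstalk
            # between non-adjacent voxels is small enough to still converge.
            order = np.random.permutation(ny * nx)
            for batch in np.array_split(order, max(1, len(order) // NS)):
                for idx in batch:   # concurrent on the GPU; sequential here
                    update_voxel(volume, z, idx // nx, idx % nx)
```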
Translation of radiomics into clinical practice requires confidence in its interpretations, which may be obtained by understanding and overcoming the limitations of current radiomic approaches; at present, there is a lack of standardization in radiomic feature extraction. In this study we examined several factors that are potential sources of inconsistency in characterizing lung nodules: 1) different choices of parameters and algorithms in feature calculation, 2) two CT image dose levels, and 3) different CT reconstruction algorithms (WFBP, denoised WFBP, and iterative). We investigated the effect of varying these factors on entropy texture features of lung nodules. Nineteen lung nodules from our lung cancer screening program were identified by a CAD tool, which also provided contours. Radiomic features were extracted by calculating 36 GLCM-based and 4 histogram-based entropy features, in addition to 2 intensity-based features. A robustness index was calculated across the different image acquisition parameters to characterize the reproducibility of each feature. Most GLCM-based and all histogram-based entropy features were robust across the two CT image dose levels. Denoising slightly improved the robustness of some entropy features at WFBP; iterative reconstruction improved robustness less often and introduced more variation in entropy feature values and their robustness. Across the different choices of parameters and algorithms, texture features showed a wide range of variation, as much as 75% for individual nodules. These results indicate the need for harmonization of feature calculations and identification of optimal parameters and algorithms in a radiomics study.
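To illustrate the parameter sensitivity at issue, the sketch below computes GLCM entropy for one ROI under several choices of gray-level quantization and offset distance using scikit-image; the ROI is a random stand-in, not study data:

```python
import numpy as np
from skimage.feature import graycomatrix

roi = np.random.randint(0, 256, (64, 64)).astype(np.uint8)  # stand-in ROI

for levels, distance in [(32, 1), (64, 1), (64, 2), (256, 1)]:
    q = (roi // (256 // levels)).astype(np.uint8)    # requantize gray levels
    glcm = graycomatrix(q, distances=[distance], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    print(f"levels={levels}, distance={distance}: entropy={entropy:.2f} bits")
```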
Quantitative imaging in lung cancer CT seeks to characterize nodules through quantitative features, usually extracted from a region of interest delineating the nodule. The segmentation, however, can vary with segmentation approach and image quality, which in turn affects the extracted feature values. In this study, we use a fully automated nodule segmentation method (avoiding reader-influenced inconsistencies) to explore the effects of varied dose levels and reconstruction parameters on segmentation.
Raw CT projection data from a low-dose screening patient cohort (N=59) were reconstructed at multiple simulated dose levels (100%, 50%, 25%, 10%), two slice thicknesses (1.0 mm, 0.6 mm), and a medium kernel. Fully automated nodule detection and segmentation was then applied, from which 12 nodules were selected. The Dice similarity coefficient (DSC) was used to assess the similarity of the segmentation ROIs of the same nodule across the different reconstruction and dose conditions.
At 1.0 mm slice thickness, dose levels of 25% and 50% yielded DSC values greater than 0.85 relative to 100% dose, with lower dose leading to a lower average and wider spread of DSC values. At 0.6 mm, the increased bias and wider spread of DSC values from lowering dose were more pronounced. The effect of dose reduction on DSC for CAD-segmented nodules was similar in magnitude to that of reducing the slice thickness from 1.0 mm to 0.6 mm. In conclusion, varying dose and slice thickness can produce very different segmentations because of noise and image quality. However, there is some stability in segmentation overlap: even at 1.0 mm slice thickness, an image at 25% of the low-dose scan's dose still yields segmentations similar to those from the full-dose scan.
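For reference, the overlap metric used above reduces to a few lines for boolean masks (a minimal sketch; the mask names are illustrative):

```python
import numpy as np

def dice(mask_a, mask_b):
    """DSC = 2|A intersect B| / (|A| + |B|) for two boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# e.g., dice(seg_at_100pct_dose, seg_at_25pct_dose) for the same nodule
```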
Lung cancer screening using low dose CT has been shown to reduce lung cancer related mortality and has been approved for widespread use in the US. These scans keep radiation doses low while maximizing the detection of suspicious lung lesions. Tube current modulation (TCM) is one technique used to optimize dose; however, limited work has been done to assess TCM's effect on detection tasks. In this work, the effect of TCM on detection is investigated throughout the lung using several different model observers (MOs). 131 lung nodules were simulated at 1 mm intervals in each lung of the XCAT phantom. A Sensation 64 TCM profile was generated for the XCAT phantom, and 2500 noise realizations were created using both TCM and a fixed tube current (TC). All nodules and noise realizations were reconstructed, for a total of 262 (left and right lung) nodule reconstructions and 10,000 XCAT lung reconstructions. Single-slice Hotelling (HO) and channelized Hotelling (CHO) observers, as well as a multi-slice CHO, were used to assess area under the curve (AUC) as a function of nodule location in both the fixed-TC and TCM cases. As expected with fixed TC, nodule detectability was lowest through the shoulders and leveled off below the mid-lung; with TCM, detectability was unexpectedly highest through the shoulders, dropping sharply near the mid-lung and then increasing into the abdomen. Trends were the same for all model observers. These results suggest that TCM could be further optimized for detection and that detectability maps present exciting new opportunities for TCM optimization on a patient-specific level.
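A compact sketch of a channelized Hotelling observer of the kind used above; the channel matrix (e.g., Gabor or Laguerre-Gauss channels), the signal/noise ROI stacks, and the Gaussian-statistics AUC conversion are assumed inputs and simplifications, not the paper's exact implementation:

```python
import numpy as np
from scipy.stats import norm

def cho_auc(signal_rois, noise_rois, channels):
    """signal_rois/noise_rois: (n, h, w) ROI stacks; channels: (h*w, n_ch)."""
    vs = signal_rois.reshape(len(signal_rois), -1) @ channels
    vn = noise_rois.reshape(len(noise_rois), -1) @ channels
    # Hotelling template from the mean difference and average covariance.
    delta = vs.mean(axis=0) - vn.mean(axis=0)
    k = 0.5 * (np.cov(vs.T) + np.cov(vn.T))
    w = np.linalg.solve(k, delta)
    # Detectability index and the corresponding AUC (Gaussian statistics).
    ts, tn = vs @ w, vn @ w
    d = (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))
    return norm.cdf(d / np.sqrt(2))
```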
Lung cancer screening CT is already performed at low dose. Many techniques exist to reduce the dose even further, but it is not clear how such techniques will affect nodule detectability. In this work, we used an in-house CAD algorithm to evaluate detectability. 90348 patients and their raw CT data files were drawn from the National Lung Screening Trial (NLST) database. All scans were acquired at ~2 mGy CTDIvol with fixed tube current, 1 mm slice thickness, and a B50 reconstruction kernel on a Sensation 64 scanner (Siemens Healthcare). We used the raw CT data to simulate two additional reduced-dose scans for each patient, corresponding to 1 mGy (50%) and 0.5 mGy (25%). Radiologists' findings on the NLST reader forms indicated 65 nodules in the cohort, which we subdivided based on LungRADS criteria. For larger, category 4 nodules, median sensitivities were 100% at all three dose levels, and mean sensitivity decreased with decreasing dose. For smaller nodules meeting the category 2 or 3 criteria, the dose dependence was less obvious. Overall, mean patient-level sensitivity varied from 38.5% at 100% dose to 40.4% at 50% dose, a difference of only 1.9%. However, the false-positive rate quadrupled from 1 per case at 100% dose to 4 per case at 25% dose. Dose reduction affected lung-nodule detectability differently depending on the LungRADS category, and the false-positive rate was highly sensitive to dose at sub-screening levels. Thus, care should be taken to adapt CAD to the very challenging noise characteristics at screening and sub-screening dose levels.
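The patient-level bookkeeping behind such figures can be sketched as below; `cases` (with `.nodules` and `.cad_marks[dose]` attributes) and the mark-to-nodule matcher `is_hit` are hypothetical stand-ins for the NLST reader-form data:

```python
def score_cad(cases, dose, is_hit):
    """Per-dose sensitivity over reference nodules and mean FPs per case.
    All inputs are illustrative placeholders, not the study's data model."""
    hits = fps = total = 0
    for case in cases:
        marks = case.cad_marks[dose]                  # CAD output at this dose
        total += len(case.nodules)
        hits += sum(any(is_hit(m, n) for m in marks) for n in case.nodules)
        fps += sum(not any(is_hit(m, n) for n in case.nodules) for m in marks)
    return hits / total, fps / len(cases)   # sensitivity, FP rate per case
```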