KEYWORDS: Data modeling, Colorimetry, Visualization, Visual process modeling, Spatial frequencies, Contrast sensitivity, Modulation, Calibration, Eye models, RGB color model
Inspired by the ModelFest and ColorFest data sets, a contrast sensitivity function (CSF) was measured for a wide range of adapting luminance levels. The measurements were motivated by the need for visual performance data covering natural viewing of static images over a broad luminance range, such as that produced by high dynamic range displays. Detection thresholds for sine gratings with a Gaussian envelope were measured along the achromatic axis (black to white), two chromatic axes (green to red and yellow-green to violet), and two mixed chromatic-achromatic axes (dark green to light pink, and dark yellow to light blue). The background luminance varied from 0.02 to 200 cd/m2, and the spatial frequency of the gratings from 0.125 to 16 cycles per degree. More than four observers participated in the experiments; each individually determined the detection threshold for each stimulus using at least 20 trials of the QUEST method. Compared with the popular CSF models, we observed a steeper drop in sensitivity at higher frequencies and significant differences in sensitivity in the luminance range between 0.02 and 2 cd/m2. Our measurements for the chromatic CSF show a significant drop in sensitivity with luminance, but little change in the shape of the CSF; the drop in sensitivity at high frequencies is significantly weaker than reported in other studies and assumed in most chromatic CSF models.
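The stimuli described above — sine gratings under a Gaussian envelope — are standard Gabor patches. As a minimal sketch, the function below generates such a patch with NumPy; all parameter values (patch size, frequency, envelope width, contrast) are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def gabor_patch(size=256, cpd=4.0, deg_per_image=2.0, sigma_deg=0.5, contrast=0.5):
    """Sine grating with a Gaussian envelope (Gabor patch).

    size          -- image width/height in pixels
    cpd           -- spatial frequency in cycles per degree
    deg_per_image -- visual angle subtended by the image, in degrees
    sigma_deg     -- std. dev. of the Gaussian envelope, in degrees
    contrast      -- Michelson contrast of the grating
    All parameter values here are illustrative, not those of the study.
    """
    # Coordinate grid in degrees of visual angle, centered on the patch.
    axis = (np.arange(size) / size - 0.5) * deg_per_image
    x, y = np.meshgrid(axis, axis)
    grating = np.cos(2 * np.pi * cpd * x)               # vertical sine grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma_deg**2))
    # Modulate around a mid-gray background of 0.5; output is in [0, 1].
    return 0.5 * (1 + contrast * grating * envelope)

patch = gabor_patch()
```

In a detection task, the threshold is the smallest `contrast` at which the observer can reliably tell this patch apart from the uniform background.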
Many visual difference predictors (VDPs) have used basic psychophysical data (such as ModelFest) to calibrate their algorithm parameters and to validate their performance. However, such basic data sets often do not contain a sufficient number and variety of stimuli to test the more complex components of a VDP. In this paper we calibrate the Visual Difference Predictor for High Dynamic Range images (HDR-VDP) using radiologists' experimental data for JPEG2000-compressed CT images, which contain complex structures. We then validate the HDR-VDP in predicting the presence of perceptible compression artifacts. 240 CT-scan images were encoded and decoded using JPEG2000 compression at four compression ratios (CRs). Five radiologists independently determined whether each image pair (original and compressed) was indistinguishable or distinguishable. A threshold CR for each image, at which 50% of the radiologists would detect compression artifacts, was estimated by fitting a psychometric function. The CT images compressed at the threshold CRs were used to calibrate the HDR-VDP parameters and to validate its prediction accuracy. Our results showed that the HDR-VDP calibrated to the CT image data gave much better predictions than the HDR-VDP calibrated to the basic psychophysical data (ModelFest plus contrast-masking data for sine gratings).
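The threshold-CR estimation step can be sketched as follows. The abstract does not specify the form of the psychometric function, so this sketch assumes a logistic function; the per-image reader data are likewise invented for illustration, and the fit uses a dependency-free grid search (a library optimizer such as `scipy.optimize.curve_fit` would also work).

```python
import numpy as np

def logistic(cr, cr50, slope):
    # Probability of detecting compression artifacts at compression ratio cr;
    # cr50 is the 50%-detection threshold CR.
    return 1.0 / (1.0 + np.exp(-slope * (cr - cr50)))

# Hypothetical data for one image: fraction of the five readers who found
# the compressed version distinguishable at each of four CRs.
crs = np.array([5.0, 10.0, 15.0, 20.0])
p_detect = np.array([0.0, 0.2, 0.8, 1.0])

# Brute-force least-squares fit over a parameter grid.
cr50_grid = np.linspace(5.0, 20.0, 301)
slope_grid = np.linspace(0.05, 2.0, 200)
sse, cr50, slope = min(
    (float(np.sum((logistic(crs, c, s) - p_detect) ** 2)), c, s)
    for c in cr50_grid
    for s in slope_grid
)
```

For this symmetric toy data the fitted `cr50` lands near 12.5; in the study, images compressed at each image's fitted threshold CR were the calibration set.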
KEYWORDS: Picture Archiving and Communication System, Computed tomography, 3D image reconstruction, Data storage, 3D image processing, Data archive systems, Scanners, 3D scanning, CT reconstruction, Sensors
Two image datasets (a thick-section dataset and a volumetric dataset) were typically reconstructed from each single CT projection data set. The volumetric dataset was stored in a mini-PACS with 271 gigabytes of online and 680 gigabytes of nearline storage and routed to radiologists' workstations, while the thick-section dataset was stored in the main PACS. Over a five-month sample period, 278 gigabytes of CT data (8,976 examinations) were stored in the main PACS, and 738 gigabytes of volumetric datasets (6,193 examinations) were stored in the mini-PACS. The volumetric datasets constituted 32.8% of the total data for all modalities (2.20 terabytes) in the main PACS and mini-PACS combined. At the end of this period, the volumetric datasets of 1,892 and 5,162 examinations were kept online and nearline, respectively. A mini-PACS offers an effective method of archiving every volumetric dataset and delivering it to radiologists.
Summation and axial slab reformation (ASR) of thin-section CT datasets are increasingly used to maintain productivity against the data explosion and to increase image quality. We hypothesized that summation or ASR can substitute for primary reconstruction (PR) performed directly from raw projection data. PR datasets (5-mm section thickness, 20% overlap) were reconstructed in 150 abdominal studies. Summation and ASR datasets with the same image positions and nominal section thickness were calculated from thin-section reconstruction images (2-mm section thickness, 50% overlap). The median root-mean-square error between PR and summation (9.55; 95% CI: 9.51, 9.59) was significantly greater than that between PR and ASR (7.12; 95% CI: 7.08, 7.17) (p < 0.0001). Three radiologists independently analyzed 2,000 pairs of PR and test images (PR [as control], summation, or ASR) to determine whether summation or ASR could be distinguished from PR. Multireader-multicase ROC analysis showed an Az value of 0.597 (95% CI: 0.552, 0.642) for discriminating PR from summation and 0.574 (95% CI: 0.529, 0.619) for discriminating PR from ASR; the difference between the two Az values was not significant (p = 0.41). Radiologists can distinguish the PR image from the summation or ASR image in abdominal studies, although this discrimination performance is only slightly better than random guessing. If PR is regarded as the reference standard, the image fidelity of ASR is higher than that of summation.
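The fidelity comparison above rests on the root-mean-square error between a reference reconstruction and a test reconstruction. A minimal sketch of that computation is below; the arrays stand in for real CT slices and their values are synthetic, chosen only so the example runs.

```python
import numpy as np

def rmse(reference, test):
    """Root-mean-square error between two images of equal shape,
    e.g. a PR slice versus the matching summation or ASR slice."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    return float(np.sqrt(np.mean((reference - test) ** 2)))

# Toy example: a synthetic "PR" slice and a "summation" slice that
# differs from it by zero-mean noise with std. dev. 9.5 (illustrative,
# echoing the order of magnitude of the reported median RMSE).
rng = np.random.default_rng(0)
pr = rng.normal(0.0, 100.0, size=(64, 64))
summation = pr + rng.normal(0.0, 9.5, size=pr.shape)
print(rmse(pr, summation))
```

In the study this value was computed per image pair, and the per-study medians with confidence intervals were then compared between the summation and ASR conditions.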