In this study, we first introduce a novel AI-based system (MOM-ClaSeg) for multiple abnormality/disease detection and diagnostic report generation on PA/AP CXR images, which was recently developed by applying an augmented Mask R-CNN deep learning framework and Decision Fusion Networks. We then evaluate the performance of the MOM-ClaSeg system in assisting radiologists in image interpretation and diagnostic report generation through a multi-reader-multi-case (MRMC) study. A total of 33,439 PA/AP CXR images were retrospectively collected from 15 hospitals and divided into an experimental group of 25,840 images processed by the MOM-ClaSeg system and a control group of 7,599 images without such processing. In this MRMC study, 6 junior radiologists (5-10 years of experience) first read these images and generated initial diagnostic reports with or without viewing MOM-ClaSeg-generated results. Next, the initial reports were reviewed by 2 senior radiologists (>15 years of experience) to generate final reports. Additionally, 3 consensus expert radiologists (>25 years of experience) reconciled potential differences between initial and final reports. Comparison results showed that with MOM-ClaSeg, the diagnostic sensitivity of junior radiologists increased significantly by 18.67% (from 70.76% to 89.43%, P<0.001), while specificity decreased by 3.36% (from 99.49% to 96.13%, P<0.001). The average reading/diagnostic time in the experimental group was reduced by 27.07% (P<0.001) with MOM-ClaSeg, with a particularly large reduction of 66.48% (P<0.001) on abnormal images, indicating that the MOM-ClaSeg system has potential for fast lung abnormality/disease triaging. This study demonstrates the feasibility of applying this first-of-its-kind AI-based system to assist radiologists in image interpretation and diagnostic report generation, a promising step toward improved diagnostic performance and productivity in future clinical practice.
To assess a Smart Imagery Framing and Truthing (SIFT) system for automatically labeling and annotating chest X-ray (CXR) images with multiple diseases, as an aid to radiologists interpreting multi-disease CXRs. The SIFT system was developed by integrating a convolutional neural network-based augmented Mask R-CNN and a multi-layer perceptron neural network. It was trained with images containing 307,415 ROIs representing 69 different abnormalities and 67,071 normal CXRs. SIFT automatically labels ROIs with a specific type of abnormality, annotates fine-grained boundaries, assigns confidence scores, and recommends other possible types of abnormality. An independent set of 178 CXRs containing 272 ROIs depicting five different abnormalities, including pulmonary tuberculosis, pulmonary nodule, pneumonia, COVID-19, and fibrogenesis, was used to evaluate the performance of three radiologists in a double-blinded study. Each radiologist first manually annotated each ROI without SIFT. Two weeks later, the radiologist annotated the same ROIs with SIFT assistance to generate the final results. Consistency, efficiency, and accuracy for radiologists with and without SIFT were evaluated. After using SIFT, radiologists accepted 93% of SIFT-annotated areas, and the variation across annotated areas was reduced by 28.23%. Inter-observer variation improved by 25.27% in terms of averaged IOU. The consensus true-positive rate increased by 5.00% (p=0.16), and the false-positive rate decreased by 27.70% (p<0.001). The radiologists' time to annotate these cases decreased by 42.30%. Performance in labeling abnormalities remained statistically the same. This independent observer study showed that SIFT is a promising step toward improving the consistency and efficiency of annotation, which is important for improving clinical X-ray diagnostic and monitoring efficiency.
Chest x-ray radiography (CXR) is widely used in screening and detecting lung diseases. However, reading CXR images is often difficult, resulting in diagnostic errors and inter-reader variability. To address this clinical challenge, a Multi-task, Optimal-recommendation, and Max-predictive Classification and Segmentation (MOM-ClaSeg) system is developed to detect and delineate different abnormal regions of interest (ROIs) on CXR images, make multiple recommendations of abnormalities sorted by the generated probability scores, and automatically generate diagnostic reports. MOM-ClaSeg consists of convolutional neural networks that generate a detection, a finer-grained segmentation, and a prediction score for each ROI based on an augmented Mask R-CNN framework, and multi-layer perceptron neural networks that fuse these results to generate the optimal recommendation for each detected ROI based on a decision fusion framework. A total of 310,333 adult CXR images, containing 67,071 normal and 243,262 abnormal images depicting 307,415 confirmed ROIs of 65 different abnormalities, were assembled to train MOM-ClaSeg. An independent set of 22,642 CXR images was assembled to test MOM-ClaSeg. Radiologists detected 6,646 ROIs depicting 43 different types of abnormalities on 4,068 CXR images. Compared with the radiologists' detection results, the MOM-ClaSeg system detected 6,009 true-positive ROIs and 6,379 false-positive ROIs, representing 90.3% sensitivity and 0.28 false-positive ROIs per image. For the eight common diseases, the computed areas under the ROC curves ranged from 0.880 to 0.988. Additionally, 70.4% of MOM-ClaSeg-detected abnormalities, along with the system-generated diagnostic reports, were directly accepted by radiologists. This study presents the first AI-based multi-task prediction system designed to detect different abnormalities and generate diagnostic reports to assist radiologists in accurately and/or efficiently detecting lung diseases.
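The two-stage design described above (a Mask R-CNN style detector followed by a decision-fusion multi-layer perceptron) can be pictured with a minimal Python sketch. It uses the stock torchvision Mask R-CNN rather than the authors' augmented version, and the fusion-feature layout and layer sizes are illustrative assumptions, not the published configuration.

```python
# Minimal sketch: a Mask R-CNN detector followed by an MLP decision-fusion head.
# Stock torchvision model (not the authors' augmented network); the fusion inputs
# and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models.detection import maskrcnn_resnet50_fpn

NUM_ABNORMALITIES = 65  # number of abnormality classes reported in the study

detector = maskrcnn_resnet50_fpn(weights=None, num_classes=NUM_ABNORMALITIES + 1)
detector.eval()

# Stage 2: an MLP that fuses per-ROI evidence (class-score vector + box geometry)
# into a final recommendation for each detected ROI.
fusion_mlp = nn.Sequential(
    nn.Linear(NUM_ABNORMALITIES + 4, 128),  # class scores + box coordinates
    nn.ReLU(),
    nn.Linear(128, NUM_ABNORMALITIES),      # fused recommendation per ROI
)

with torch.no_grad():
    cxr = torch.rand(3, 512, 512)                 # stand-in for a preprocessed CXR
    rois = detector([cxr])[0]                     # dict with boxes, labels, scores, masks
    for box, label, score in zip(rois["boxes"], rois["labels"], rois["scores"]):
        class_vec = torch.zeros(NUM_ABNORMALITIES)
        class_vec[label - 1] = score              # class evidence weighted by detection score
        fused = fusion_mlp(torch.cat([class_vec, box]))
        recommendation = fused.softmax(dim=0).argmax().item()
```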
Alzheimer’s Disease (AD) is a devastating neurodegenerative disease. Recent advances in tau-positron emission tomography (PET) imaging allow quantifying and mapping the regional distribution of one important hallmark of AD across the brain. There is a need to develop machine learning (ML) algorithms to interrogate the utility of this new imaging modality. While some recent studies have shown promise in using ML to differentiate AD patients from normal controls (NC) based on tau-PET images, there is limited work investigating whether tau-PET, with the help of ML, can facilitate predicting the risk of converting to AD while an individual is still at the early Mild Cognitive Impairment (MCI) stage. We developed an early AD risk predictor for subjects with MCI based on tau-PET using ML. Our ML algorithms achieved good accuracy in predicting the risk of conversion to AD for a given MCI subject. Important features contributing to the prediction are consistent with literature reports of tau-susceptible regions. This work demonstrates the feasibility of developing an early AD risk predictor for subjects with MCI based on tau-PET and ML.
Multi-modality images are often available for the diagnosis/prognosis of a disease, such as Alzheimer’s Disease (AD), but with different levels of accessibility and accuracy. MRI is used in the standard of care and thus has high accessibility to patients. On the other hand, imaging of pathologic hallmarks of AD, such as amyloid-PET and tau-PET, has low accessibility due to cost and other practical constraints, even though it is expected to provide higher diagnostic/prognostic accuracy than standard clinical MRI. We proposed Cross-Modality Transfer Learning (CMTL) for accurate diagnosis/prognosis based on a standard imaging modality with high accessibility (mod_HA), with a novel training strategy that uses not only the data of mod_HA but also knowledge transferred from a model based on an advanced imaging modality with low accessibility (mod_LA). We applied CMTL to predict conversion of individuals with Mild Cognitive Impairment (MCI) to AD using the Alzheimer’s Disease Neuroimaging Initiative (ADNI) datasets, demonstrating improved performance of the MRI (mod_HA)-based model by leveraging the knowledge transferred from the model based on tau-PET (mod_LA).
Purpose: Given the recent COVID-19 pandemic and its stress on global medical resources, we present the development of a machine-intelligence method for thoracic computed tomography (CT) to inform the management of patients on steroid treatment.
Approach: Transfer learning has demonstrated strong performance when applied to medical imaging, particularly when only limited data are available. A cascaded transfer learning approach extracted quantitative features from thoracic CT sections using a fine-tuned VGG19 network. The extracted slice features were axially pooled to provide a CT-scan-level representation of thoracic characteristics and a support vector machine was trained to distinguish between patients who required steroid administration and those who did not, with performance evaluated through receiver operating characteristic (ROC) curve analysis. Least-squares fitting was used to assess temporal trends using the transfer learning approach, providing a preliminary method for monitoring disease progression.
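A minimal sketch of the cascaded pipeline described in the Approach (slice-level VGG19 features, axial pooling to a scan-level vector, then an SVM scored by ROC AUC) is given below; dummy random "scans" stand in for the clinical thoracic CT data, and the pooling and SVM settings are illustrative assumptions.

```python
# Sketch of the cascaded transfer-learning pipeline: VGG19 slice features,
# axial (per-scan) average pooling, then an SVM classifier evaluated by ROC AUC.
# Random arrays stand in for the clinical thoracic CT data.
import numpy as np
import torch
from torchvision.models import vgg19
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

backbone = vgg19(weights=None).features.eval()   # weights="DEFAULT" would load ImageNet weights

def scan_features(slices: np.ndarray) -> np.ndarray:
    """slices: (n_slices, 224, 224) CT sections -> one pooled feature vector per scan."""
    x = torch.tensor(slices, dtype=torch.float32).unsqueeze(1).repeat(1, 3, 1, 1)
    with torch.no_grad():
        fmaps = backbone(x)                        # (n_slices, 512, 7, 7)
    per_slice = fmaps.mean(dim=(2, 3))             # spatial pooling -> (n_slices, 512)
    return per_slice.mean(dim=0).numpy()           # axial pooling -> (512,)

rng = np.random.default_rng(0)
scans = [rng.random((6, 224, 224)) for _ in range(8)]   # 8 toy scans of 6 slices each
y = np.array([0, 1] * 4)                                # steroid vs. no steroid (toy labels)
X = np.stack([scan_features(s) for s in scans])

clf = SVC(probability=True).fit(X[:6], y[:6])
print("toy AUC:", roc_auc_score(y[6:], clf.predict_proba(X[6:])[:, 1]))
```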
Results: In the task of identifying patients who should receive steroid treatments, this approach yielded an area under the ROC curve of 0.85 ± 0.10 and demonstrated significant separation between patients who received steroids and those who did not. Furthermore, temporal trend analysis of the prediction score matched the expected progression during hospitalization for both groups, with separation at early timepoints prior to convergence near the end of the hospitalization period.
Conclusions: The proposed cascade deep learning method has strong clinical potential for informing clinical decision-making and monitoring patient treatment.
Alzheimer’s Disease (AD) is the most common cause of dementia and currently has no cure. Treatments targeting early stages of AD, such as Mild Cognitive Impairment (MCI), may be most effective in decelerating AD and are thus attracting increasing attention. However, MCI has substantial heterogeneity in that it can be caused by various underlying conditions, not only AD. To detect MCI due to AD, the NIA-AA published updated consensus criteria in 2011, in which the use of multi-modality images was highlighted as one of the most promising methods. It is of great interest to develop a computer-aided diagnosis (CAD) system based on automatic, quantitative analysis of multi-modality images and machine learning algorithms to help physicians more adequately diagnose MCI due to AD. The challenge, however, is that multi-modality images are not universally available for many patients due to cost, access, safety, and lack of consent. We developed a novel Missing Modality Transfer Learning (MMTL) algorithm capable of utilizing whatever imaging modalities are available for an MCI patient to estimate the patient’s likelihood of MCI due to AD. Furthermore, we integrated MMTL with radiomics steps including image processing, feature extraction, and feature screening, and a post-processing step for uncertainty quantification (UQ), and developed a CAD system called “ADMultiImg” to assist clinical diagnosis of MCI due to AD using multi-modality images together with patient demographic and genetic information. Tested on ADNI data, our system can generate a diagnosis with high accuracy even for patients with only partially available image modalities (AUC=0.94), and therefore may have broad clinical utility.
Due to the promotion of lung cancer screening, more Stage I non-small-cell lung cancers (NSCLC) are now detected, and these usually have a favorable prognosis. However, a high percentage of patients have cancer recurrence after surgery, which reduces the overall survival rate. To achieve optimal efficacy in treating and managing Stage I NSCLC patients, it is important to develop more accurate and reliable biomarkers or tools to predict cancer prognosis. The purpose of this study is to investigate a new quantitative image analysis method to predict the risk of lung cancer recurrence in Stage I NSCLC patients after lung cancer surgery using conventional chest computed tomography (CT) images, and to compare the prediction results with a popular genetic biomarker, namely the protein expression of the excision repair cross-complementing 1 (ERCC1) gene. In this study, we developed and tested a new computer-aided detection (CAD) scheme to segment lung tumors and initially compute 35 tumor-related morphologic and texture features from CT images. By applying a machine learning based feature selection method, we identified a set of 8 effective and non-redundant image features. Using these features, we trained a naïve Bayesian network based classifier to predict the risk of cancer recurrence. When applied to a test dataset with 79 Stage I NSCLC cases, the computed areas under the ROC curves were 0.77±0.06 and 0.63±0.07 when using the quantitative image based classifier and ERCC1, respectively. The study results demonstrate the feasibility of improving the accuracy of predicting cancer prognosis or recurrence risk using a CAD-based quantitative image analysis method.
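A compact sketch of the prediction stage just described (selecting a small feature subset from the tumor features and training a naïve Bayes classifier scored by ROC AUC) is shown below. Synthetic feature vectors stand in for the CT-derived measurements, and SelectKBest/GaussianNB are stand-ins for the paper's specific feature-selection method and Bayesian network classifier.

```python
# Sketch of the recurrence-risk prediction stage: select 8 of 35 tumor features,
# train a (Gaussian) naive Bayes classifier, and score it with the area under the
# ROC curve. Synthetic features stand in for the CT-derived measurements.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(79, 35))          # 79 cases x 35 morphologic/texture features (toy data)
y = np.tile([0, 1], 40)[:79]           # recurrence vs. no recurrence (toy labels)

model = make_pipeline(SelectKBest(f_classif, k=8), GaussianNB())
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```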
Since the performance and clinical utility of current computer-aided detection (CAD) schemes for detecting and classifying soft-tissue lesions (e.g., breast masses and lung nodules) are not satisfactory, many researchers in the CAD field have called for new CAD research ideas and approaches. The purpose of this opinion paper is to share our vision and stimulate more discussion of how to overcome or compensate for the limitations of current lesion-detection based CAD schemes in the CAD research community. Based on our observation that analyzing global image information plays an important role in radiologists’ decision making, we hypothesized that targeted quantitative image features computed from global images could also provide high discriminatory power that is supplementary to lesion-based information. To test our hypothesis, we recently performed a number of independent studies. Based on our published preliminary study results, we demonstrated that global mammographic image features and background parenchymal enhancement of breast MR images carry useful information to (1) predict near-term breast cancer risk based on negative screening mammograms, (2) distinguish between true- and false-positive recalls in mammography screening examinations, and (3) classify between malignant and benign breast MR examinations. The global, case-based CAD scheme only reports a risk level for each case without cueing a large number of false-positive lesions. It can also be applied to guide lesion-based CAD cueing to reduce false-positive cueing while enhancing clinically relevant true-positive cueing. However, before such a new CAD approach is clinically acceptable, more work is needed to optimize not only the scheme performance but also how it is integrated with lesion-based CAD schemes in clinical practice.
A novel three-stage Semi-Supervised Learning (SSL) approach is proposed for improving the performance of computerized breast cancer analysis with undiagnosed data. The three stages are: (1) instance selection, which is rarely used in SSL or computerized cancer analysis systems; (2) feature selection; and (3) a newly designed ‘Divide Co-training’ data labeling method. 379 suspicious early breast cancer area samples from 121 mammograms were used in our research. Our proposed ‘Divide Co-training’ method generates two classifiers by splitting the original diagnosed dataset (labeled data) and labels the undiagnosed data (unlabeled data) when the two classifiers reach agreement. The highest AUC (Area Under Curve, also called Az value) using labeled data only was 0.832, and it increased to 0.889 when undiagnosed data were included. The results indicate that the instance selection module can eliminate atypical or noisy data and enhance the subsequent semi-supervised data labeling performance. Analysis of different data sizes shows that the AUC and accuracy increase with the amount of either diagnosed or undiagnosed data, reaching the largest improvement (ΔAUC = 0.078, Δaccuracy = 7.6%) with 40 labeled and 300 unlabeled samples.
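The ‘Divide Co-training’ labeling step can be illustrated with a small sketch: the labeled set is split into two halves, two classifiers are trained separately, and an unlabeled sample receives a pseudo-label only when both classifiers agree. The classifier choices below are illustrative, not the paper's exact configuration, and the data are synthetic.

```python
# Sketch of the "Divide Co-training" idea: split the diagnosed (labeled) data into
# two halves, train one classifier on each, and pseudo-label undiagnosed (unlabeled)
# samples only when both classifiers agree. Classifier choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X_lab = rng.normal(size=(40, 10));  y_lab = np.tile([0, 1], 20)
X_unlab = rng.normal(size=(300, 10))

half = len(X_lab) // 2
clf_a = LogisticRegression().fit(X_lab[:half], y_lab[:half])
clf_b = RandomForestClassifier(random_state=0).fit(X_lab[half:], y_lab[half:])

pred_a, pred_b = clf_a.predict(X_unlab), clf_b.predict(X_unlab)
agree = pred_a == pred_b                      # keep only samples the two classifiers agree on

# Augment the labeled set with the agreed pseudo-labels and retrain a final model.
X_aug = np.vstack([X_lab, X_unlab[agree]])
y_aug = np.concatenate([y_lab, pred_a[agree]])
final_model = LogisticRegression().fit(X_aug, y_aug)
print(f"pseudo-labeled {agree.sum()} of {len(X_unlab)} undiagnosed samples")
```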
Stage I non-small-cell lung cancers (NSCLC) usually have a favorable prognosis. However, a high percentage of NSCLC patients have cancer relapse after surgery. Accurately predicting cancer prognosis is important to optimally treat and manage patients to minimize the risk of relapse. Studies have shown that the excision repair cross-complementing 1 (ERCC1) gene is a potentially useful genetic biomarker for predicting the prognosis of NSCLC patients. Meanwhile, studies have also found that chronic obstructive pulmonary disease (COPD) is highly associated with lung cancer prognosis. In this study, we investigated and evaluated the correlations between COPD image features and ERCC1 gene expression. A database involving 106 NSCLC patients was used; each patient had a thoracic CT examination and an ERCC1 genetic test. We applied a computer-aided detection scheme to segment and quantify COPD image features. A logistic regression method was applied to analyze the correlation between the computed COPD image features and ERCC1 protein expression, and a multilayer perceptron network (MPN) was developed to test the performance of using COPD-related image features to predict ERCC1 protein expression. A nine-feature logistic regression analysis showed that the average COPD feature values in the low and high ERCC1 protein-expression groups are significantly different (p < 0.01). Using a five-fold cross-validation method, the MPN yielded an area under the ROC curve of 0.669±0.053 in classifying between the low and high ERCC1 expression cases. The study indicates that CT phenotype features are associated with the genetic test results, which may provide supplementary information to help improve accuracy in assessing the prognosis of NSCLC patients.
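A minimal sketch of the analysis described above (logistic regression over the COPD image features plus a small MLP evaluated with five-fold cross-validated ROC AUC) follows; synthetic features stand in for the 106-patient CT/genetic dataset and the network size is an illustrative assumption.

```python
# Sketch of the analysis: logistic regression over COPD image features to examine the
# association with ERCC1 expression, plus an MLP scored by 5-fold cross-validated AUC.
# Synthetic features stand in for the 106-patient CT/genetic dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(106, 9))               # nine COPD-related image features (toy data)
y = np.tile([0, 1], 53)                     # low vs. high ERCC1 protein expression (toy labels)

logit = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
mlp = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
auc = cross_val_score(mlp, X, y, cv=5, scoring="roc_auc")
print(f"5-fold AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```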
Routine visual slide screening for identification of tuberculosis (TB) bacilli in stained sputum slides under a microscope is a tedious, labor-intensive task and can miss up to 50% of TB. Based on the Shannon cofactor expansion of Boolean functions for classification, a stepwise classification (SWC) algorithm is developed to remove different types of false positives, one type at a time, and to increase the detection of TB bacilli at different concentrations. Both bacilli and non-bacilli objects are first analyzed and classified into several categories, including scanty positive, high-concentration positive, and several non-bacilli categories: small bright objects, beaded objects, dim elongated objects, etc. Morphological and contrast features are extracted based on a priori clinical knowledge. The SWC is composed of several individual classifiers. The classifier used to increase bacilli counts utilizes an adaptive algorithm based on a microbiologist's statistical heuristic decision process. The classifiers used to reduce false positives are developed through minimization of a binary decision tree that classifies different types of true and false positives based on feature vectors. Finally, the detection algorithm was tested on 102 independent confirmed negative and 74 positive cases. A multi-class task analysis shows high accordance rates for negative, scanty, and high-concentration cases of 88.24%, 56.00%, and 97.96%, respectively. A binary-class task analysis using a receiver operating characteristic method with the area under the curve (Az) is also used to analyze the performance of the detection algorithm, showing superior detection performance on the high-concentration cases (Az=0.913) and on cases mixing high-concentration and scanty cases (Az=0.878).
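The stepwise false-positive removal can be pictured as a cascade of binary classifiers, each trained to reject one category of non-bacilli objects. The sketch below uses generic decision trees on synthetic morphology/contrast features, not the paper's Shannon-cofactor-derived Boolean classifiers.

```python
# Sketch of a stepwise classification (SWC) cascade: candidate objects pass through a
# sequence of binary classifiers, each removing one category of false positives
# (small bright objects, beaded objects, dim elongated objects, ...). Decision trees on
# synthetic features stand in for the paper's Boolean-function classifiers.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
FP_CATEGORIES = ["small_bright", "beaded", "dim_elongated"]

# One rejector per false-positive category: label 1 = keep (bacillus-like), 0 = reject.
rejectors = []
for _ in FP_CATEGORIES:
    X_train = rng.normal(size=(200, 6))           # morphology/contrast features (toy data)
    y_train = np.tile([0, 1], 100)                # this FP category vs. everything else
    rejectors.append(DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train))

def count_bacilli(candidates: np.ndarray) -> int:
    """Pass candidate objects through the cascade; survivors are counted as bacilli."""
    kept = candidates
    for name, clf in zip(FP_CATEGORIES, rejectors):
        kept = kept[clf.predict(kept) == 1]       # drop objects flagged as this FP type
        print(f"after removing {name}: {len(kept)} candidates remain")
    return len(kept)

count_bacilli(rng.normal(size=(50, 6)))
```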
Using data from a clinical trial of a commercial CAD system for lung cancer detection, we separately analyzed the location, if any, selected on each film by 15 radiologists as they interpreted chest radiographs, 160 of which did not contain cancers. On the cancer-free cases, the radiologists showed a statistically significant difference in decisions while using the CAD (p-value 0.002). Average specificity was 78% without computer assistance and 73% with computer assistance. In a clinical trial with CAD for lung cancer detection, there are multiple machine false positives. On chest radiographs of older current or former smokers, there are many scars that can appear like cancer to the interpreting radiologist. We report on the radiologists' false positives and on the effect of machine false-positive detections on observer performance on cancer-free cases. The only difference between radiologists occurred when they changed their initial true-negative decision to a false positive (p-value less than 0.0001): the average confidence level increased, on a scale from 0.0 to 100.0, from 16.9 (high confidence of non-cancer) to 53.5 (moderate confidence that cancer was present). We also report on the consistency of misinterpretation by multiple radiologists when they interpret cancer-free radiographs of smokers in the absence of CAD prompts. When multiple radiologists selected the same false-positive location, there was usually a definite abnormality that triggered this response. The CAD identifies areas that are of sufficient concern for cancer that radiologists will switch from a correct decision of no cancer to mark a false positive on a previously overlooked but suspicious-appearing cancer-free area, one that has often been marked by another radiologist without the use of the CAD prompt. This work has implications for what should be accepted as ground truth in ROC studies: one might ask what a false-positive response means when the finding clinically looks like cancer but simply is not cancer, based on long-term follow-up or histology.
KEYWORDS: Computed tomography, Lung, Image registration, 3D image processing, Image segmentation, Lung cancer, Cancer, 3D vision, Chest, Signal to noise ratio
Several 3-D tools were developed to assist radiologists in the examination of thoracic CT images. The image functions include segmentation of suspected regions, characterization of nodules, localized 3-D view of nodules, 3-D transparent view, 3-D image matching, and 3-D volume registration. The last two functions are particularly useful for temporal CT examinations in which the change of suspected regions must be evaluated. The majority of the 3-D functions can be combined to form a clinical workstation. As far as temporal image matching is concerned, the volume registration method can be more accurate than slice-matching methods. If an image subtraction function is used, fewer artifacts would be associated with the volume-registered CT pair than with the slice-matched CT pair.
This paper evaluates the effect of Computer-Aided Detection prompts on the confidence and detection of cancer on chest radiographs. Expected findings included an increase in confidence rating and a decrease in variance in confidence when radiologists interacted with a computer prompt that confirmed their initial decision or induced them to switch from an incorrect to a correct decision. Their confidence rating decreased and the variance of confidence rating increased when the computer failed to confirm a correct or incorrect decision. A population of cases was identified that changed among reading modalities. This unstable group of cases differed between the Independent and Sequential without CAD modalities in cancer detection by radiologists and cancer detection by machine. CAD prompts induced the radiologists to make two types of changes in cases: changes on the sequential modality with CAD that restored an initial diagnosis made in the Independent read and new changes that were not present in the Independent or Sequential reads without CAD. This has implications for double reading of cases. The effects of intra-observer variability and inter-observer variability are suggested as potential causes for differences in statistical significance of the Independent and Sequential Design approaches to ROC studies.
Using data from a clinical trial of a commercial CAD system for lung cancer detection, we compared the time used for interpreting chest radiographs between the radiologists who showed improvement in detecting lung cancer with computer assistance and those who did not. While the 15 radiologists as a group showed improvement (the Az was 0.8288 in independent reading and 0.8654 in sequential reading with CAD; the improvement has a P-value of 0.0058), 9 radiologists showed improvement and 6 did not. The behavior of the radiologists differed between the cases that contained cancer and those that were cancer-free. For the cases that contained a cancer, there was no statistically significant difference in time between the two groups (P-value 0.26). For the cancer-free cases, we found a statistically significant greater interpretation time for the radiologists whose cancer-detection performance was better with computer assistance compared to those without improvement (P-value 0.02). This work shows that radiologists who increased their detection of lung cancer using CAD, compared to those who showed no improvement, significantly increased their reading time when they determined that true-negative cases were indeed true negatives, but did not increase reading time for true-positive decisions on cancer cases.
We have developed various segmentation and analysis methods for the quantification of lung nodules in thoracic CT. Our methods include the enhancement of lung structures followed by a series of segmentation steps to extract the nodule and to form a 3D configuration of the area of interest. The vascular index, aspect ratio, circularity, irregularity, extent, compactness, and convexity were also computed as shape features for quantifying the nodule boundary. The density distribution of the nodule was modeled based on its internal homogeneity and/or heterogeneity. We also used several density-related features, including entropy and difference entropy, as well as other first- and second-order moments. We collected 48 cases of lung nodules scanned by thin-slice diagnostic CT; of these cases, 24 are benign and 24 are malignant. A jackknife experiment was performed using a standard back-propagation neural network as the classifier. The LABROC result showed that the Az of this preliminary study is 0.89.
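A small sketch of the feature side of this pipeline is given below: a few of the listed shape descriptors are computed from a binary nodule mask with scikit-image and fed to a neural-network classifier. The nodule masks and benign/malignant labels are synthetic placeholders, and the jackknife/LABROC analysis is not reproduced.

```python
# Sketch of shape-feature computation (circularity, aspect ratio, extent, solidity,
# eccentricity) from a binary nodule mask, followed by a small neural-network classifier.
# The nodule masks and benign/malignant labels are synthetic placeholders.
import numpy as np
from skimage.measure import label, regionprops
from sklearn.neural_network import MLPClassifier

def shape_features(mask: np.ndarray) -> list:
    region = max(regionprops(label(mask)), key=lambda r: r.area)
    circularity = 4.0 * np.pi * region.area / max(region.perimeter, 1e-6) ** 2
    aspect_ratio = region.major_axis_length / max(region.minor_axis_length, 1e-6)
    return [circularity, aspect_ratio, region.extent, region.solidity, region.eccentricity]

masks, labels = [], []
for i in range(48):                              # 48 toy "nodules", half benign, half malignant
    m = np.zeros((64, 64), dtype=np.uint8)
    r = 8 + (i % 2) * 6                          # the "malignant" toys are simply larger here
    yy, xx = np.ogrid[:64, :64]
    m[(yy - 32) ** 2 + (xx - 32) ** 2 < r ** 2] = 1
    masks.append(m); labels.append(i % 2)

X = np.array([shape_features(m) for m in masks])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, labels)
```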
This paper describes the effect of a computer-aided detection (CAD) system's false positive marks on observer performance when interpreting films containing lung cancer. We compared the location/no location chosen initially by the radiologists and the stability or change in location that followed the provision of the CAD information. We found a difference in radiologists' behavior that depended on whether the radiologists' initial interpretation was a true positive or a false positive detection. When the radiologist made an incorrect initial decision, that decision was less stable than when the initial decision was correct.
In this paper, we look at a different, potentially useful method of behavior analysis, one that may allow us to derive, from the ROC confidence ratings of individual radiologists, a behavioral operating point that closely reflects the point where the radiologist would have decided to act or take no action on a case. This behavioral operating point appears appropriate for the calculation of cost-benefit relationships and for studying how a radiologist shifts within ROC space when provided with Computer-Aided Diagnosis (CADx) information.
Our goal was to perform a pre-clinical test of the performance of a new pre-commercial system for detection of primary early-stage lung cancer on chest radiographs, developed by Deus Technologies, LLC. The RapidScreen™ RS 2000 System integrates state-of-the-art technical developments in this field.
A multi-resolution unsharp masking (USM) technique is developed for image feature enhancement in digital mammogram images. This technique includes four processing phases: (1) determination of the parameters of multi-resolution analysis (MRA) based on the properties of the images; (2) multi-resolution decomposition of the original images into sub-band images via wavelet transformation with perfect-reconstruction filters; (3) modification of the sub-band images with an adaptive unsharp masking technique; and (4) reconstruction of the image from the modified sub-band images via inverse wavelet transformation. The adaptive unsharp masking technique is applied to the sub-band images in order to modify pixel values based on the edge components at various frequency scales. The smoothing and gain factor parameters employed in the unsharp masking are determined according to the resolution, frequency, and energy content of the sub-band images. Experimental results show that this technique is able to enhance the contrast of regions of interest (microcalcification clusters) in mammogram images.
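A compact sketch of the multi-resolution enhancement idea using PyWavelets follows: decompose the image, amplify the detail (high-frequency) sub-bands with per-level gain factors, and reconstruct. The wavelet and gains are illustrative; the paper derives its parameters adaptively from the resolution, frequency, and energy content of each sub-band.

```python
# Sketch of multi-resolution unsharp masking with wavelets: decompose the image,
# boost the detail sub-bands with per-level gain factors, and reconstruct.
# The wavelet and gain values here are illustrative, not the adaptive parameters
# described in the paper.
import numpy as np
import pywt

def multiresolution_usm(image: np.ndarray, wavelet="db4", gains=(2.0, 1.5, 1.2)) -> np.ndarray:
    coeffs = pywt.wavedec2(image, wavelet, level=len(gains))
    enhanced = [coeffs[0]]                           # approximation band left unchanged
    for (cH, cV, cD), g in zip(coeffs[1:], reversed(gains)):
        enhanced.append((g * cH, g * cV, g * cD))    # amplify horizontal/vertical/diagonal details
    out = pywt.waverec2(enhanced, wavelet)
    return out[: image.shape[0], : image.shape[1]]   # crop possible padding from reconstruction

mammo = np.random.default_rng(6).random((256, 256))  # stand-in for a digital mammogram
enhanced = multiresolution_usm(mammo)
```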
A multi-stage system combining image processing and artificial neural network techniques is developed for the detection of microcalcifications in digital mammogram images. The system consists of (1) a preprocessing stage employing box-rim filtering and global thresholding to enhance object-to-background contrast; (2) a preliminary selection stage involving body-part identification, morphological erosion, connected component analysis, and suspect region segmentation to select potential microcalcification candidates; and (3) a neural network-based pattern classification stage including feature map extraction, a pattern recognition neural network, and a decision-making neural network for accurate determination of true- and false-positive microcalcification clusters. Microcalcification suspects are captured and stored in 32 by 32 image blocks after the first two processing stages. A set of radially sampled pixel values is used as the feature map to train the neural networks, in order to avoid lengthy training time as well as insufficient representation. The first (pattern recognition) network is trained to recognize true microcalcifications and four categories of false-positive regions, whereas the second (decision) network is developed to reduce false-positive detections and thus increase detection accuracy. Experimental results show that this system is able to identify true clusters with an accuracy of 93% at 2.9 false-positive microcalcifications per image.
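A rough sketch of the first two stages is given below: contrast enhancement with a box-rim style filter (inner-box mean minus surrounding-rim mean), global thresholding, and connected-component analysis to cut out 32 × 32 candidate blocks. The filter sizes and threshold are illustrative assumptions, and the two neural-network stages are omitted.

```python
# Sketch of the candidate-selection stages: a box-rim style contrast filter,
# global thresholding, and connected-component analysis to cut 32x32 candidate blocks
# around bright spots. Filter sizes and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter, label, center_of_mass

def box_rim_filter(img: np.ndarray, inner=3, outer=9) -> np.ndarray:
    inner_mean = uniform_filter(img, size=inner)
    outer_mean = uniform_filter(img, size=outer)
    # Estimate the rim (background) mean by removing the inner box from the outer box.
    rim_mean = (outer_mean * outer**2 - inner_mean * inner**2) / (outer**2 - inner**2)
    return inner_mean - rim_mean

def candidate_blocks(img: np.ndarray, thresh=0.15, size=32) -> list:
    response = box_rim_filter(img)
    labeled, n = label(response > thresh)
    blocks = []
    for cy, cx in center_of_mass(response, labeled, list(range(1, n + 1))):
        y = int(np.clip(int(cy) - size // 2, 0, img.shape[0] - size))
        x = int(np.clip(int(cx) - size // 2, 0, img.shape[1] - size))
        blocks.append(img[y:y + size, x:x + size])   # 32x32 candidate region
    return blocks

mammo = np.random.default_rng(7).random((512, 512))  # stand-in for a digitized mammogram
print(f"{len(candidate_blocks(mammo))} candidate blocks extracted")
```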
Wavelet-based image compression is receiving significant attention because of its potential for good image quality at low bit rates. In this paper, we describe and analyze a lossy wavelet compression scheme that uses direct extensions of the JPEG quantization and Huffman encoding strategies to provide high compression efficiency with reasonable complexity. The focus is on the compression of 12-bit medical images obtained from computed radiography and mammography, but the general methods and conclusions presented in this paper are applicable to a wide range of image types.
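A minimal sketch of the lossy wavelet compression loop described above (wavelet transform, scalar quantization of the coefficients, and reconstruction) follows; the JPEG-style quantization tables and Huffman entropy coding of the paper are reduced here to a single uniform step size for illustration.

```python
# Sketch of lossy wavelet compression: wavelet transform, uniform scalar quantization
# of the coefficients, and reconstruction. The JPEG-style quantization tables and
# Huffman entropy coding described in the paper are replaced by a single step size.
import numpy as np
import pywt

def wavelet_compress(image: np.ndarray, wavelet="bior4.4", level=4, step=8.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    q = np.round(arr / step)                        # quantized integers (what would be entropy coded)
    rec = pywt.waverec2(pywt.array_to_coeffs(q * step, slices, output_format="wavedec2"), wavelet)
    return q, rec[: image.shape[0], : image.shape[1]]

img = np.random.default_rng(8).random((256, 256)) * 4095   # toy 12-bit image
q, rec = wavelet_compress(img)
mse = np.mean((img - rec) ** 2)
print(f"nonzero coefficients: {np.count_nonzero(q)}, reconstruction MSE: {mse:.1f}")
```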
Image compression reduces the amount of space necessary to store digital images and allows quick transmission of images to other hospitals, departments, or clinics. However, the degradation of image quality due to compression may not be acceptable to radiologists or it may affect diagnostic results. A preliminary study with small-scale test procedures was conducted using several chest images with common lung diseases and compressed with JPEG and wavelet techniques at various ratios. Twelve board-certified radiologists were recruited to perform two types of experiments. In the first part of the experiment, presence of lung disease on six images was rated by radiologists. Images presented were either uncompressed or compressed at 32:1 or 48:1 compression ratios. In the second part of the experiment, radiologists were asked to make subjective ratings by comparing the image quality of the uncompressed version of an image with the compressed version of the same image, and then judging the acceptability of the compressed image for diagnosis. The second part examined a finer range of compression ratios (8:1, 16:1, 24:1, 32:1, 44:1, and 48:1). In all cases, radiologists were able to make an accurate diagnosis on the given images with little difficulty, but image degradation perceptibility increased as the compression ratio increased. At higher compression ratios, JPEG images were judged to be less acceptable than wavelet-based images, however, radiologists believed that all the images were still acceptable for diagnosis. Results of this study will be used for later comparison with large-scale studies.
Three compression algorithms were compared by using contrast-detail (CD) analysis. Two phantoms were designed to simulate computed tomography (CT) scans of the head. The first was based on CT scans of a plastic cylinder containing water. The second was formed by combining a CT scan of a head with a scan of the water phantom; the soft tissue of the brain was replaced by a subimage containing only water. The compression algorithms studied were the full-frame discrete cosine transform (FDCT) algorithm, the Joint Photographic Experts Group (JPEG) algorithm, and a wavelet algorithm. Both the wavelet and JPEG algorithms affected regions of the image near the boundary of the skull. The FDCT algorithm propagated false edges throughout the region interior to the skull. The wavelet algorithm affected the images less than the other compression algorithms. The presence of the skull especially affected observer performance on the FDCT-compressed images. All of the findings demonstrated a flattening of the CD curve for large lesions. The results of a compression study using lossy compression algorithms are dependent on the characteristics of the image and the nature of the diagnostic task. Because of the high-density bone of the skull, head CT images present a much more difficult compression problem than chest x-rays. We found no significant differences among the CD curves for the tested compression algorithms.
Key Words: Image compression, contrast-detail analysis.
Determination of an optimal window to improve the performance of registering nonlinearly distorted images using a cross-correlation technique is presented in this paper. A 2D cross-correlation technique is applied to two meteorological radar images (the search and reference images), which possess the characteristics of nonlinearity, geometric distortion, and ever-evolving patterns. Various sizes of concentric square windows of the reference image are used for computing the cross-correlation field. Parameters of the cross-correlation field, such as the peak value, the location of the peak, and the standard deviation, are determined. The location of the peak correlation, rather than its peak value, is chosen as the indicator that best describes the performance of registration, since the location remains unchanged for certain sizes of the window. This location represents the translational shift of the images, or the offset of the registration. These windows cover a major portion of the autocorrelation area of the reference image. The standard deviation of the cross-correlation field between the search and reference images reaches its maximum for these window sizes, which are considered optimal.
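The window-selection procedure can be sketched with normalized cross-correlation: for several concentric window sizes cut from the reference image, compute the correlation field against the search image and record the peak location and the field's standard deviation. skimage's match_template stands in for the paper's 2D cross-correlation on radar data, and the shifted toy images are illustrative.

```python
# Sketch of the window-size analysis: for several concentric square windows cut from
# the reference image, compute a normalized cross-correlation field against the search
# image and record the peak location (the registration offset) and the field's standard
# deviation. skimage's match_template stands in for the paper's 2D cross-correlation.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(9)
search = rng.random((256, 256))                            # "search" image
reference = np.roll(search, shift=(7, -4), axis=(0, 1))    # toy reference: shifted copy
cy, cx = reference.shape[0] // 2, reference.shape[1] // 2

for half in (16, 32, 48, 64):                              # concentric windows of growing size
    window = reference[cy - half:cy + half, cx - half:cx + half]
    field = match_template(search, window)                 # normalized cross-correlation field
    peak = np.unravel_index(np.argmax(field), field.shape)
    print(f"window {2 * half:3d}px  peak at {peak}  field std {field.std():.3f}")
```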