Image compression is increasingly employed in applications such as medical imaging, to reduce data storage requirements, and Internet video transmission, to effectively increase channel bandwidth. Similarly, military applications such as automated target recognition (ATR) often employ compression to achieve storage and communication efficiencies, particularly to enhance the effective bandwidth of communication channels whose throughput suffers, for example, from overhead due to error correction/detection or encryption. In the majority of cases, lossy compression is employed due to the resultant low bit rates (high compression ratios). However, lossy compression produces artifacts in decompressed imagery that can confound ATR processes applied to such imagery, thereby reducing the probability of detection (Pd) and possibly increasing the false-alarm rate or count (Rfa or Nfa). In this paper, the authors' previous research in performance measurement of compression transforms is extended to
include (a) benchmarking algorithms and software tools, (b) a suite of error exemplars that are designed to elicit compression
transform behavior in an operationally relevant context, and (c) a posteriori analysis of performance data. The following transforms are applied to a suite of 64 error exemplars: Visual Pattern Image Coding (VPIC [1]), Vector Quantization with a fast codebook search algorithm (VQ [2,3]), JPEG and a preliminary implementation of JPEG 2000 [4,5], and EBLAST [6-8]. Compression ratios range from 2:1 to 200:1, and various noise levels and types are added to the error exemplars to produce a database of 7,680 synthetic test images. Several global and local (e.g., featural) distortion measures are applied to the
decompressed test imagery to provide a basis for rate-distortion and rate-performance analysis as a function of noise and compression transform type.
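As an illustration of the kind of global distortion measure applied to decompressed test imagery, the sketch below computes mean squared error (MSE) and peak signal-to-noise ratio (PSNR) between a reference image and a degraded copy. The function names and toy data are illustrative assumptions, not the paper's actual measures or test imagery.

```python
import numpy as np

def mse(reference, test):
    """Mean squared error: a simple global distortion measure between
    a reference image and its decompressed counterpart."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    return float(np.mean((ref - tst) ** 2))

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher values indicate
    less distortion relative to the reference."""
    err = mse(reference, test)
    if err == 0.0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / err)

# Toy example: an 8-bit "image" and a mildly noise-degraded copy.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(original.astype(np.int32)
                + rng.integers(-5, 6, size=(64, 64)), 0, 255).astype(np.uint8)
print(mse(original, noisy), psnr(original, noisy))
```

Plotting such a measure against the achieved bit rate for each transform yields the rate-distortion curves described above; pairing it with detection statistics (Pd, Rfa) instead yields rate-performance curves.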