In a digital imaging system, the Image Signal Processing (ISP) pipeline may be called on to identify and hide defective
pixels in the image sensor. Often filters are designed and implemented to accomplish these tasks without considering the
cost in memory or the effect on actual images. We have created a simulation system which uses an inverse ISP model to
add defect pixels to raw sensor data. The simulation includes lens blur, inverse gamma, additive white noise, and
CFA mosaicking. Defect pixels are added to the simulated raw image, which is then processed by various defect pixel correction
algorithms. The end result is compared against the original simulated raw data to measure the effect of the added defects
and defect pixel correction. We have implemented a bounding min-max filter as our defect pixel correction algorithm.
The simulations show that the choice of kernel size and other parameters depends not only on memory constraints, but
also on the defect pixel rate. At high defect pixel rates, more aggressive correction algorithms are more
effective, but they also cause greater accidental degradation of non-defective pixels.
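Below is a minimal Python/NumPy sketch of a bounding min-max style defect-pixel filter applied to a Bayer raw frame. The kernel size, the reflect padding, and the two-pixel same-color spacing are assumptions for illustration; the paper's actual filter parameters are simulation variables and are not reproduced here.

```python
import numpy as np

def bounding_min_max_filter(raw, kernel=5):
    """Clamp each pixel of a Bayer raw frame to the [min, max] range of its
    same-color neighbors inside a kernel x kernel window.

    Illustrative sketch only: kernel size and padding mode stand in for the
    paper's simulation parameters.
    """
    assert kernel % 2 == 1 and kernel >= 5, "use an odd kernel of at least 5"
    r = kernel // 2
    padded = np.pad(raw, r, mode="reflect")
    out = raw.copy()
    height, width = raw.shape
    for y in range(height):
        for x in range(width):
            # Same-color neighbors in a Bayer mosaic sit two pixels apart.
            sub = padded[y:y + kernel, x:x + kernel][::2, ::2]
            mask = np.ones(sub.shape, dtype=bool)
            if r % 2 == 0:
                # The center pixel itself landed in the subsample; exclude it
                # so a defective value cannot widen its own bounds.
                mask[r // 2, r // 2] = False
            neighbors = sub[mask]
            out[y, x] = np.clip(raw[y, x], neighbors.min(), neighbors.max())
    return out
```

A corrected frame can then be compared against the clean simulated raw frame, for example with a mean squared error, to quantify both residual defects and any accidental degradation.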
Although its lens and image sensor fundamentally limit a digital still camera's imaging performance, image processing
can significantly improve the perceived quality of the output images. A well-designed processing pipeline achieves a
good balance between the available processing power and the image yield (the fraction of images that meet a minimum
quality criterion).
This paper describes the use of subjective and objective measurements to establish a methodology for evaluating the
image quality of processing pipelines. The test suite contains both images of analytical test targets for objective
measurements and images of scenes for subjective evaluation, covering the photospace of the intended application.
Objective image quality metrics correlating with perceived sharpness, noise, and color reproduction were used to
evaluate the analytical images. An image quality model estimated the loss in image quality for each metric, and the
individual metrics were combined to estimate the overall image quality. The model was trained with the subjective
image quality data.
The test images were processed through different pipelines, and the overall objective and subjective data were assessed
to identify those image quality metrics that exhibit significant correlation with the perception of image quality. This
methodology offers designers guidelines for effectively optimizing image quality.
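The abstract does not specify the combination rule, so the sketch below only illustrates one common way such a model can pool per-attribute quality losses (sharpness, noise, color reproduction) into an overall loss: a Minkowski-type sum whose exponent would be fitted to the subjective data. The function and parameter names are hypothetical.

```python
import numpy as np

def overall_quality_loss(attribute_losses, p=2.0):
    """Pool per-attribute quality losses (e.g. sharpness, noise, color)
    into one overall loss with a Minkowski-type sum.

    Hypothetical illustration: the abstract states only that individual
    metric losses are combined and that the model is trained on
    subjective data; the actual model is not given here.
    """
    losses = np.asarray(attribute_losses, dtype=float)
    return float(np.sum(losses ** p) ** (1.0 / p))
```

With the exponent fitted to the subjective ratings, the pooled loss can be subtracted from a reference quality score to estimate overall image quality.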
The increase in the popularity of digital cameras over the past few years has provided motivation to improve all elements of the digital photography signal chain. As a contribution towards this common goal, we present a new CFA recovery algorithm, which recovers full-color images from single-detector digital color cameras more accurately than previously published techniques. This CFA recovery algorithm uses a threshold-based variable number of gradients. In order to recover missing color information at each pixel, we measure the gradient in eight directions based on a 5 × 5 neighborhood surrounding that pixel. Each gradient value is defined as a linear combination of the absolute differences of the like-colored pixels in this neighborhood. We then consider the entire set of eight gradients to determine a threshold of acceptable gradients. For all of the gradients that pass the threshold test, we use color components from the corresponding areas of the 5 × 5 neighborhood to determine the missing color values. We test our CFA recovery algorithm against bilinear interpolation and a single-gradient method. Using a set of standard test images, we show that our CFA recovery algorithm reduces the MSE by over 50 percent compared to conventional color recovery algorithms. In addition, the resolution test we developed also shows that the new CFA recovery algorithm increases the resolution by over 15 percent. The subjective quality of test images recovered using the new algorithm also shows noticeable improvement.
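As a rough illustration of the threshold-and-average step described above, the Python sketch below assumes the eight directional gradients and the per-direction color sums have already been computed from the 5 × 5 neighborhood; the constants k1 and k2 and the helper names are hypothetical, not the paper's tuned values.

```python
import numpy as np

def passing_directions(gradients, k1=1.5, k2=0.5):
    """Keep the directions whose gradient falls below the threshold
    T = k1 * min(g) + k2 * (max(g) - min(g)), computed from the full
    set of eight gradients.  k1 and k2 are illustrative constants.
    """
    g = np.asarray(gradients, dtype=float)
    threshold = k1 * g.min() + k2 * (g.max() - g.min())
    return np.flatnonzero(g <= threshold)

def recover_missing_value(center_value, direction_sums, gradients):
    """Estimate a missing color at a pixel as the center value plus the
    average color difference of the accepted directions.

    direction_sums[d] is assumed to hold (sum of the missing color,
    sum of the center pixel's color) over the region of the 5 x 5
    neighborhood associated with direction d; the exact per-direction
    sums are implementation details not reproduced here.
    """
    keep = passing_directions(gradients)
    diffs = [direction_sums[d][0] - direction_sums[d][1] for d in keep]
    return center_value + float(np.mean(diffs))
```

Because the smallest gradient always passes the test, at least one direction contributes to every estimate.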
KEYWORDS: Video, Video compression, Neodymium, Data storage, Visualization, 3D video compression, Televisions, Data storage servers, Switches, Failure analysis
In this paper, we consider the placement of scalable video data on single and multiple disks for storage and real-time retrieval. For the single-disk case, we extend the principle of constant frame grouping from CBR to VBR scalable video data. When the number of admitted users exceeds the server capacity, the rate of data sent to each user is reduced to relieve the disk system overload, offering graceful degradation in comparison with nonscalable data. We examine the quality of video reconstructions obtained from a real disk video server and find the scalable video more visually appealing. For the multiple-disk scenario, we prove that periodic interleaving results in lower system delay than striping in a video server using round-robin scheduling. We verify the results through detailed simulation of a four-disk array.
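The sketch below illustrates the graceful-degradation idea in the single-disk case: when the admitted users would exceed the disk's delivery capacity, the scalable stream lets the server scale every user's rate down instead of rejecting or stalling streams. All names and units are illustrative.

```python
def per_user_rate(disk_bandwidth, requested_rate, num_users):
    """Return the rate actually delivered to each user.

    With scalable (layered) video the server can drop enhancement data
    uniformly, so overload degrades quality gracefully instead of causing
    missed deadlines.  Illustrative sketch; the paper's constant frame
    grouping placement and disk model are not reproduced.
    """
    if requested_rate * num_users <= disk_bandwidth:
        return requested_rate            # no overload: full quality
    return disk_bandwidth / num_users    # overload: scale everyone down
```

For example, per_user_rate(80e6, 6e6, 16) returns 5e6: each of the 16 users receives about 5 Mb/s instead of the requested 6 Mb/s.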
KEYWORDS: Video, Control systems, Computer simulations, Rubidium, Data modeling, Device simulation, Data storage, Statistical analysis, Information operations, Convolution
In this paper we compare techniques for storage and real-time retrieval of Variable Bit Rate (VBR) video data for multiple simultaneous users. The motivation for considering VBR is that compressed video is inherently time-varying, so with the same average bit rate, higher quality can be achieved with VBR than with Constant Bit Rate (CBR) coding. We propose and compare the following three classes of VBR data placement and retrieval techniques: Constant Time Length (CTL) places and retrieves data in blocks corresponding to equal playback durations, Constant Data Length (CDL) places and retrieves constant-sized data blocks, and a hybrid solution uses CDL placement but retrieves a variable number of blocks in each service round. We have found that CTL data placement has much lower buffer requirements than CDL but suffers from fragmentation during video editing. We show hybrid placement to have both advantages of high efficiency and low fragmentation. We also address the issue of admission control policies by comparing statistical and deterministic techniques. Statistical admission control uses statistics of the stored data to ensure that the probability of overload does not exceed a prespecified threshold. Deterministic control uses the actual stored video bit traces to regulate the number of admitted users. We consider two types of deterministic admission control: data-limit and ideal deterministic. Data-limit admission control admits users based on precomputing the total amount of data requested by all users in future service rounds. In contrast, ideal deterministic admission control not only precomputes the total amount of data requested, but also assumes we have control of data placement at the disk sector level in order to precompute the future seek and rotation times. We provide a cost/benefit analysis of the above placement/retrieval/admission control techniques and conclude that CTL and hybrid placement/retrieval techniques can reduce the total system cost by up to a factor of 3 in comparison with the strategy of padding the VBR video trace to achieve a constant data rate. For read-only systems, CTL has the lowest cost per user. For writable systems, the hybrid technique achieves a good compromise between low cost and low fragmentation. We find that all forms of deterministic admission control can outperform statistical admission control, but the greatest gain comes from using ideal deterministic admission control. We note, however, that this admission control may be difficult to implement on standard disk controllers. Finally, we have implemented a full disk-model simulator that runs 1000 times faster than real time. Results using the simulator are very close to those measured on the real disk, making the simulator useful for future experiments.
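A minimal sketch of the data-limit deterministic admission test described above: admit a new stream only if, in every future service round, the total data requested by all admitted streams fits within what the disk can transfer per round. Seek and rotation times, which the ideal variant also precomputes, are ignored, and all names are assumptions.

```python
def admit_user(admitted_traces, new_trace, capacity_per_round):
    """Data-limit deterministic admission control sketch.

    Each trace is the per-round data demand (in bytes) taken from the
    stored VBR bit trace.  Admit the new stream only if the aggregate
    demand never exceeds the per-round disk capacity.  Ideal deterministic
    control would additionally precompute seek and rotation overheads.
    """
    traces = admitted_traces + [new_trace]
    horizon = max(len(t) for t in traces)
    for r in range(horizon):
        total = sum(t[r] for t in traces if r < len(t))
        if total > capacity_per_round:
            return False
    return True
```

A statistical policy would instead admit based on the probability that the aggregate demand exceeds capacity, accepting a small prespecified overload risk.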
KEYWORDS: Video, Neodymium, Video compression, Data modeling, Scalable video coding, Data storage, Performance modeling, Switching, Televisions, Scattering
We present three strategies for placement of video data on parallel disk arrays. Using a low-level disk model and video data from a scalable subband coding technique, we derive constraints with which to compare the three strategies. One strategy, constant frame grouping, is shown to be superior. Two methods for interleaving multiple videos under the constant frame grouping strategy are presented: nonperiodic and periodic. Periodic interleaving is shown to have the advantages of lower access time and support for limited scan and pause functions. The constant frame grouping strategy is tested on an actual array of eight disks and shown to have performance close to the theoretical prediction. The scalable nature of the compressed data is used to relieve disk system overload when the request rate is too high.
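The sketch below illustrates periodic interleaving of frame groups across a disk array: each video's consecutive frame groups advance round-robin through the disks, and videos start on staggered disks so each service round loads the array evenly. Block sizing and the low-level disk model are not reproduced; the names are illustrative.

```python
def periodic_interleave(num_disks, groups_per_video):
    """Map (video, frame-group) pairs onto disks in a periodic pattern.

    groups_per_video[v] is the number of constant-size frame groups in
    video v.  Consecutive groups of a video go to consecutive disks, and
    each video starts one disk further along, so within a period every
    disk serves every video once.  Layout sketch only.
    """
    layout = {d: [] for d in range(num_disks)}
    for v, n_groups in enumerate(groups_per_video):
        start = v % num_disks                      # stagger starting disks
        for g in range(n_groups):
            layout[(start + g) % num_disks].append((v, g))
    return layout

# Example: three videos on an eight-disk array (group counts are arbitrary).
layout = periodic_interleave(8, [120, 96, 96])
```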
This paper surveys a variety of knowledge representation schemes and robot design architectures, including Hierarchical, Feedback-Based, Object-Oriented, Process-Oriented, Rule-Based, and Subsumption. Robot hardware and software issues are discussed using several examples; in particular, the 'Go-Fetch' robot is analyzed.