This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 6817, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
High-quality image sensors must deliver high sensitivity, low noise, high full-well capacity, and good linear response. The CMOS image sensor with a lateral overflow integration capacitor (LOFIC) meets these requirements thanks to its wide-dynamic-range capability within a single exposure. Recently, we improved the SNR of the LOFIC CMOS image sensor and reduced the input-referred noise to 2 e- or below without any column amplifier circuits, by increasing the photoelectric conversion gain at the in-pixel floating diffusion (FD) while keeping low dark current, good uniformity, and high full-well capacity. The relationship among the conversion gain, the SNR, and the full-well capacity determines the optimum design of the FD capacitance and the LOFIC for a high-quality image sensor. In this paper, the optimum design method of the LOFIC CMOS image sensor for high sensitivity, low noise, and high full-well capacity is discussed through theoretical analysis and experiments using a fabricated LOFIC CMOS image sensor.
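The trade-off described here follows from the standard relation between conversion gain and floating-diffusion capacitance, CG = q/C_FD. A minimal sketch with illustrative numbers (the FD capacitance and readout-noise values below are assumptions, not figures reported in the paper):

```python
Q_E = 1.602e-19  # elementary charge [C]

def conversion_gain_uV_per_e(c_fd_farads):
    """Conversion gain at the floating diffusion: CG = q / C_FD."""
    return Q_E / c_fd_farads * 1e6  # volts -> microvolts

def input_referred_noise_e(output_noise_uV, cg_uV_per_e):
    """Output voltage noise divided by conversion gain gives electrons."""
    return output_noise_uV / cg_uV_per_e

cg = conversion_gain_uV_per_e(1.0e-15)  # assumed 1 fF FD -> ~160 uV/e-
print(f"conversion gain: {cg:.0f} uV/e-")
print(f"input-referred noise: {input_referred_noise_e(300.0, cg):.1f} e-")
```

With an assumed 300 uV RMS of output noise, a 1 fF floating diffusion already brings the input-referred noise under 2 e-, consistent with the figure quoted above.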
To evaluate the electrical characteristics of the 1T charge-modulation pixel, we propose two design configurations: a 2.2 μm-pitch rectangular-gate pixel and a 1.4 μm-pitch ring-gate pixel. The former allows the transistor size to be minimized but requires surrounding STI (shallow trench isolation) to reduce electrical crosstalk. The latter is advantageous in terms of pixel size and fill factor, mainly thanks to the suppression of STI. The two design configurations were integrated in separate test chips. Our measured results confirm the scaling law: reducing pixel size improves conversion gain but degrades full-well capacity (FWC). They also show that the dark current of the 1.4 μm-pitch ring-gate pixel is much lower than that of the 2.2 μm-pitch rectangular-gate counterpart. This low dark current may be explained by: i) suppression of the STI-induced surface leakage current component, ii) a smooth-shaped layout that minimizes the band-to-band tunneling effect, and iii) a smaller pixel with smaller depletion areas and, accordingly, lower thermally generated dark current components. The 1.4 μm-pitch ring-gate pixel also has lower noise, especially much lower dark FPN, which seems to confirm that dark FPN has a large contribution from dark current generation. The dynamic range of the 1.4 μm-pitch pixel is larger, meaning that the signal-to-noise improvement outweighs the FWC degradation. However, the sensitivity, like the FWC, is degraded in the same proportion. Further improvements are possible, especially through process optimization.
Over the last decade, the pixels that make up CMOS image sensors have steadily decreased in size. This scaling has two effects: first, the amount of light incident on each pixel decreases, reducing the photodiode signal and making optical efficiency, i.e., the collection of each photon, more important. Second, spatial optical crosstalk increases because diffraction comes into play when pixel size approaches the wavelength of visible light. To counter these two effects, we have investigated and compared three methods for guiding incident light from the microlens down to the photodiode. Two of these techniques rely on total internal reflection (TIR) at the boundary between dielectric media of different refractive indices. The first involves filling the central pixel area with a high-index dielectric material, while in the second approach, material between the pixels is removed and air is used as a low-index cladding. The third method uses reflection at a metal-dielectric interface to confine the light. Simulations were performed using commercial finite-difference time-domain (FDTD) software on a realistic 1.75 μm pixel model for on-axis as well as angled incidence. We evaluate the optical efficiency and spatial crosstalk performance of these methods compared to a reference pixel and examine the influence of several design parameters.
In this paper, we present a versatile characterization method developed at STMicroelectronics for off-axis pixels (i.e., across the image plane) of CMOS image sensors. The solution does not require optics, making it suitable for early design phases as well as for optimizations and investigations. It is based on a specific design of the color filter and microlens masks, which consists of several blocks. Inside each block, the filters and the microlenses are shifted by a given amount relative to the pixel. Each block corresponds to a given chief ray and thus defines a point in chief-ray-angle space. The performance at these angular points can then be measured by rotating the sensor under a conventional uniform-illumination setup with controlled f-number, and these data can be mapped onto the image plane using the chief ray angle versus focal-plane coordinate function. Finally, we present characterizations and optimizations that exploit the fact that the shift is defined arbitrarily at the circuit layout step: the sensor can be tested at higher chief ray angles than those present in the product, or the microlens shift can be optimized against the chief ray angle for a given pixel architecture.
Bilateral filtering is an effective technique for reducing image noise while preserving edge content. The filter kernel is constructed from two criteria on neighboring pixels, namely photometric resemblance and geometric proximity. In the classic definition of the filter, the Euclidean distance is used as the metric for the photometric portion of the kernel. In this paper we describe a simplified method for calculating the Euclidean distance metric, which reduces the computational complexity of the filter. Furthermore, we generalize the idea of bilateral filtering by linking the filter processing parameters to the noise profile of a CMOS image sensor and present a simple method for tuning the performance of the filter.
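For context, here is a minimal NumPy sketch of the classic bilateral filter the paper starts from, with the kernel formed as the product of a geometric (spatial Gaussian) term and a photometric term based on the Euclidean intensity distance. The paper's simplified distance computation and sensor-noise-linked tuning are not reproduced here:

```python
import numpy as np

def bilateral_filter(img, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Classic bilateral filter for a single-channel float image in [0, 1].

    The kernel multiplies a geometric-proximity term (spatial Gaussian) with
    a photometric-resemblance term based on Euclidean intensity distance.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # photometric term: distance of each neighbor to the center pixel
            photometric = np.exp(-(patch - img[y, x])**2 / (2.0 * sigma_r**2))
            weights = spatial * photometric
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out
```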
In a digital camera, several factors make the additive noise signal-dependent. Many denoising methods have been proposed, but many of them do not work well for actual signal-dependent noise. To remove the signal-dependent noise of a digital color camera, this paper presents a denoising approach via nonlinear image decomposition. As a pre-process, we employ the BV-L1 nonlinear image-decomposition variational model. This model decomposes an input image into three components: a structural component corresponding to a cartoon approximation that collects geometrical image features, a texture component corresponding to fine image textures, and a residual component. Each separated component is then denoised with a method suited to it. For an image taken with a digital color camera at high ISO sensitivity, the BV-L1 model removes the signal-dependent noise to a large extent from the separated structural component, in which geometrical image features are well preserved, but the structural component sometimes suffers color-smear artifacts. To remove those artifacts, we apply sparse 3D transform-domain collaborative filtering to the separated structural component. The texture and residual components, on the other hand, remain contaminated with noise, and its effects are selectively removed from them with our proposed color-shrinkage denoising schemes, which exploit inter-channel color cross-correlations. Our method achieves efficient denoising and selectively removes the signal-dependent noise of a digital color camera.
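A rough sketch of the decomposition idea, using scikit-image's TV denoiser as a stand-in for the BV-L1 variational model; the paper's three-way split, collaborative filtering, and color-shrinkage schemes are only approximated here by a two-way split and grayscale soft shrinkage:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def decompose(img, weight=0.1):
    """Two-part stand-in for the paper's three-component decomposition:
    a TV-regularized 'structural' (cartoon) part plus a remainder that
    lumps together the texture and residual components."""
    structure = denoise_tv_chambolle(img, weight=weight)
    return structure, img - structure

def soft_shrink(x, t):
    """Soft thresholding, a crude stand-in for the color-shrinkage schemes
    the paper applies to the texture/residual components."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
noisy = clean + rng.normal(0.0, 0.05, clean.shape)
structure, remainder = decompose(noisy, weight=0.15)
denoised = structure + soft_shrink(remainder, 0.03)
```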
In modern digital still cameras, noise reduction is an increasingly important signal-processing issue, as customers demand higher pixel counts and increased light sensitivity. In recent years, with ten or more megapixels in a compact camera, images increasingly lack fine detail and appear degraded. The standard test methods for spatial resolution fail to describe this phenomenon because, due to extensive adaptive image enhancement, the camera cannot be treated as a linear, position-invariant system. In this paper we compare established resolution test methods and present new approaches to describe the influence of noise reduction on images.
A new chart is introduced that consists of nine Siemens stars, a multi-modulation set of slanted edges, and Gaussian white noise as a camera target. Using this set, the standard methods known as SFR-Siemens and SFR-Edge are calculated together with additional information such as edge width and edge noise. Based on the Gaussian white noise, several parameters are presented as an alternative way to describe spatial resolution on low-contrast content.
In this paper, we present a new noise estimation and reduction scheme to restore images degraded by image sensor noise. Since the noise characteristics vary with the camera response function (CRF) and the sensitivity setting of the image sensor, we build a noise profile from test charts for accurate noise estimation. From the noise profile, we develop a simple and fast noise estimation scheme suited to digital cameras. Our noise removal method uses the result of this estimation and applies several adaptive nonlinear filters to give the best image quality against high-ISO noise. Experimental results show that the proposed method performs well on images corrupted by both synthetic and real sensor noise.
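A minimal sketch of what such a chart-based noise profile might look like: one (mean, sigma) pair per uniform gray patch, interpolated at estimation time. The patch-based construction and the one-profile-per-ISO organization are assumptions about the general approach, not the paper's exact procedure:

```python
import numpy as np

def build_noise_profile(flat_patches):
    """Build one profile per ISO setting: for each uniform test-chart patch
    (a 2-D array), record (mean signal, measured standard deviation)."""
    profile = sorted((p.mean(), p.std()) for p in flat_patches)
    return np.array(profile)  # shape (n_patches, 2)

def estimate_sigma(profile, intensity):
    """Interpolate the profile to predict the noise level at any intensity."""
    return np.interp(intensity, profile[:, 0], profile[:, 1])
```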
Previously we presented a TV-based super-resolution sharpening-demosaicing method. That method can restore frequency components higher than the Nyquist frequency and interpolate color signals effectively while preserving sharp color edges, without producing ringing artifacts along them. However, since it applies the TV regularization separately to each primary color channel, it sometimes produces false-color and/or zipper artifacts along sharp color edges as side effects. To remedy this drawback, in addition to the TV regularization of each primary color signal, we introduce TV regularization of color-difference signals such as G-R, and of color-sum signals such as G+R, into the TV-based super-resolution sharpening-demosaicing method. Near sharp color edges, correct interpolation yields the smallest TV norms of the color-difference signals or the smallest TV norms of the color-sum signals. Unlike our previous method, the new method jointly interpolates the three primary color channels. We compare the demosaicing performance of our new method with state-of-the-art demosaicing methods, considering both a noise-free case and a noisy case. In both cases our new method achieves the best performance, and in the noisy case it considerably outperforms the state-of-the-art methods.
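The regularizers named above are easy to state concretely. A small sketch of the anisotropic TV norm applied to color-difference and color-sum signals (the full joint super-resolution optimization is, of course, not reproduced):

```python
import numpy as np

def tv_norm(x):
    """Anisotropic total-variation norm: sum of absolute finite differences."""
    return np.abs(np.diff(x, axis=0)).sum() + np.abs(np.diff(x, axis=1)).sum()

def color_tv_penalties(r, g, b):
    """TV of the color-difference (G-R, G-B) and color-sum (G+R, G+B) signals;
    near a sharp color edge, correct interpolation minimizes one of the two."""
    diff_tv = tv_norm(g - r) + tv_norm(g - b)
    sum_tv = tv_norm(g + r) + tv_norm(g + b)
    return diff_tv, sum_tv
```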
This paper describes a new method for fast auto-focusing in image capture devices, achieved using two defocused images. Two defocused images are taken at two pre-fixed lens positions, and the defocus blur level in each image is estimated using the Discrete Cosine Transform (DCT). These DCT values can be mapped to the distance from the capture device to the main object, so we can build a distance-versus-defocus-blur-level classifier. With this classifier, the relation between the two defocus blur levels gives the device the best-focused lens step. Ordinary auto-focusing such as Depth from Focus (DFF), also known as hill climbing, needs several defocused images whose high-frequency components are compared; in general, the process requires about half as many images as there are focus lens steps. Since the new method requires only two defocused images, it saves considerable focusing time and reduces shutter lag. Compared to existing Depth from Defocus (DFD) approaches that also use two defocused images, the new algorithm is simple yet as accurate as the DFF method. Because of this simplicity and accuracy, it can also be applied to fast 3D depth-map construction.
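A sketch of a DCT-based blur measure, under the assumption (ours, for illustration) that the fraction of DCT energy concentrated in the low-frequency corner serves as the blur level; the distance classifier built from two such values would be a lookup calibrated offline:

```python
import numpy as np
from scipy.fft import dctn

def blur_level(img, cutoff=8):
    """Fraction of DCT energy in the low-frequency corner: a defocused image
    concentrates its energy there, so a higher value means more blur."""
    energy = dctn(img.astype(float), norm="ortho") ** 2
    return energy[:cutoff, :cutoff].sum() / energy.sum()

# Two captures at the two pre-fixed lens positions give two blur levels;
# the pair indexes a calibrated table that returns the best focus step.
```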
This new color constancy method is based on the degree of polarization of the light reflected at the surface of an object. Subtracting at least two images taken under different polarization directions isolates the polarized, neutrally reflected portion of the light and eliminates the remitted, non-polarized colored portion.
Two experiments were designed to assess the performance of the procedure, one applied to multicolored objects and the other to objects with different surface characteristics. The results show that the mechanism of eliminating the remitted, non-polarized colored portions of light works very well. Different color pigments, independent of their color, appear suitable for measuring the color of the illumination.
The intensity and the degree of polarization of the reflected light depend significantly on the surface properties. The results exhibit a high accuracy in measuring the color of the illumination for glossy and matte surfaces. Only strongly scattering surfaces yield a weak signal level in the difference image and a reduced accuracy.
An embodiment is proposed to integrate the new method into digital cameras.
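A minimal sketch of the core subtraction step, assuming two registered captures through a polarizer at two orientations; the specular-weighted averaging is an illustrative choice, not the paper's exact estimator:

```python
import numpy as np

def illuminant_from_polarization(img_a, img_b):
    """Subtract two polarizer orientations: the non-polarized, body-reflected
    (colored) light cancels, leaving the partially polarized specular
    reflection, whose color is that of the illumination."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))  # H x W x 3
    weight = diff.sum(axis=2, keepdims=True)  # favor strongly specular pixels
    rgb = (diff * weight).sum(axis=(0, 1)) / weight.sum()
    return rgb / rgb.sum()  # illuminant chromaticity estimate
```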
Most cell-phone cameras today use CMOS sensors with ever higher pixel counts, which in turn results in smaller pixel sizes. To achieve good performance in current technologies, pixel structures are fairly complicated. This increasing complexity, coupled with optical constraints specific to cell-phone cameras, results in a non-uniform light response over the pixel array: a cell-phone camera sensor module typically exhibits a light fall-off of about 40% at the edge relative to the center. This high fall-off usually has a non-radial spatial distribution, making lens fall-off correction complicated. The standard correction is linear (i.e., a multiplicative gain), resulting in close to a 2x peripheral gain and a corrected image with lower dynamic range. To address this issue, a novel idea is explored in which the fall-off is used to increase the dynamic range of the captured image. Since a typical lens fall-off needs a gain of up to 2x at the edge versus the center, the fall-off can be thought of as a 2D neutral-density filter that lets up to 2x more light be sensed toward the periphery of the sensor. The proposed solution uses a 2D scaled-down gain map to correct the fall-off. For each pixel, an inflection point is calculated from the gain map and used to define the associated pixel transfer characteristic, which is linear up to the inflection point and logarithmic beyond it.
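A sketch of one plausible per-pixel lin-log transfer of this kind. The inflection rule (full scale divided by the local gain) and the logarithmic branch are assumptions chosen so the curve is continuous in value and slope; the paper derives its characteristic from its own 2D gain map:

```python
import numpy as np

def linlog_response(x, gain):
    """Per-pixel transfer curve: linear up to an inflection point derived
    from the fall-off gain map (signals normalized to full scale = 1.0,
    gain >= 1), logarithmic above it."""
    knee = 1.0 / gain                     # assumed inflection point per pixel
    linear = np.minimum(x, knee)
    overflow = np.maximum(x - knee, 0.0)
    return gain * (linear + knee * np.log1p(overflow / knee))
```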
This article explains the cause of the color fringing phenomenon that can be noticed in photographs, particularly on the edges of backlit objects. The nature of color fringing is optical and is related in particular to the difference in blur spots at different wavelengths; color fringing can therefore be observed in both digital and silver halide photography. The hypothesis that lateral chromatic aberration is the only cause of color fringing is discarded. The factors that can influence the intensity of color fringing are carefully studied, some of them specific to digital photography. A protocol to measure color fringing with very good repeatability is described, as well as a means of predicting color fringing from optical designs.
In optical imaging systems, specifically digital cameras, part of the incoming light flux is misdirected to undesired locations due to scattering, undesired reflections, diffraction, and lens aberrations. The portion due mainly to scattering and undesired reflections is called stray light; it reduces contrast and causes color inaccuracy in images. The point spread function (PSF) model for stray light is shift-variant and has been studied by Jansson et al. (1998) and Bitlis et al. (2007). In this paper, we keep the model's shift-variant nature and improve it by first normalizing it and then incorporating the shading effect inherent in the optical system. We then develop an efficient method to estimate the model parameters using a locally shift-invariant approximation, and finally reduce the stray light by deconvolution. We conducted extensive experiments with two camera models; the results show the reduction of stray light and thus an improvement in image quality and fidelity.
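As an illustration of a locally shift-invariant approximation (not the paper's estimation or deconvolution procedure), one can deconvolve tile by tile with the PSF frozen at each tile center. The sketch below leans on scikit-image's Wiener deconvolution and a caller-supplied, hypothetical psf_at function:

```python
import numpy as np
from skimage.restoration import wiener

def tilewise_deconvolve(img, psf_at, tile=128, balance=0.05):
    """Deconvolve each tile with the stray-light PSF evaluated at its center.
    psf_at(y, x) must return a small 2-D kernel for that image position."""
    out = np.empty_like(img, dtype=float)
    for y0 in range(0, img.shape[0], tile):
        for x0 in range(0, img.shape[1], tile):
            block = img[y0:y0 + tile, x0:x0 + tile].astype(float)
            psf = psf_at(y0 + tile // 2, x0 + tile // 2)
            out[y0:y0 + tile, x0:x0 + tile] = wiener(block, psf, balance,
                                                     clip=False)
    return out
```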
The study investigates the lossy compression of DSC raw data based on 12-bit baseline JPEG compression.
Computational simulations show that JPEG artefacts originate from the quantization of the DCT coefficients. Input noise is shown to be an appropriate means of avoiding these artefacts: stimulated by such noise, the JPEG encoder simply acts as a high-frequency noise generator.
The processing structure of a general compression model is introduced. The four color planes of an image sensor are separately compressed by a 12-bit baseline JPEG encoder. One-dimensional look-up tables allow an optimized adaptation of the JPEG encoder to the noise characteristics of the input signals. An idealized camera model is presumed to be dominated by photon noise; its noise characteristics can be optimally matched to the JPEG encoder by a common gamma function.
The gamma-adapted compression model is applied to an exemplary set of six raw images, and its performance in terms of compression ratio and compression noise is examined.
Optimally adjusted to the input noise, the compression procedure offers excellent image quality without any perceived loss of sharpness or increase in noise. The results show that the method achieves compression ratios of about 4:1 in practice, and the PSNR reaches about 60 dB over the complete signal range.
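The gamma matching admits a compact illustration: for photon-shot-noise-limited data, the standard deviation grows as the square root of the signal, so a square-root transfer curve makes the noise amplitude roughly uniform across code values. The square-root curve is the textbook variance stabilizer, used here as a stand-in for the paper's optimized gamma function:

```python
import numpy as np

def encode_lut(raw, full_scale=4095):
    """Square-root look-up table applied before the 12-bit JPEG encoder,
    equalizing photon-noise amplitude across the signal range."""
    x = raw.astype(float) / full_scale
    return np.round(np.sqrt(x) * full_scale).astype(np.uint16)

def decode_lut(coded, full_scale=4095):
    """Decoder-side inverse of the square-root transfer curve."""
    x = coded.astype(float) / full_scale
    return np.round(x ** 2 * full_scale).astype(np.uint16)
```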
We propose a complete digital camera workflow to capture and render high dynamic range (HDR) static scenes, from RAW sensor data to an output-referred encoded image. In traditional digital camera processing, demosaicing is one of the first operations performed after scene analysis, followed by rendering operations such as color correction and tone mapping. In our workflow, which is based on a model of retinal processing, most of the rendering steps are performed before demosaicing. This reduces the computational complexity, as only one third of the pixels are processed. This is especially important because our tone mapping operator applies both local and global tone corrections, which is usually needed to render high dynamic range scenes well. Our algorithms efficiently process HDR images with different keys and different content.
In this paper, we propose an efficient compression method for sampled color images obtained using a single sensor with a color filter array. The proposed method directly compresses the output of the single image sensor without first interpolating the missing colors. Since the amount of original image data from the single sensor is smaller than that of the full-color image data, the compressed data can be smaller and the encoding more efficient. Experimental results show that the proposed method provides improved performance compared to existing methods.
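A sketch of the sampling step implied here: the mosaic is split into its color planes, each of which can be handed to a standard encoder directly, with no demosaicing. The RGGB layout is an assumption; other CFA orders permute the slices:

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic (2-D array) into its four color planes,
    each one quarter the size of the full sensor output."""
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
```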
The analysis of images has always been an important aspect of improving the quality of photographs and photographic equipment. Due to the lack of metadata, it was mostly limited to images taken by experts under predefined conditions, and the analysis was also done by experts or required psychophysical tests. With digital photography and the EXIF metadata stored in the images, a lot of information can be gained from a semi-automatic or automatic image analysis if one has access to a large number of images. Although home printing is becoming more and more popular, the European market still has a few photofinishing companies with access to a large number of images; all printed images are stored for a certain period of time, adding up to several million images on servers every day. We have used these images to answer numerous questions and believe the answers are useful for increasing image quality by optimizing image processing algorithms. Test methods can be modified to fit typical user conditions, and future developments can be pointed in ideal directions.
Mobile imagers now possess multi-megapixel sensors. Blur caused by camera motion during the exposure is becoming more pronounced because the exposure time for the smaller pixel sizes has been increased to attain the same photon statistics.
We present a method of measuring human hand-eye coordination for mobile imagers. When the user tries to hold a steady position, the results indicate that there is a distinct linear-walk motion and a distinct random-walk motion even though no panning motion is intended. Using the video capture mode, we find that the frame-to-frame variation is typically less than 2.5 pixels (0.149 degrees). An algorithm has been devised that permits the camera to determine in real time the optimum moment for the exposure to begin so as to best minimize motion blur.
We also observed edge differences between fully populated "direct" image sensors and Bayer-pattern sensors. Because dominant horizontal and vertical linear motions are present, chromatic shifts are observed in the Bayer sensor in the direction of motion for certain color transitions.
Simulation of the imaging pipeline is an important tool for the design and evaluation of imaging systems, and one of the most important requirements for an accurate simulation tool is the availability of high-quality source scenes. The dynamic range of images depends on multiple elements in the imaging pipeline, including the sensor, the digital signal processor, and the display device. High dynamic range (HDR) scene spectral information is critical for an accurate analysis of the effect of these elements on the dynamic range of the displayed image. Moreover, typical digital imaging sensors are sensitive well beyond the visible range of wavelengths, so spectral information spanning the sensitivity range of the imaging sensor is required for the analysis and design of pipeline elements that are affected by IR energy. Although HDR scene data with visible and infrared content are available from remote sensing resources, such imagery representing more conventional everyday scenes is scarce. In this paper, we address both of these issues and present a method to generate a database of HDR images that represent radiance fields in the visible and near-IR range of the spectrum. The proposed method uses only conventional consumer-grade equipment and is very cost-effective.
In many situations it is desirable to obtain an image that visually describes measured lens MTF data. Since the sharpness of a camera lens changes continuously across the field of view, the characteristics of the lens need to be determined at many positions within the image. In short, the proposed simulation method consists of two parts. First, the point spread function (PSF) at a limited number of field positions is constructed using Zernike polynomials; the polynomial coefficients at a given field position are determined by fitting the MTF calculated from these PSFs to the measured MTF data. The second part interpolates the Zernike coefficients for all other relevant positions within the image, so that a sufficiently accurate PSF can be found at any arbitrary field point. Using a generalized, non-translation-invariant summation of PSFs, the sharpness at any field point in the image can then be simulated. The system also has the advantage that the sharpness at different focusing positions can be determined quite easily, and it is a fairly simple matter to include effects such as distortion and vignetting. In the present paper, examples of simulations are shown, and advantages as well as drawbacks of the method are discussed.
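A minimal sketch of the first part, assuming a circular pupil and two illustrative Zernike terms (defocus and astigmatism, coefficients in waves, not fitted to any measurement); the PSF is the squared modulus of the pupil field's Fourier transform, and the MTF follows from the PSF:

```python
import numpy as np

def psf_from_zernike(n=256, defocus=0.5, astig=0.2):
    """PSF and MTF from a pupil with Zernike defocus (Z4) and astigmatism (Z6)."""
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = (rho <= 1.0).astype(float)
    phase = defocus * np.sqrt(3) * (2 * rho**2 - 1)           # Zernike defocus
    phase += astig * np.sqrt(6) * rho**2 * np.cos(2 * theta)  # Zernike astigmatism
    field = pupil * np.exp(2j * np.pi * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    psf /= psf.sum()
    mtf = np.abs(np.fft.fft2(np.fft.ifftshift(psf)))  # MTF = |FT of the PSF|
    return psf, mtf
```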
A filter optimization was investigated to design digital camera color filters that achieve high color accuracy and low image noise when accounting for a sensor's inherent photon shot noise. In the computer simulation, Gaussian-type spectral sensitivity curves along with an IR blocking filter were used. When only color reproduction was considered, the best peak wavelengths for the R, G, and B channels were 600, 550, and 450 nm, respectively; but when both color reproduction and photon shot noise were considered, the peak wavelength of the R channel should be longer (620-630 nm). Increasing this wavelength reduced the noise fluctuation along the a* axis, the most prominent noise component in the former case, but color accuracy was reduced. The trade-off between image noise and color accuracy as a function of the R-channel peak wavelength led to a four-channel camera consisting of two R channels plus G and B. One of the two R channels is selected according to the difference in signal levels, reducing noise while maintaining accurate color reproduction.
A general trend in the CMOS image sensor market is increasing resolution (a larger number of pixels) while keeping a small form factor by shrinking photosite size. This article discusses the impact of this trend on some of the main attributes of image quality. The first example is image sharpness: a smaller pitch theoretically allows a larger limiting resolution, as derived from the Modulation Transfer Function (MTF), but recent sensor technologies (1.75 μm, and soon 1.45 μm) with a typical f/2.8 aperture are clearly reaching the size of the diffraction blur spot. A second example is the impact on pixel light sensitivity and image sensor noise: for photonic noise, the signal-to-noise ratio (SNR) is typically a decreasing function of resolution. To evaluate whether shrinking the pixel size can benefit image quality, the trade-off between spatial resolution and light sensitivity is examined by comparing the image information capacity of sensors with varying pixel size. A theoretical analysis is presented that takes into consideration measured and predictive models of the pixel performance degradation and improvement associated with CMOS imager technology scaling. The analysis is completed by a benchmark of recent commercial sensors with different pixel technologies.
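The diffraction point can be checked with one line of arithmetic: at f/2.8 and 550 nm (green light), the Airy-disk diameter 2.44*lambda*N is about 3.8 μm, already larger than the pixel pitches in question:

```python
# Airy-disk diameter versus pixel pitch at f/2.8 for green light (550 nm).
airy_um = 2.44 * 0.55 * 2.8  # ~3.76 um
for pitch_um in (2.2, 1.75, 1.45):
    print(f"{pitch_um} um pixel: Airy disk spans {airy_um / pitch_um:.1f} pixels")
```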
We describe a method for simulating the output of an image sensor for a broad array of test targets. The method uses a modest set of sensor calibration measurements to define the sensor parameters; these parameters are used by an integrated suite of Matlab software routines that simulate the sensor and create output images. We compare the simulations of specific targets to measured data for several imaging sensors with very different imaging properties; the simulation captures the essential features of the images created by these sensors. Finally, we show that by specifying the sensor properties, the simulations can predict sensor performance for natural scenes that are difficult to measure with laboratory apparatus, such as scenes with high dynamic range or low light levels.
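The paper's suite is Matlab-based; as a rough Python analog, a minimal photons-to-digital-numbers model with shot noise, full-well clipping, read noise, and ADC quantization looks like this (all parameter values are illustrative, not calibrated to any real sensor):

```python
import numpy as np

def simulate_sensor(photons, qe=0.5, full_well=9000.0, read_noise_e=3.0,
                    bits=10, rng=np.random.default_rng(0)):
    """Map a photon-count image to digital numbers through a toy sensor model."""
    electrons = rng.poisson(photons * qe).astype(float)        # photon shot noise
    electrons = np.minimum(electrons, full_well)               # full-well clipping
    electrons += rng.normal(0.0, read_noise_e, photons.shape)  # read noise
    gain = (2**bits - 1) / full_well                           # DN per electron
    return np.clip(np.round(electrons * gain), 0, 2**bits - 1).astype(np.uint16)
```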
In this paper, we propose a space-variant image restoration method in which different local regions of a given image are deblurred with different, locally estimated deconvolution filters. The depth of each local block is estimated roughly, exploiting the fact that the optical module exhibits different indices of refraction for different wavelengths of light. Based on this depth, each region of the image is restored using the sharpest of the three color channels (red, green, blue). Then, to prevent discontinuities between the separately restored image regions, we use piecewise linear interpolation over overlapping regions. In practice, the method was applied to a 3-megapixel camera module to confirm the effect of the proposed algorithm.
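A sketch of the channel-selection idea: with longitudinal chromatic aberration, the in-focus wavelength changes with object depth, so the locally sharpest channel (here scored by gradient energy, an illustrative choice) hints at the depth and serves as the restoration reference:

```python
import numpy as np

def sharpest_channel(block):
    """Index (0=R, 1=G, 2=B) of the channel with the highest gradient energy
    in an H x W x 3 image block."""
    energies = [
        (np.diff(block[..., c], axis=0) ** 2).sum()
        + (np.diff(block[..., c], axis=1) ** 2).sum()
        for c in range(3)
    ]
    return int(np.argmax(energies))
```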
This article proposes new measurements for evaluating the image quality of a camera, particularly its reproduction of colors. The concept of gamut is usually a topic of interest, but it is much better adapted to output devices than to capture devices (sensors), and it does not take other important characteristics of the camera into account, such as noise. Color sensitivity, in contrast, is a global measurement relating the raw noise to the spectral sensitivities of the sensor, and it provides an easy ranking of cameras. For an in-depth analysis of noise versus color rendering, the concept of gamut SNR is introduced, describing the set of colors achievable at a given SNR (signal-to-noise ratio). This representation conveniently visualizes which part of the gamut is most affected by noise and can be useful for camera tuning as well.
The automatic exposure control (AEC) of a camera phone is typically a simple function of the brightness of the image: the brightness, or intensity, value generated from a frame is compared to a predefined target. If the intensity value is less than the target, the exposure is increased; if it is greater, the exposure is decreased.
Is an intensity-target statistic a good model for AEC? To answer this question, we conducted psychophysical experiments to understand subjective preferences. We used a high-end DSLR to capture 64 different outdoor and indoor scenes, each at five different exposure values (EV), from EV-1 to EV+1 in half-EV increments. Subjects were shown the five exposures of each scene and asked to rank them by preference.
The collected data were analyzed along several dimensions: preferences as a function of the subjects, the EV levels, the image quality scores, and the images themselves. Our analysis concludes that a dynamic intensity target is needed to match the exposure preferences collected from our subjects.
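For reference, the simple intensity-target AEC the paper questions can be stated in a few lines; the 18% target and the deadband are conventional assumptions, not values from the paper:

```python
def update_exposure(exposure, frame_mean, target=0.18, deadband=0.02):
    """One AEC step: scale exposure toward the intensity target; increase it
    when the frame mean is below target, decrease it when above."""
    if abs(frame_mean - target) <= deadband:
        return exposure
    return exposure * target / max(frame_mean, 1e-6)
```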
The paper presents an effort to develop a maximally compact high dynamic range image compression format and shows that a chromatic color coordinate system plays a central role in such a development. Important design considerations, such as the conditions and criteria for data accuracy, efficiency, and the characteristics of the color space, are addressed along the way. An additional trade-off between data precision and data size is discussed, and a new feature of parameterized precision is introduced. A detailed comparison of the Bef, Luv, and Yxy chromatic coordinates is performed, and the special case of color space singularities is analyzed. An implementation of the LinLogBef imaging format is presented and compared against the OpenEXR and Radiance HDR formats in terms of compression ratio, relative error, and dynamic range. Other benefits provided by LinLogBef are also discussed, such as the format's convenience for image editing operations.
Since its standardization, JPEG 2000 has found its way into many different applications, such as DICOM (digital imaging and communications in medicine), satellite photography, military surveillance, the digital cinema initiative, professional video cameras, and so on. The unified framework of the JPEG 2000 architecture makes practical, high-quality real-time compression possible even in video mode, i.e., Motion JPEG 2000. In this paper, we present a study of the impact on compression of using dynamic code block sizes instead of the fixed code block size specified in the JPEG 2000 standard. The simulation results show no significant impact on compression when dynamic code block sizes are used. In this study, we also describe the advantages of using dynamic code block sizes.
Cell-phone cameras generally use wide-angle, fixed-focal-length mini lenses (4-6 mm) with a fixed aperture (usually f/2.8). As a result, these mini lenses have very short hyperfocal distances; for example, the estimated hyperfocal distance for a 3.1-MP cell-phone camera module with a 5.6-mm mini lens is only about 109 cm, which covers focused-object distances from about 55 cm to infinity. This combination of optical characteristics can be used effectively to achieve (a) a faster auto-focusing process based on a small number of predefined, non-uniform lens-position intervals, and (b) depth-map generation (coarse or fine, depending on the number of focus regions of interest, ROIs) that can be used for different image capture and processing operations, such as flash/no-flash decision making. Both processes were implemented, tested, and validated under different lighting conditions and scene contents.
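The quoted figures follow from the standard hyperfocal formula H = f^2/(N*c) + f; the circle of confusion below is back-solved to reproduce the abstract's ~109 cm and is therefore an assumption, not a value given in the paper:

```python
def hyperfocal_mm(f_mm, f_number, coc_mm):
    """Standard hyperfocal distance: H = f^2 / (N * c) + f."""
    return f_mm**2 / (f_number * coc_mm) + f_mm

H = hyperfocal_mm(5.6, 2.8, 0.0103)                 # assumed CoC of ~10.3 um
print(f"hyperfocal distance: {H / 10:.0f} cm")      # ~109 cm
print(f"near focus limit:    {H / 2 / 10:.0f} cm")  # ~55 cm to infinity
```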