Shot noise is fundamental to photon detection. In image sensors, there is an opportunity to incorporate lateral processing that reduces both shot noise and thermal noise. Based on a Bayesian argument, we derive a noise-smoothing model that suppresses noise while preserving image discontinuities due to scene structure. Further, we show a possible focal-plane solver for this model using a compact electronic network. Simulated experimental results are presented, and similarities with human vision are discussed.
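The abstract does not give the model's equations, but the idea of smoothing noise while preserving scene-structure discontinuities can be sketched with a simple edge-stopping diffusion, in which each pixel relaxes toward its neighbours with weights that vanish across large intensity jumps. The function name, the Gaussian weight `g`, and the parameters `lam`, `kappa`, and `iters` are illustrative assumptions, not the authors' model:

```python
import numpy as np

def smooth_preserving_edges(img, lam=0.1, kappa=0.2, iters=30):
    """Illustrative edge-preserving smoothing (not the paper's exact model).

    Each pixel moves toward its four neighbours; large differences
    (likely scene discontinuities) get near-zero weight, small ones
    (likely noise) are averaged out.
    """
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # edge-stopping weight
    for _ in range(iters):
        # signed differences to the four neighbours (periodic boundary)
        n = np.roll(u, -1, 0) - u
        s = np.roll(u, 1, 0) - u
        e = np.roll(u, -1, 1) - u
        w = np.roll(u, 1, 1) - u
        # weighted relaxation toward neighbours
        u += lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u
```

On a noisy step edge, the flat regions are smoothed while the step itself survives, which is the qualitative behaviour the abstract describes; a resistive/electronic network can implement the same local relaxation in the focal plane.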
KEYWORDS: Reflectivity, Visual process modeling, Lithium, Control systems, Mathematical modeling, Human vision and color perception, Image filtering, Linear filtering, Image segmentation, Modulation
Human vision routinely compensates for the illumination field and is mostly sensitive to scene reflectance. This paper presents a biologically inspired mathematical model that estimates the illumination field of a scene and compensates for it, producing an output image that is modulated mostly by the scene reflectance. Since the illumination field is responsible for wide dynamic range variations in scenes, the present model can be seen as an approach to handling wide-dynamic-range scenes. The model can be conveniently implemented in an analog silicon retina incorporating a modified cellular neural network for the computation of the illumination field. We present several numerically obtained results on scenes with widely varying illumination conditions.
Computation in artificial perceptual systems assumes that appropriate and reliable sensory information about the environment is available. However, today's sensors cannot guarantee optimal information at all times. For example, when an image from a CCD camera saturates, the entire vision system fails regardless of how 'algorithmically' sophisticated it is. The principal goal of sensory computing is to extract useful information about the environment from 'imperfect' sensors. This paper attempts to generalize our experience with smart vision sensors and to provide a direction and illustration for exploiting the complex spatio-temporal interaction among image formation, signal detectors, and on-chip processing to extract a surprising amount of useful information from on-chip systems. The examples presented include: VLSI sensory computing systems for adaptive imaging, ultrafast feature tracking with attention, and ultrafast range imaging. Using these examples, we illustrate how sensory computing can extract unique, rich, and otherwise unobtainable sensory information when an appropriate balance is maintained between sensing modality, algorithms, and available technology.