In the past century, every component of an optical system has become lighter and smaller except the lenses. Typical lenses have too few degrees of freedom (just the refractive index and the front and back surface shapes) to meet the demands of the vast array of modern optical systems that collect, project, or otherwise manipulate light. (Even in imaging systems, where computational power has the potential to eliminate the tight coupling between lenses and performance, more capable lenses would expand the trade space available to optical designers.)
We introduce the linear system transfer matrix H as a means to interpret both classical optical sensing, such as imaging and spectroscopy, and newer computational sensing approaches. This representation allows us to identify PSF engineering and multiplexed measurement as duals of each other. We also consider several new computational sensing systems and identify the corresponding system matrix for each.
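As a minimal sketch of how a single matrix H can describe both direct and multiplexed measurement (the matrices below are hypothetical toy examples, not systems from the paper):

```python
import numpy as np

# Toy linear sensing model y = H x: x is the scene vector, H the system
# transfer matrix, y the measurement vector.
rng = np.random.default_rng(0)
n = 8
x = rng.random(n)                      # unknown scene

# Direct imaging: H is (ideally) the identity; each pixel sees one scene element.
H_direct = np.eye(n)

# Multiplexed sensing: each measurement mixes many scene elements.
# Here, a hypothetical 0/1 checkerboard mask plus the identity, chosen
# only so the demo matrix is invertible.
H_mux = (np.indices((n, n)).sum(axis=0) % 2).astype(float) + np.eye(n)

for H in (H_direct, H_mux):
    y = H @ x                                        # noiseless measurement
    x_hat = np.linalg.lstsq(H, y, rcond=None)[0]     # linear reconstruction
    assert np.allclose(x_hat, x)
```

In the noiseless case both system matrices recover the scene exactly; the practical differences between PSF engineering and multiplexing appear once noise and the conditioning of H enter the reconstruction.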
The MONTAGE program sponsored by the Microsystems Technology Office of the Defense Advanced Research
Projects Agency (DARPA) resulted in the demonstration of a novel approach to designing compact imaging systems.
This approach was enabled by an unusual four-fold annular lens originally designed and demonstrated for operation
exclusively in the visible spectral band. To accomplish DARPA's goal of an ultra-thin imaging system, the folded optic
was fabricated by diamond-turning concentric aspheric annular zones on both sides of a CaF2 core. The optical
properties of the core material ultimately limit the operating bandwidth of such a design. We present the latest results of
an effort to re-engineer and demonstrate the MONTAGE folded optics for imaging across a broad spectral band. The
broadband capability is achieved by taking advantage of a new design that substitutes a hollow core configuration for the solid core. Along with enabling additional applications for the folded optics, the hollow-core design offers the potential of reducing weight and cost in comparison to an alternative solid-core design. We present new results characterizing the performance of a lens based on the new design and applied to long-wave infrared imaging.
Light field cameras can simultaneously capture the spatial location and angular direction of light rays emanating from a
scene. By placing a variable bandpass filter in the aperture of a light field camera, we demonstrate the ability to
multiplex the visible spectrum over this captured angular dimension. The result is a novel design for a single-snapshot
multispectral imager, with digitally reconstructed images exhibiting reduced spatial resolution proportional to the
number of captured spectral channels. This paper explores the effect of this spatial-spectral resolution tradeoff on
camera design. It also examines the concept of utilizing a non-uniform pinhole array to achieve varying spectral and
spatial capture over the extent of the sensor. Images are presented from several different light field and variable bandpass filter designs, and limitations and sources of error are discussed.
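The stated spatial-spectral tradeoff can be made concrete with back-of-envelope arithmetic (sensor size and channel counts below are hypothetical, purely for illustration):

```python
# Dividing a fixed sensor among N spectral channels leaves 1/N of the
# pixels per reconstructed spectral image.
sensor_pixels = 2048 * 2048            # hypothetical sensor

for n_channels in (4, 9, 16):
    per_channel = sensor_pixels // n_channels
    side = int(per_channel ** 0.5)     # approximate square image per channel
    print(f"{n_channels} channels -> {per_channel} pixels (~{side}x{side})")
```

A non-uniform pinhole array, as explored in the paper, lets this division vary across the sensor instead of being fixed globally.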
Many applications require the ability to image a scene in several different narrow spectral bands simultaneously.
Absorption filters commonly used to generate RGB color filters do not have the flexibility and narrow band filtering
ability. Conventional multi-layer dielectric filters require control of film thickness to change the resonant wavelength.
This makes it difficult to fabricate a mosaic of multiple narrow spectral band transmission filters monolithically. This
paper extends the previous work in adjusting spectral transmission of a multi-layer dielectric filter by drilling a periodic
array of subwavelength holes through the stack. Multi-band photonic crystal filters were modeled and optimized for a
specific case of filtering six optical bands on a single substrate. Numerical simulations showed that there exists a
particular air hole periodicity which maximizes the minimum hole diameter. Specifically for a stack of SiO2 and Si3N4 with the set of filtered wavelengths (nm): 560, 576, 600, 630, 650, and 660, the optimal hole periodicity was 282 nm. This resulted in a minimum hole diameter of 90 nm and a maximum diameter of 226 nm. Realistic fabrication tolerances
were considered such as dielectric layer thickness and refractive index fluctuations, as well as vertical air hole taper. It
was found that individual layer fluctuations have a minor impact on filter performance, whereas hole taper produces a
large peak shift. The results in this paper provide a reproducible methodology for similar multi-band monolithic filters in
either the optical or infrared regimes.
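The optimization structure described above (choose the periodicity that maximizes the smallest required hole diameter, since the smallest hole is the hardest to fabricate) can be sketched as a simple minimax scan. The diameter model below is a made-up analytic stand-in for the paper's numerical simulations; only the search structure is the point:

```python
# Toy stand-in for the paper's optimization. All numbers in
# required_diameter() are hypothetical -- real required diameters come
# from full electromagnetic simulation of the layered stack.
targets_nm = [560, 576, 600, 630, 650, 660]   # filtered bands from the abstract

def required_diameter(lam_nm, period_nm):
    # hypothetical monotone wavelength dependence, with a made-up penalty
    # away from a 282 nm periodicity standing in for simulated data
    base = 230.0 * (700.0 - lam_nm) / 140.0
    return base * (1.0 - ((period_nm - 282.0) / 150.0) ** 2)

def smallest_hole(period_nm):
    return min(required_diameter(l, period_nm) for l in targets_nm)

# minimax scan: keep the periodicity that maximizes the smallest hole
best_period = max(range(200, 361, 2), key=smallest_hole)
```

With real simulation data in place of the stand-in, the same scan yields the paper's reported optimum of 282 nm periodicity.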
We describe an approach to polarimetric imaging based on a unique folded imaging system with an annular aperture.
The novelty of this approach lies in the system's collection architecture, which segments the pupil plane to measure the
individual polarimetric components contributing to the Stokes vectors. Conventional approaches rely on time sequential
measurements (time-multiplexed) using a conventional imaging architecture with a reconfigurable polarization filter, or
measurements that segment the focal plane array (spatial multiplexing) by super-imposing an array of polarizers. Our
approach achieves spatial multiplexing within the aperture in a compact, lightweight design. The aperture can be
configured for sequential collection of the four polarization components required for Stokes vector calculation or in any
linear combination of those components on a common focal plane array. Errors in calculating the degree of polarization
caused by the manner in which the aperture is partitioned are analyzed, and approaches for reducing that error are
investigated. It is shown that reconstructing individual polarization filtered images prior to calculating the Stokes
parameters can reduce the error significantly.
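The four-component calculation referred to above follows the standard textbook relations for the linear Stokes parameters; the sketch below is generic polarimetry, not this system's specific reconstruction:

```python
import numpy as np

# Linear Stokes parameters from four polarizer-filtered intensity images
# taken at 0, 45, 90, and 135 degrees (standard relations).
def linear_stokes(i0, i45, i90, i135):
    s0 = i0 + i90                       # total intensity
    s1 = i0 - i90                       # horizontal minus vertical
    s2 = i45 - i135                     # +45 minus -45 degrees
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    return s0, s1, s2, dolp

# Example: unit-intensity, fully horizontally polarized light
# (Malus's law gives I45 = I135 = 0.5).
s0, s1, s2, dolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
```

Because each parameter is a difference of two filtered images, any misregistration between the aperture segments propagates directly into S1 and S2, which is why reconstructing the individual filtered images first reduces the error.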
An investigation of power and resolution for laser ranging sensors is performed in relation to sense and avoid
requirements of miniature unmanned aircraft systems (UAS). Laser rangefinders can be useful if not essential
complements to video or other sensing modalities in a sense and avoid sensor suite, particularly when applied to
miniature UAS. However, previous studies addressing sensor performance requirements for sense and avoid on UAS
have largely concentrated, either explicitly or implicitly, on passive imaging techniques. These requirements are then
commonly provided in terms of an angular resolution defined by a detection threshold. By means of a simple geometric
model, it is assumed that an imaging system cannot distinguish an object that subtends less than a minimum number of
detector pixels. But for sensors based on active ranging, such as laser rangefinders and LADAR, detection probability is
coupled to the optical power of the laser transmitter. This coupling enables the sensors to achieve sub-pixel resolution,
or resolution better than the instantaneous field-of-view, and to compensate for insufficient angular resolution by
increasing transmitter power. Consequently, when considering sense and avoid detection requirements for laser
rangefinders or LADAR, a tradeoff emerges between resolution and power, which, owing to the inherent relationship of
size and weight to system resolution, translates to a tradeoff between resolution and sensor size, weight, and power. In
this presentation, we investigate the existence of an optimum compromise between sensor resolution and power,
concentrating on platforms with particularly challenging payload limitations.
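The power-resolution coupling can be illustrated with a simplified link budget for an extended Lambertian target (all parameter values below are hypothetical, not taken from the study):

```python
# Simplified laser-ranging link budget: for a Lambertian target filling
# the beam, received power is P_r = P_t * eta * rho * (D / (2R))^2, so the
# transmit power needed to hold a fixed detection threshold grows as R^2
# and as 1/D^2 -- a smaller, lighter receiver aperture must be bought
# back with more laser power.
def required_tx_power(range_m, aperture_m, rho=0.3, eta=0.5, p_detect=1e-9):
    # rho: target reflectance, eta: system efficiency, p_detect:
    # detection-threshold received power (all hypothetical values)
    geom = eta * rho * (aperture_m / (2.0 * range_m)) ** 2
    return p_detect / geom

p_small = required_tx_power(1000.0, 0.02)   # 2 cm aperture at 1 km
p_large = required_tx_power(1000.0, 0.04)   # 4 cm aperture at 1 km
# halving the aperture diameter costs 4x the transmit power
```

This quadratic exchange between aperture (hence sensor size and weight) and laser power is the tradeoff whose optimum the presentation investigates.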
Computational imaging systems are characterized by a joint design and optimization of front end optics, focal plane
arrays and post-detection processing. Each constituent technology is characterized by its unique scaling laws. In this
paper we will attempt a synthesis of the behavior of individual components and develop scaling analysis of the jointly
designed and optimized imaging systems.
Very simple visual aids can be designed to convey sophisticated concepts in optics to students ranging from 5th grade to first year graduate students. In this talk I will outline several specific classroom experiments illustrating concepts in wave optics that can be performed with computer generated holograms.
The DARPA-funded Consortium for Optical and Optoelectronic Technologies for Computing (CO-OP) recently completed the first DOE foundry run, delivering ten samples to each of nineteen users, each with a unique design. The binary optics process was used to provide a maximum of eight phase levels at a design wavelength of 850 nm. Averaged over all users and all samples, an etch depth error of one percent and alignment accuracy within 0.25 micron were achieved. This paper summarizes the details of the process results.
A cost-effective way of producing prototype multi-level phase diffractive optical elements is discussed. It is based on combining multiple projects on a single wafer to spread the non-recurring engineering costs over many users, thereby reducing the cost to each. In this paper we discuss issues of cost versus design that were encountered in a multi-project foundry offered by Honeywell.
Spatial coherence in optical processing can be exploited to implement a wide variety of image processing functions. While fully coherent systems tend to receive the most attention, spatially noncoherent systems can often provide equivalent functionality while offering significant advantages over coherent systems with regard to noise performance and system robustness. The term noncoherent includes both partially coherent and fully incoherent illumination. In addition to the noise immunity advantage, noncoherent diffraction-based processors have relaxed requirements on pupil plane spatial light modulator characteristics. In this paper we provide a discussion of the tradeoffs between coherent and noncoherent processing, taking into account the limited performance characteristics of commercially available spatial light modulators. The advantages of noncoherent processing are illustrated with numerical and experimental results corresponding to three different noncoherent architectures.
An optical outer product architecture is presented which performs residue arithmetic operations via position-coded look-up tables. The architecture can implement arbitrary integer-valued functions of two independent variables in a single gate delay. The outer product configuration possesses spatial complexity (gate count) which grows linearly with the size of the modulus, and therefore with the system dynamic range, in contrast to traditional residue look-up tables, which grow quadratically in spatial complexity. The use of linear arrays of sources and modulators leads to power requirements that also grow linearly with the size of the modulus. The design and demonstration of a proof-of-concept experiment are also presented.
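Position-coded residue lookup can be sketched in a few lines (generic residue-number-system technique; the modulus and the function tabulated below are hypothetical examples, not the paper's hardware):

```python
import numpy as np

# A residue r mod m is position-coded as a one-hot vector with a 1 at
# position r. The outer product of two such vectors selects a single
# entry of a tabulated function: one table read, i.e. a single gate delay.
m = 7

def one_hot(r):
    v = np.zeros(m)
    v[r % m] = 1.0
    return v

f = lambda a, b: (a * b + a + 3) % m             # arbitrary integer-valued function
table = np.array([[f(i, j) for j in range(m)] for i in range(m)])

a, b = 4, 6
select = np.outer(one_hot(a), one_hot(b))        # position-coded (a, b) address
result = int((select * table).sum())             # single-lookup evaluation
```

Note that the m-by-m table here is the traditional quadratic-cost lookup; the outer product architecture described above replaces it with linear arrays of sources and modulators so that component count grows only linearly in the modulus.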
Optical techniques have been investigated for information processing since the 1950s. This paper provides a unified view of different approaches to optical information processing and computing. Such a view serves to bring out interrelations (similarities and differences) between seemingly different topics and thereby identify a common technological infrastructure for this broad discipline. The emerging technology of "smart pixels" is also discussed, and a framework for organizing different smart-pixel approaches is presented.
Exemplar-based neural net classifiers enjoy extremely rapid learning procedures and are particularly suitable for analog optical hardware implementations. The winner-take-all (WTA) network is a key component in exemplar-based neural net classifiers as well as in optical competitive learning architectures. In this paper, we present an optical WTA network based on novel electron trapping (ET) materials. The mathematical model has been modified for the optical implementation. All the neuron operations required by the WTA network, such as self-excitation, lateral inhibition, and thresholding, are performed by a single ET device.