This PDF file contains the front matter associated with SPIE Proceedings Volume 7668, including the Title Page, Copyright information, Table of Contents, Introduction, and the Conference Committee listing.
Three-dimensional (3D) imaging technologies have considerable potential for aiding military operations in areas such as
reconnaissance, mission planning and situational awareness through improved visualisation and user-interaction. This
paper describes the development of fast 3D imaging capabilities from low-cost, passive sensors. The two systems
discussed here are capable of passive depth perception and of recovering 3D structure from a single electro-optic sensor
attached to an aerial vehicle that is, for example, circling a target. Based on this example, the proposed method has been
shown to produce high-quality results when positional data for the sensor are known, and also in the more challenging case
when the sensor geometry must be estimated from the input imagery alone. The methods described exploit prior
knowledge concerning the type of sensor that is used to produce a more robust output.
Lock-in imaging enables high-contrast imaging in adverse conditions by exploiting a modulated light source and
homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf
parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we
are able to present a low-cost, high-resolution, high-sensitivity camera with applications in search and rescue, identification
friend or foe (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for
dual-band multispectral imaging or high-dynamic-range imaging, increasing flexibility in different operational
settings.
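The underlying principle can be illustrated in a few lines of code. The sketch below is a generic software homodyne demodulation of a frame stack, assuming a sinusoidally modulated source and a known modulation frequency; the frame rate, frequency and array sizes are placeholders, and this is only the principle, not the paper's spatial-light-modulator implementation.

```python
import numpy as np

def lockin_demodulate(frames, mod_freq_hz, frame_rate_hz):
    """Generic software homodyne (lock-in) demodulation of a frame stack.

    frames: (N, H, W) array captured while the light source is intensity-
    modulated at mod_freq_hz. Correlating each pixel's time series with
    in-phase and quadrature references recovers the modulated component
    and rejects unmodulated background light.
    """
    n = frames.shape[0]
    t = np.arange(n) / frame_rate_hz
    ref_i = np.cos(2 * np.pi * mod_freq_hz * t)   # in-phase reference
    ref_q = np.sin(2 * np.pi * mod_freq_hz * t)   # quadrature reference
    i_chan = np.tensordot(ref_i, frames, axes=(0, 0)) / n
    q_chan = np.tensordot(ref_q, frames, axes=(0, 0)) / n
    amplitude = 2.0 * np.hypot(i_chan, q_chan)    # modulated signal per pixel
    phase = np.arctan2(q_chan, i_chan)
    return amplitude, phase

# Placeholder data: 200 frames at 120 fps, source modulated at 10 Hz.
frames = np.random.rand(200, 64, 64)
amp, ph = lockin_demodulate(frames, mod_freq_hz=10.0, frame_rate_hz=120.0)
```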
A field deployable hyperspectral imager utilizing chromotomography (CT), with a direct vision prism (DVP)
as the dispersive element, has been constructed at the Air Force Institute of Technology (AFIT). A "shift and
add" reconstruction algorithm was used to resolve spectral and spatial content of the collected data. The AFIT
instrument is currently the fastest known imaging DVP based hyperspectral CT instrument of its type and is
a prototype for a space-based system. During testing, the imager captured images at rates up to 900 frames per
second (fps) and acquired a data cube in 55 ms. This instrument has the ability to capture spatial
and spectral data of static and transient scenes. The imager captured spectral data of a rapidly
evolving scene (a firecracker detonation) lasting approximately 0.12 s. Spectral results included potassium and
sodium emission lines present during the explosion and an absorption feature as the fireball extinguished. Spatial
and spectral reconstruction of a scene in which an explosion occurs during the middle of the collection period
is also presented in this paper. The instrument is capable of acquiring data required to identify, classify and
characterize transient battlespace events, such as explosions.
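The following toy sketch illustrates the general idea of a "shift and add" reconstruction for one candidate wavelength, assuming a rotating direct-vision prism whose dispersion displaces each wavelength radially from the undeviated position; the geometry, angle sampling and array sizes are placeholders, not the AFIT instrument's calibration.

```python
import numpy as np

def shift_and_add(frames, prism_angles_rad, dispersion_px):
    """Toy 'shift and add' reconstruction for one candidate wavelength.

    frames           : (N, H, W) stack, one frame per prism rotation angle
    prism_angles_rad : (N,) prism rotation angle for each frame
    dispersion_px    : assumed radial displacement (pixels) of the candidate
                       wavelength from the undeviated image position
    Each frame is shifted back along the instantaneous dispersion direction
    and the stack is averaged, so features at this wavelength add coherently
    while other wavelengths blur out.
    """
    n, h, w = frames.shape
    recon = np.zeros((h, w))
    for frame, theta in zip(frames, prism_angles_rad):
        dx = int(round(dispersion_px * np.cos(theta)))
        dy = int(round(dispersion_px * np.sin(theta)))
        recon += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    return recon / n

# Reconstruct one spectral plane for a wavelength displaced by ~12 pixels.
frames = np.random.rand(36, 128, 128)           # e.g. 36 prism angles
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
plane = shift_and_add(frames, angles, dispersion_px=12.0)
```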
VTT Technical Research Centre of Finland has developed a new miniaturized staring hyperspectral imager with a weight
of 350 g making the system compatible with lightweight UAS platforms. The instrument is able to record 2D spatial
images at the selected wavelength bands simultaneously. The concept of the hyperspectral imager has been published in
SPIE Proc. 7474 [1]. The operational wavelength range of the imager can be tuned within 400-1100 nm and the
spectral resolution is 5-10 nm (FWHM). Presently the spatial resolution is 480 × 750 pixels, but it can be
increased simply by changing the image sensor. The field of view of the system is 20 × 30 degrees and the ground pixel size
at 100 m flying altitude is around 7.5 cm. The system contains batteries, an image acquisition control system and memory
for the image data. It can operate autonomously, recording hyperspectral data cubes continuously, or be controlled by the
autopilot system of the UAS. The new hyperspectral imager prototype was first tested in co-operation with the Flemish
Institute for Technological Research (VITO) on their UAS helicopter. The instrument was configured for the spectral
range 500-900 nm, selected for vegetation and natural-water monitoring applications. The design of the UAS
hyperspectral imager and its characterization results, together with an analysis of the spectral data from the first test flights,
will be presented.
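As a quick sanity check of the quoted figures, the ground pixel size follows from the field of view, pixel count and altitude under a simple nadir-view, flat-terrain assumption (this is only a back-of-the-envelope check, not the authors' calibration):

```python
import math

altitude_m = 100.0
fov_deg = (20.0, 30.0)          # field of view along the two image axes
pixels = (480, 750)

for fov, npix in zip(fov_deg, pixels):
    swath_m = 2.0 * altitude_m * math.tan(math.radians(fov / 2.0))
    gsd_cm = 100.0 * swath_m / npix
    print(f"FOV {fov:4.1f} deg -> swath {swath_m:5.1f} m, "
          f"ground pixel ~{gsd_cm:4.1f} cm")
# Prints roughly 7.3 cm and 7.1 cm, consistent with the quoted ~7.5 cm.
```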
FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) is an ONR-funded
effort to develop and test new tactical sensor systems specifically designed for small manned and unmanned platforms
(payload weight < 50 lbs). This program is being directed and executed by the Naval Research Laboratory (NRL) in
conjunction with the Space Dynamics Laboratory (SDL). FEATHAR has developed and integrated EyePod, a combined
long-wave infrared (LWIR) and visible to near infrared (VNIR) optical survey & inspection system, with NuSAR, a
combined dual-band synthetic aperture radar (SAR) system. These sensors are being tested in conjunction with other
ground and airborne sensor systems to demonstrate intelligent real-time cross-sensor cueing and in-air data fusion.
Results from test flights of the EyePod and NuSAR sensors will be presented.
NuSAR (Naval Research Laboratory Unmanned Synthetic Aperture Radar) is a sensor developed under the ONR-funded
FEATHAR (Fusion, Exploitation, Algorithms, and Targeting for High-Altitude Reconnaissance) program.
FEATHAR is being directed and executed by the Naval Research Laboratory (NRL) in conjunction with the Space
Dynamics Laboratory (SDL). FEATHAR's goal is to develop and test new tactical sensor systems specifically designed
for small manned and unmanned platforms (payload weight < 50 lbs). NuSAR is a novel dual-band (L- and X-band)
SAR capable of a variety of tactically relevant operating modes and detection capabilities. Flight test results will be
described for narrow and wide bandwidth and narrow and wide azimuth aperture operating modes.
EyePod is a compact survey and inspection day/night imaging sensor suite for small unmanned aircraft systems (UAS).
EyePod generates georeferenced image products in real-time from visible near infrared (VNIR) and long wave infrared
(LWIR) imaging sensors and was developed under the ONR-funded FEATHAR (Fusion, Exploitation, Algorithms, and
Targeting for High-Altitude Reconnaissance) program. FEATHAR is being directed and executed by the Naval Research
Laboratory (NRL) in conjunction with the Space Dynamics Laboratory (SDL) and FEATHAR's goal is to develop and
test new tactical sensor systems specifically designed for small manned and unmanned platforms (payload weight < 50
lbs). The EyePod suite consists of two VNIR/LWIR (day/night) gimbaled sensors that, combined, provide broad area
survey and focused inspection capabilities. Each EyePod sensor pairs an HD visible EO sensor with an LWIR bolometric
imager, providing precision geo-referenced, fully digital EO/IR NITFS output imagery. The LWIR sensor is mounted
to a patent-pending jitter-reduction stage to correct for the high-frequency motion typically found on small aircraft and
unmanned systems. Details will be presented on both the wide-area and inspection EyePod sensor systems, their modes
of operation, and results from recent flight demonstrations.
With the advances in focal plane, electronics and memory storage technologies, wide-area and persistent
surveillance capabilities have become a reality in airborne ISR. A wide-area surveillance (WAS) system offers many benefits in comparison with
traditional airborne image capturing systems, which provide little data overlap in either space or time. Unlike
a fixed-mount surveillance camera, a persistent WAS system can be deployed anywhere as desired, although the platform
typically has to be in motion, say circling above an area of interest. Therefore, WAS is well suited to surveillance
that can provide near-real-time capabilities such as change detection and target tracking. However, the performance of a
WAS system is still limited by the available technologies: the optics that control the field-of-view, the electronics and
mechanical subsystems that control the scanning, the focal plane data throughput, and the dynamics of the platform all
play key roles in the success of the system. It is therefore beneficial to develop a simulated version that can capture the
essence of the system, in order to help provide insights into the design of an optimized system. We describe an approach
to the simulation of a generic WAS system that allows focal plane layouts, scanning patterns, flight paths and platform
dynamics to be defined by a user. The system generates simulated image data of the area ground coverage from
reference databases (e.g. aerial imagery, and elevation data), based on the sensor model. The simulated data provides a
basis for further algorithm development, such as image stitching/mosaicking, registration, and geolocation. We also discuss
an algorithm to extract the terrain elevation from the simulated data, and to compare that with the original DEM data.
In this paper we present an approach to integrate sensors to meet the demanding requirements of Quick Reaction
Capability (QRC) airborne programs. Traditional airborne sensors are generally highly integrated and incorporate
custom sensor technologies and interfaces. Custom solutions and new technologies often require significant engineering
to achieve a high technology readiness level (TRL) and to meet the overall mission objective. Our approach differs from
traditional approaches in that we strive to achieve an integrated solution through regular review, assessment, and
identification of relevant industry "best athlete" technologies. Attention is focused on solution providers that adhere to
standard interfaces and formats, incorporate non-proprietary techniques, are deemed highly-reliable/repeatable, and
enable assembly production. These processes and engineering tools/methods have traditionally been applied, over 50 years, to
dozens of longer-acquisition space-based ISR programs. We have recently leveraged these techniques to solve
airborne Intelligence, Surveillance and Reconnaissance (ISR) mission challenges. This presentation describes and
illustrates key aspects and examples of these techniques, solving real-world airborne mission needs.
Continuum emission is predominant in fireball spectral phenomena and in some demonstrated cases, fine detail in the
temporal evolution of infrared spectral emissions can be used to estimate size and chemical composition of the device.
Recent work indicates that a few narrow radiometric bands may reveal forensic information needed for the explosive
discrimination and classification problem, representing an essential step in moving from "laboratory" measurements
to a rugged, fieldable system. To explore phenomena not observable in previous experiments, a high-speed (10 μs
resolution) radiometer with four channels spanning the infrared spectrum observed the detonation of nine home-made
explosive (HME) devices in the <100 lb class. Radiometric measurements indicate that the detonation fireball is well
approximated as a single-temperature blackbody at early time (0 < t ≲ 3 ms). The effective radius obtained from absolute
intensity indicates fireball growth at supersonic velocity during this time. Peak fireball temperatures during this initial
detonation range between 3000 and 3500 K. The initial temperature decay with time (t ≲ 10 ms) can be described by a
simple phenomenological model based on radiative cooling. After this rapid decay, the temperature exhibits a small, steady
increase with time (10 ≲ t ≲ 50 ms), peaking somewhere between 1000 and 1500 K, likely the result of post-detonation
combustion, before subsequent cooling back to ambient conditions. The radius derived from radiometric measurements
can be described well (R² > 0.98) using blast-model functional forms, suggesting that energy release could be estimated
from single-pixel radiometric detectors. Comparison of radiometer-derived fireball size with FLIR infrared imagery
indicates that the Planckian intensity size estimates are about a factor of two smaller than the physical extent of the fireball.
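The single-temperature blackbody interpretation lends itself to a simple least-squares fit of the band radiances. The sketch below, with hypothetical band centres and a synthetic measurement, shows the kind of fit involved; it is not the authors' calibration or band-integration procedure (a real fit would integrate Planck's law over each band's spectral response).

```python
import numpy as np
from scipy.optimize import curve_fit

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
KB = 1.381e-23  # Boltzmann constant [J/K]

def planck(wl_m, temp_k):
    """Blackbody spectral radiance per unit wavelength."""
    return (2 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp_k))

def band_model(band_centers_um, temp_k, log10_scale):
    """Band radiance model: single-temperature blackbody times a geometric
    scale factor (fitted in log10 for numerical conditioning)."""
    wl_m = np.asarray(band_centers_um) * 1e-6
    return (10.0 ** log10_scale) * planck(wl_m, temp_k)

# Hypothetical band centres (um) and a synthetic "measurement" at one instant.
bands_um = np.array([1.6, 2.3, 3.8, 4.9])
measured = band_model(bands_um, 3200.0, -7.0)

(temp_fit, logscale_fit), _ = curve_fit(
    band_model, bands_um, measured, p0=(2500.0, -6.5))
print(f"fitted temperature ~{temp_fit:.0f} K")
```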
Military test centers require detailed site descriptions. Test agencies demand significant written and visual information of
test sites in order to facilitate successful test preparation and execution. New terrestrial imaging techniques (360-degree
FOV collection) have recently become feasible to use in the field. Combined with GIS and mapping applications,
image and video data are now provided to test agencies for their use. Test sites for this study include locations in Alaska
and Panama, with image data collection planned in Arizona and Maryland.
An overview of a high performance zoom camera will be presented. Performance achievements including zoom
(magnification range), mass, boresight, space envelope and environment will be discussed. Optical mounting
techniques and flexural decoupling of components for large temperature ranges will be presented. Precision trajectory
and positioning of multiple moving lens groups will be reviewed and lead screw decoupling methods providing axial
stiffness with radial compliance will be illustrated. A mechanical system interface with high stiffness and thermal
compliance for azimuth and elevation adjustments will be given. Finally, the paper will conclude with a review of
lessons learned, including lead screw decoupling and aligning multiple static and moving lens groups.
A pushbroom MSI sensor collects image data from the ground, parallel to the flight path, at a specific pointing
angle. Images taken at two instants in time while the scanner moves with the platform usually have a spatial offset,
and need to be registered before they can be compared for any changes between the two images. Moving target
detection is a special case of change detection that requires the time between frames to be small enough that a moving
vehicle remains in close proximity in the two frames. We propose an algorithm for the detection of moving targets in a
multi-band line scanning pushbroom sensor. Ideally, change detection works best when images have the same spectral
bandwidth and are perfectly registered to one another, since differencing the two images automatically removes most of
the common background signal. However, this is not always the case. For example, the sensor considered here has
different bandwidths for its component bands, and since it is a line scanner it is much more challenging to register
than a frame-based scanner. In this study, we will use simulated data of the same bandwidth to demonstrate
the fundamental algorithm of detection and velocity calculation. The velocity calculation is simply distance divided by
time, but depending on the focal plane layout, other operating considerations, and the conversion from image space
to physical units, this calculation is not as simple as it seems. We will also discuss our effort in applying our algorithm
to real line-scan imagery with different bandwidths in the two channels. We will show the extra image processing efforts
needed to make it work, and show some of the test results.
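At its core the velocity estimate is the ground-projected displacement divided by the inter-band time delay. The sketch below shows that conversion in its simplest form, with per-band registration and the focal-plane timing model (the complications noted above) left out; the numbers are placeholders.

```python
def ground_velocity(pixel_offset, gsd_m, frame_interval_s):
    """Toy conversion of an image-space displacement to ground speed.

    pixel_offset     : (dx, dy) displacement of a detection between the two
                       band images, in pixels
    gsd_m            : ground sample distance, metres per pixel
    frame_interval_s : time between the two band acquisitions, seconds
    Returns speed in metres per second.
    """
    dx, dy = pixel_offset
    distance_m = gsd_m * (dx**2 + dy**2) ** 0.5
    return distance_m / frame_interval_s

# A target displaced by 6 pixels at 0.5 m GSD over 0.1 s moves at ~30 m/s.
print(ground_velocity((6, 0), gsd_m=0.5, frame_interval_s=0.1))
```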
We propose a feature-based approach for vehicle detection in aerial imagery with 11.2 cm/pixel resolution.
The approach is free of all constraints related to the vehicles' appearance. The scale-invariant feature
transform (SIFT) is used to extract keypoints in the image. The local structure in the neighbourhood of the
SIFT keypoints is described by 128 gradient-orientation-based features. A support vector machine is used
to create a model which is able to predict whether or not the SIFT keypoints belong to car structures in the
image. The collection of SIFT keypoints labelled as cars is clustered in the geometric space into subsets, and
each subset is associated with one car. This clustering is based on the Affinity Propagation algorithm,
modified to take into account a specific spatial constraint related to the geometry of cars at the given resolution.
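A minimal sketch of such a pipeline, using OpenCV's SIFT, a scikit-learn SVM and Affinity Propagation, is given below. The training descriptors and labels are assumed to come from previously labelled keypoints, and the paper's geometric modification of Affinity Propagation is not reproduced.

```python
import numpy as np
import cv2
from sklearn.svm import SVC
from sklearn.cluster import AffinityPropagation

def detect_vehicles(image_gray, train_desc, train_labels):
    """Sketch of the keypoint-classify-cluster pipeline described above.

    image_gray   : 8-bit aerial image tile
    train_desc   : (M, 128) SIFT descriptors from labelled training keypoints
    train_labels : (M,) 1 for keypoints on cars, 0 otherwise
    Returns cluster centres, each treated as one candidate vehicle.
    """
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(image_gray, None)

    # SVM decides, per keypoint, whether its descriptor looks like a car part.
    clf = SVC(kernel="rbf").fit(train_desc, train_labels)
    is_car = clf.predict(descriptors) == 1

    # Cluster the car-labelled keypoints in image coordinates; the paper
    # additionally constrains clusters to plausible car geometry at 11.2 cm/px.
    car_xy = np.array([kp.pt for kp, c in zip(keypoints, is_car) if c])
    ap = AffinityPropagation(random_state=0).fit(car_xy)
    return ap.cluster_centers_
```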
In aerial images of 11.2 cm/pixel resolution, the only car components that can be seen are large parts of the car such
as car bodies, windshields, doors and shadows. Furthermore, these components are distorted by low spatial resolution,
low color contrast, specular reflection and viewpoint variation. We use the mean-shift procedure for robust segmentation
of the car parts in the joint geometric and color space. This approach is robust, efficient, repeatable and independent of
threshold parameters. We introduce a hierarchical segmentation algorithm with three consecutive mean-shift
procedures. Each is designed with a specific bandwidth to segment a specific car part, whose size is estimated a priori,
and is followed by a support vector machine in order to detect this car part, based on the color features and the
geometrical-moment-based features. The procedure starts with the largest car parts, which are removed from the
segmented region lists after detection to avoid over-segmentation of large regions by the mean shift with smaller
bandwidth values. Finally we detect and count the cars in the image by combining the detected car parts according to their
spatial relations. Experimental results show good performance.
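A single pass of the joint-space segmentation might look like the sketch below (one generic mean-shift level only; the paper's three-level hierarchy, part-specific bandwidths and SVM verification are not reproduced, and the bandwidth values are placeholders).

```python
import numpy as np
from sklearn.cluster import MeanShift

def segment_joint_space(image_rgb, spatial_bw, color_bw):
    """Illustrative single-level mean-shift segmentation in the joint
    spatial + color space."""
    h, w, _ = image_rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Scale coordinates and colors so one unit bandwidth applies to both.
    features = np.column_stack([
        xs.ravel() / spatial_bw,
        ys.ravel() / spatial_bw,
        image_rgb.reshape(-1, 3) / color_bw,
    ])
    labels = MeanShift(bandwidth=1.0, bin_seeding=True).fit_predict(features)
    return labels.reshape(h, w)

# Each labelled region is a candidate car part that would then be passed to
# an SVM built on color and moment features.
tile = np.random.randint(0, 255, (60, 60, 3)).astype(float)
labels = segment_joint_space(tile, spatial_bw=8.0, color_bw=20.0)
```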
This paper presents a relational-graph-based approach to track thousands of vehicles in persistent wide-area airborne
surveillance (WAAS) videos. Because of the coarse ground sampling distance and low frame rate, vehicles usually appear
small and may travel a long distance between consecutive frames, so WAAS videos pose great challenges for correctly
associating existing tracks with targets. In this paper, we explore road structure information to regulate both an object-based
vertex matching scheme and a pair-wise edge matching scheme in a relational graph. The proposed relational graph approach
then unifies these two matching schemes into a single cost-minimization framework to produce a quadratically optimized
association result. Experiments on hours of real WAAS video demonstrate that the relational graph matching framework
effectively improves vehicle tracking performance in large-scale dense traffic scenarios.
In this paper, a novel system is presented to detect and track multiple targets in unmanned air vehicle
(UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground
moving areas from the background in each video frame using background subtraction. To stabilize the video, a
multi-point-descriptor-based image registration method is performed where a projective model is employed to
describe the global transformation between frames. For each detected foreground blob, an object model is used
to describe its appearance and motion information. Rather than immediately classifying the detected objects as
targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as
targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically
estimate its position in each frame. Blobs detected at a later time are used as observations to update the state
of the tracked targets to which they are associated. The proposed overlap-rate-based data association method
considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently.
Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover,
careful consideration given to each component in the system has made the proposed system feasible for real-time
applications.
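For illustration, a minimal constant-velocity Kalman filter with a simple overlap-based association score is sketched below; the state model, noise levels and overlap definition are placeholder choices, not the paper's tuned values.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for one tracked target.
    State is [x, y, vx, vy]; observations are blob centroids [x, y]."""

    def __init__(self, x, y, dt=1.0):
        self.x = np.array([x, y, 0.0, 0.0])
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01        # process noise (placeholder)
        self.R = np.eye(2) * 1.0         # measurement noise (placeholder)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

def overlap_rate(box_a, box_b):
    """Intersection area divided by the smaller box area, a simple stand-in
    for an overlap-rate association score."""
    ax0, ay0, ax1, ay1 = box_a
    bx0, by0, bx1, by1 = box_b
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    smaller = min((ax1 - ax0) * (ay1 - ay0), (bx1 - bx0) * (by1 - by0))
    return (iw * ih) / smaller if smaller > 0 else 0.0

# One cycle: predict, build a box around the prediction, associate, update.
track = ConstantVelocityKF(100.0, 50.0)
px, py = track.predict()
pred_box = (px - 5, py - 5, px + 5, py + 5)
blob_boxes = [(96, 47, 108, 57), (300, 200, 312, 210)]
best = max(blob_boxes, key=lambda b: overlap_rate(b, pred_box))
track.update(((best[0] + best[2]) / 2.0, (best[1] + best[3]) / 2.0))
```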
In recent years, there has been increasing interest in intelligent and efficient tracking systems for surveillance applications.
Many of the proposed techniques are designed for static-camera environments. When the camera is moving, tracking
moving objects becomes more difficult and many techniques fail to detect and track the desired targets. The problem
becomes more complex when we want to track a specific object in real time using a pan-and-tilt unit (PTU) camera system.
Tracking a target with a PTU in order to keep it within the image is important in surveillance applications.
When a target is detected, the ability to track it automatically and keep it within the image until action is taken
is very important for security personnel working in sensitive areas.
This work presents a real-time tracking system based on particle filters. The proposed system permits the detection and
continuous tracking of a selected target using a pan-and-tilt camera platform. A novel, simple and efficient approach for
dealing with occlusions is presented. A new intelligent forgetting factor is also introduced in order to take into account target
shape variations and avoid learning undesired objects. Tests conducted in outdoor operational scenarios show the
efficiency and robustness of the proposed approach.
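A generic bootstrap particle filter cycle, of the kind such trackers are built on, is sketched below; the random-walk motion model and Gaussian toy likelihood are placeholders, and the paper's occlusion handling and forgetting factor are not reproduced.

```python
import numpy as np

def particle_filter_step(particles, weights, observe_fn, motion_std=3.0):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles  : (N, 2) candidate target positions in the image
    weights    : (N,) normalised particle weights
    observe_fn : maps a position to a likelihood, e.g. a colour or template
                 similarity score around that position
    """
    n = len(particles)
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: re-weight particles by the observation likelihood.
    weights = weights * np.array([observe_fn(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights**2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate

# Toy usage: likelihood peaks at the (unknown) true position (120, 80).
true_pos = np.array([120.0, 80.0])
likelihood = lambda p: np.exp(-np.sum((p - true_pos) ** 2) / (2 * 10.0**2))
pts = np.random.uniform(0, 200, (500, 2))
wts = np.full(500, 1.0 / 500)
for _ in range(20):
    pts, wts, est = particle_filter_step(pts, wts, likelihood)
print(est)
```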
Volitional search systems that assist the analyst by searching for specific targets or objects such as vehicles, factories,
airports, etc., in wide-area overhead imagery need to overcome multiple problems present in current manual and automatic
approaches. These problems include finding targets hidden in terabytes of information, relatively few pixels on targets,
long intervals between interesting regions, time consuming analysis requiring many analysts, no a priori representative
examples or templates of interest, detecting multiple classes of objects, and the need for very high detection rates and
very low false alarm rates.
This paper describes a conceptual analyst-centric framework that utilizes existing technology modules to search and
locate occurrences of targets of interest (e.g., buildings, mobile targets of military significance, factories, nuclear plants,
etc.), from video imagery of large areas. Our framework takes simple queries from the analyst and finds the queried
targets with minimal interaction from the analyst. It uses a hybrid approach that combines biologically
inspired bottom up attention, socio-biologically inspired object recognition for volitionally recognizing targets, and
hierarchical Bayesian networks for modeling and representing the domain knowledge. This approach has the benefits of
high accuracy and a low false alarm rate, and it can handle both low-level visual information and high-level domain knowledge
in a single framework. Such a system would be of immense help for search and rescue efforts, intelligence gathering,
change detection systems, and other surveillance systems.
We present four new change detection methods that create an automated change map from a probability map. In this
case, the probability map was derived from a 3D model. The primary application of interest is aerial photographic
applications, where the appearance, disappearance or change in position of small objects of a selectable class (e.g., cars)
must be detected at a high success rate in spite of variations in magnification, lighting and background across the image.
The methods rely on an earlier derivation of a probability map. We describe the theory of the four methods, namely
Bernoulli variables, Markov random fields, connected change, and relaxation-based segmentation, and evaluate and
compare their performance experimentally on a set of probability maps derived from aerial photographs.
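In spirit, the simplest of these approaches thresholds the per-pixel change probabilities and keeps connected regions of plausible object size; the sketch below illustrates only that baseline (the MRF and relaxation-based variants additionally enforce spatial consistency), and the threshold and size values are placeholders.

```python
import numpy as np
from scipy import ndimage

def change_map_from_probability(prob_map, threshold=0.5, min_pixels=20):
    """Threshold a change-probability map and keep only connected regions
    large enough to correspond to an object such as a car."""
    binary = prob_map >= threshold
    labels, n = ndimage.label(binary)
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= min_pixels) + 1)
    return keep

prob = np.random.rand(200, 200)          # placeholder probability map
changes = change_map_from_probability(prob, threshold=0.9, min_pixels=5)
```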
The following material is given to address the effect of low slant angle on video interpretability: 1) an equation for the
minimum slant angle as a function of field-of-view to ensure no more than a √2 change in GSD across the scene; 2)
evidence for reduced situational awareness due to errors in perceived depth at low slant angle converting to position
errors; 3) an equation for optimum slant angle and target orientation with respect to maximizing exposed target area; 4)
the impact of the increased probability of occlusion as a function of slant angle; 5) a derivation for the loss of resolution
due to atmospheric turbulence and scattering. In addition, modifications to Video-NIIRS for low slant angle are
suggested. The recommended modifications for low-slant-angle Video-NIIRS are: 1) to rate at or near the center of the
scene; and 2) to include target orientations in the Video-NIIRS criteria.
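For illustration only, one plausible flat-earth reconstruction of item 1) is sketched below, assuming ground-range GSD scales as 1/sin²ψ with slant (grazing) angle ψ; the paper's actual equation may differ.

```latex
% Assume ground-range GSD varies with slant (grazing) angle \psi as
%   \mathrm{GSD}(\psi) \propto 1/\sin^{2}\psi   (flat-earth approximation).
% For a vertical field of view \theta centred on \psi, limiting the GSD change
% across the scene to a factor of \sqrt{2} requires
\[
  \frac{\mathrm{GSD}(\psi - \theta/2)}{\mathrm{GSD}(\psi + \theta/2)}
  = \frac{\sin^{2}(\psi + \theta/2)}{\sin^{2}(\psi - \theta/2)} \le \sqrt{2},
\]
% and the minimum slant angle \psi_{\min}(\theta) is the value of \psi that
% satisfies this with equality.
```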
Advances have been made in short wave infrared (SWIR) imaging technology to address the most
demanding imaging and surveillance applications. Multiple techniques have been developed and deployed
in Goodrich's SWIR indium gallium arsenide (InGaAs) cameras to optimize the dynamic range
performance of standard, commercial off-the-shelf (COTS) products. New developments have been
implemented on multiple levels to give these cameras the unique ability to automatically compensate for
changes in light levels over more than 5 orders of magnitude, while improving intra-scene dynamic range.
Features recently developed and implemented include a new Automatic Gain Control (AGC) algorithm,
image flash suppression, and a proprietary image-enhancement algorithm with a simplified but powerful
user command structure.
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera
to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many
areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual
reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images
or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the
mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled
down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed,
important target information in the frames can be lost since the transformed frames become too small, which
eventually leads to the inability to continue further. Some projective distortion correction techniques make
use of prior information such as GPS information embedded within the image, or camera internal and external
parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without
using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the
projective matrix that describes the transformation between image frames using an affine model. Using singular
value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the
image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed
correction introduces some error in the image matching, this error is typically acceptable and more importantly,
the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this
new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is
shown to be effective and suitable for real-time implementation.
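A minimal sketch of the scale-reset idea is shown below, assuming the frame-to-mosaic transform has already been approximated by an affine matrix; the exact way the paper deduces and applies the scale factor may differ.

```python
import numpy as np

def remove_affine_scale(affine_3x3):
    """Reset the overall scale of an affine transform to 1 using the SVD of
    its 2x2 linear part, so repeated pasting does not shrink the frames."""
    A = affine_3x3.copy().astype(float)
    linear = A[:2, :2]                    # rotation / scale / shear part
    U, s, Vt = np.linalg.svd(linear)
    scale = np.sqrt(s[0] * s[1])          # geometric mean of singular values
    A[:2, :2] = linear / scale            # force the overall scale back to 1
    return A

# A transform that would shrink frames by ~20% per paste; after correction
# the determinant of the linear part is ~1, so image size is preserved.
T = np.array([[0.80, 0.05, 12.0],
              [-0.03, 0.82, -7.0],
              [0.00, 0.00, 1.0]])
T_corrected = remove_affine_scale(T)
print(np.linalg.det(T_corrected[:2, :2]))   # ~1.0
```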
Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as
assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial
digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to
recover information from low-quality remote sensing images and how to enhance image quality become very important for
many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of
remote sensing imagery can be improved through the meaningful combination of images captured from different sensors or
under different conditions through information fusion. Here we particularly address information fusion applied to remote
sensing images under multi-resolution analysis of the employed image sequences. Image fusion recovers complete
information by integrating multiple images captured from the same scene. Through image fusion, a new image that is
higher-resolution or more interpretable for humans and machines is created from a time series of low-quality images, based
on image registration between different video frames.
Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and
civilian applications, including surveillance, target recognition, border protection, forest fire monitoring,
traffic control on highways, monitoring of transmission lines, among others. Additionally, NASA is using
digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good"
mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter
associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well
as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to
detect the features consistent between video frames. Utilizing these features, the next step is to estimate the
homography between two consecutive video frames, perform warping to properly register the image data,
and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great
deal of CPU resources, so it is almost impossible to compute a real-time video mosaic
on a single processor. Modern graphics processing units (GPUs) offer computational performance that far
exceeds current CPU technology, allowing for real-time operation.
This paper presents the development of a GPU-accelerated digital video mosaicking implementation and
compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS
aircraft, from infrared (IR) and electro-optical (EO) cameras. Our results show that we
can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video
capture of 30 frames per second is feasible.
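For reference, the per-frame-pair CPU pipeline (SIFT matching, RANSAC homography, warp-and-paste) can be sketched with OpenCV as below; the GPU implementation benchmarked in the paper performs the same stages, and the blending here is a simple overwrite rather than a proper blend.

```python
import cv2
import numpy as np

def register_pair(prev_gray, curr_gray):
    """Estimate the homography mapping the current frame into the previous
    (mosaic) frame's coordinates: SIFT features, ratio-test matching,
    RANSAC homography."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)
    return H

def paste_frame(mosaic, frame, H):
    """Warp the new frame into mosaic coordinates and paste by overwrite."""
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    mask = warped.sum(axis=2) > 0
    mosaic[mask] = warped[mask]
    return mosaic
```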
UAVs have a growing importance for reconnaissance and surveillance. Due to improved technology, even small
UAVs now have an endurance of about 6 hours, but they carry less sophisticated sensors due to strong weight limitations. This puts a
high strain and workload on the small teams usually deployed with such systems. To lessen the strain on photo
interpreters and to improve the capability of such systems, we have developed and integrated automatic image
exploitation algorithms. An important aspect is the detection of moving objects to give the photo interpreter (PI) hints as to where
such objects are. Mosaicking of imagery helps to gain a better overview of the scene. By computing stereo mosaics from
monocular video data, 3D models can also be derived from tactical UAV data in a further processing step. A special
means of gaining an overview is to use multi-temporal and multifocal images from the platform's video sensors with different
resolutions and to fuse them into one image. This results in good situational awareness of the scene with a light-weight
sensor platform and a standard video link.
Monitoring video data sources received from UAVs is especially challenging because of the quality of the video
received. Due to the individual characteristics of the unmanned platform and the changing environment, the important
elements in the scene are not always observable or easily identified. In addition to typical sensor noise, significant
image degradation can occur during transmission of the video from an airborne platform. Interference from other
transmitters, analog noise in the embedded avionics, and multi-path effects can corrupt the video signal during
transmission, introducing distortion in the video received at the ground. In some cases, the loss of signal is so severe that no
information is received in portions of an image frame. To improve the corrupted video, we capitalize on the oversampling
in the temporal domain (across video frames), applying a data fusion approach to de-noise the video. The resulting
video retains the significant scene content and dynamics, without distracting noise artifacts. This allows humans to
easily ingest the information from the video, and makes it possible to utilize further video exploitation algorithms such as
object detection and tracking.
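The abstract does not spell out the fusion operator, but the idea of exploiting temporal oversampling can be illustrated with a simple per-pixel temporal median over registered frames, as in the sketch below; the window size is a placeholder and real imagery would need registration first.

```python
import numpy as np

def temporal_fuse(frames, window=5):
    """Per-pixel temporal median over a short window of neighbouring frames,
    which suppresses transmission dropouts and impulsive noise while keeping
    scene content that persists across frames.
    Assumes `frames` (N, H, W) are already registered to a common grid.
    """
    n = len(frames)
    half = window // 2
    fused = np.empty_like(frames)
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        fused[i] = np.median(frames[lo:hi], axis=0)
    return fused

noisy = np.random.rand(30, 120, 160)     # placeholder registered frames
clean = temporal_fuse(noisy, window=5)
```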
We describe the Materials and Components for Missile (MCM) Innovation and Technology Partnership (ITP)
programme. The MCM-ITP is an Anglo-French research programme started in 2007 to encourage early-stage research
in future weapons technology.
SELEX Galileo leads the domain related to the Electro-Optic Sensor within future guided weapons. Our objective is to
capture cutting-edge ideas in the research community and develop them for exploitation in future missile generations.
We provide a view of two areas where we believe development of enhanced seeker capability should be focussed. Examples of current research within and outside the ITP that may be harnessed to deliver these capabilities are provided.
This paper details and evaluates a system that aims to provide continuous robust localisation ('tracking') of vehicles
throughout the scenes of aerial video footage captured by Unmanned Aerial Vehicles (UAVs). The scientific field of
UAV object tracking is well studied in the field of computer vision, with a variety of solutions offered. However,
rigorous evaluation is infrequent, and further novelty lies here in our exploration of the benefits of combined modality
processing, in conjunction with a proposed adaptive feature weighting technique. Building on our previously reported
framework for object tracking in multi-spectral video [1], moving vehicles are initially located by exploiting their intra-scene
displacement within a camera-motion-compensated video-image domain. For each detected vehicle, a
spatiogram-based [2] representation is then extracted, which is a representative form that aims to bridge the gap between
the 'coarseness' of histograms and the 'rigidity' of pixel templates. Spatiogram-based region matching then ensues for
each vehicle, towards determining their new locations throughout the subsequent frames of the video sequence. The
framework is flexible in that, in addition to the exploitation of traditional visible spectrum features, it can accommodate
the inclusion of additional feature sources, demonstrated here via the attachment of an infrared channel. Furthermore, the
system provides the option of enabling an adaptive feature weighting mechanism, whereby the transient ability of certain
features to occasionally outperform others is exploited in an adaptive manner, to the envisaged benefit of increased
tracking robustness. The system was developed and tested using the DARPA VIVID2 video dataset [3], which is a suite of
multi-spectral (visible and thermal infrared) video files captured from an airborne platform flying at various altitudes.
Evaluation of the system is quantitative, which differentiates it from a large portion of the existing literature, whilst the
results observed serve to further reveal the challenging nature of this problem.
Because target recognition is often applied outdoors under natural conditions, the presence and variation of
illumination cannot be neglected. When commonly used target recognition algorithms are applied to images
under diverse illumination, the results are often unsatisfactory. The authors therefore apply Retinex theory to improve a
wavelet-moment-based target recognition algorithm. Applying the improved algorithm to marine images, the experimental
results show a notable improvement.
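The abstract does not state which Retinex variant is used; the simplest, single-scale form is sketched below as an illustration of the illumination-compensation step that would precede the wavelet-moment features. The Gaussian scale is a placeholder.

```python
import cv2
import numpy as np

def single_scale_retinex(image_gray, sigma=60.0):
    """Single-scale Retinex: log of the image minus the log of a Gaussian
    estimate of the illumination, yielding a reflectance-like image that is
    far less sensitive to illumination changes."""
    img = image_gray.astype(np.float64) + 1.0          # avoid log(0)
    illumination = cv2.GaussianBlur(img, (0, 0), sigma)
    retinex = np.log(img) - np.log(illumination)
    # Stretch back to 8-bit range for downstream feature extraction.
    retinex = (retinex - retinex.min()) / (retinex.max() - retinex.min() + 1e-12)
    return (255 * retinex).astype(np.uint8)
```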
In recent decades, hyperspectral images (HSI) have been widely exploited in many fields for the rich information they
contain. Many algorithms have been proposed for endmember extraction, among which the VCA (vertex component analysis)
algorithm offers better precision and lower complexity. However, the endmembers of the same HSI extracted with the
traditional VCA algorithm are not always the same in different runs. After careful analysis, the authors propose an improved
VCA algorithm to resolve that shortcoming. For verification, experiments and a comparative study have been performed. In
conclusion, the improved VCA algorithm shows higher efficiency and accuracy than the traditional one.
There exists a wealth of information in the scientific literature on the physical properties and device characterization
procedures for complementary metal oxide semiconductor (CMOS), charge coupled device (CCD) and avalanche
photodiode (APD) format detectors. Numerous papers and books have also treated photocathode operation in the
context of photomultiplier tube (PMT) operation for either non imaging applications or limited night vision capability.
However, much less information has been reported in the literature about the characterization procedures and properties
of photocathode detectors with novel cross delay line (XDL) anode structures. These allow one to detect single photons
and create images by recording space and time coordinate (X, Y & T) information. In this paper, we report on the
physical characteristics and performance of a cross delay line anode sensor with an enhanced near infrared wavelength
response photocathode and a high-dynamic-range microchannel plate (MCP) gain (>10⁶) multiplier stage. Measurement
procedures and results including the device dark event rate (DER), pulse height distribution, quantum and electronic
device efficiency (QE & DQE) and spatial resolution per effective pixel region in a 25 mm sensor array are presented.
The overall knowledge and information obtained from XDL sensor characterization allow us to optimize device
performance and assess capability. These device performance properties and capabilities make XDL detectors ideal for
remote sensing field applications that require single-photon detection, imaging, sub-nanosecond timing response, high
spatial resolution (tens of microns) and a large effective image format.
Monitoring the soil composition of agricultural land is important for maximizing crop yields. Carinthian Tech
Research, Schiebel GmbH and Quest Innovations B.V. have developed a multi-spectral imaging system that
is able to simultaneously capture three visible and two near infrared channels. The system was mounted on
a Schiebel CAMCOPTER® S-100 UAV for data acquisition. Results show that the system is able to classify
different land types and calculate vegetation indices.
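As an example of the vegetation indices such a five-channel system supports, the standard NDVI can be computed directly from one near-infrared channel and the red channel; the band names and bit depth below are placeholders.

```python
import numpy as np

def ndvi(nir_band, red_band):
    """Normalised Difference Vegetation Index from a NIR and a red channel."""
    nir = nir_band.astype(np.float64)
    red = red_band.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)   # in [-1, 1]; high = vegetation

nir = np.random.randint(0, 4096, (512, 512))   # placeholder 12-bit channels
red = np.random.randint(0, 4096, (512, 512))
vegetation_index = ndvi(nir, red)
```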