Our understanding of sensory processing in animals has reached the stage where we can exploit neurobiological
principles in commercial systems. In human vision, one brain structure that offers insight into how we might detect
anomalies in real-time imaging is the superior colliculus (SC). The SC is a small structure that rapidly orients our eyes
to a movement, sound or touch that it detects, even when the stimulus is small in scale: think of a camouflaged
movement or the rustle of leaves. This automatic orienting response allows us to prioritize the use of our eyes to raise
awareness of a potential threat, such as a predator approaching stealthily. In this paper we describe the application of a
neural network model of the SC to the detection of anomalies in panoramic imaging. The neural approach consists of a
mosaic of topographic maps that are each trained using competitive Hebbian learning to rapidly detect image features of
a pre-defined shape and scale. What makes this approach interesting is that the competition between neurons
automatically filters noise while still generalizing across the desired shape and scale. We present the results
of this technique applied to the real-time detection of obscured targets in visible-band panoramic CCTV images. Using
background subtraction to highlight potential movement, the technique correctly identifies targets as narrow as
3 pixels while filtering small-scale noise.
KEYWORDS: Commercial off the shelf technology, Image processing, Algorithm development, Defense and security, Imaging systems, Field programmable gate arrays, Sensors, Information security, Surveillance, Detection and tracking algorithms
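The abstract does not publish the detection code itself. As a rough, hypothetical sketch of the background-subtraction front end described above (the function name, threshold, and run-length test are our own illustration, not the authors' SC model), small targets can be separated from single-pixel noise by requiring a minimum spatial extent:

```python
import numpy as np

def detect_small_targets(frame, background, diff_thresh=25, min_width=3):
    """Flag pixels that differ from a reference background, then keep only
    detections at least `min_width` pixels wide (filters single-pixel noise)."""
    # Background subtraction: absolute difference against the reference frame.
    mask = np.abs(frame.astype(np.int16) - background.astype(np.int16)) > diff_thresh

    # Horizontal run-length test: a detection must persist for `min_width`
    # consecutive columns, which rejects small-scale flicker noise.
    kept = np.zeros_like(mask)
    for r in range(mask.shape[0]):
        run = 0
        for c in range(mask.shape[1]):
            if mask[r, c]:
                run += 1
                if run >= min_width:
                    kept[r, c - min_width + 1 : c + 1] = True
            else:
                run = 0
    return kept
```

With `min_width=3` this mirrors the 3-pixel detection limit quoted in the abstract: an isolated changed pixel is discarded, while a 3-pixel-wide target survives.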
To address the emergent needs of military and security users, a new design approach has been developed to enable the
rapid development of high-performance, low-cost imaging and processing systems. In this paper, information about
the "Bespoke COTS" design approach is presented and is illustrated using examples of systems that have been built and
delivered. This approach facilitates the integration of standardised COTS components into a customised yet flexible
systems architecture to realise user requirements within stringent project timescales and budgets. The paper also
discusses the important area of the design trade-off space (performance, flexibility, quality, and cost) and compares the
results of the Bespoke COTS approach to design solutions derived from more conventional design processes.
The increased prevalence of Closed Circuit Television (CCTV) systems has made it necessary for operators to view
multiple simultaneous camera feeds. In many cases, however, a single sensor unit with a wide field of view, providing
one continuous panoramic image, can greatly enhance the system operator's situational awareness. This paper reports
on advances that Waterfall Solutions Ltd (WS) has made in the field of wide-area surveillance systems and introduces
a low-profile, wide field of view sensor system, and associated processing, that addresses both of these problems.
The Panoramic Area Surveillance System (PASS) provides a unique imaging and processing capability for a wide range
of security and situational awareness applications. PASS comprises a network of multi-modal cameras and its
operational performance is derived from a range of extensive image and data processing functions implemented as
real-time software on commercially available hardware. The development of PASS has offered a number of design
challenges, including the balance between implementation constraints and system performance. Within this paper, the
PASS system and its development challenges are described and its operation is illustrated through a range of application
examples.
Image fusion technology is increasingly used within military systems. However, the migration of the
technology to non-defence applications has been limited, both in terms of functionality and processing performance. In
this paper, the development of a low-cost automatic registration and adaptive image fusion system is described. In order
to fully exploit commercially available processor hardware, an alternative registration and image fusion approach has
been developed and the results of this are presented. Additionally, the software design offers interface flexibility and user
programmability and these features are illustrated through a number of different applications.
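The abstract does not detail the automatic registration approach used in the system. As a minimal, hypothetical illustration of one standard technique for registering imagery from two sensors, the sketch below uses FFT-based phase correlation to recover an integer (row, column) translation; the function name and tolerance are our own:

```python
import numpy as np

def phase_correlation_shift(ref, moving):
    """Estimate the integer (row, col) translation d such that
    moving ~= np.roll(ref, d), using phase correlation."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross_power = F2 * np.conj(F1)
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, discard magnitude
    corr = np.fft.ifft2(cross_power).real
    # The correlation surface peaks at the translation offset.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices past the half-way point correspond to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Normalising the cross-power spectrum to unit magnitude is what makes the method robust to intensity differences between bands, which matters when the two sensors respond differently to the same scene.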
The real-time fusion of imagery from two or more complementary sensors offers significant operational benefits for both
operator-in-the-loop and automated processing systems. This paper reports on a new image fusion framework that can be
used to maximise detection, recognition and identification performance within the context of low false-alarm rate
operation. The Intelligent Image Fusion (I2F) architecture presented here allows exploitation of data at the information
level as well as at the pixel-level, and can do so in an adaptable and intelligent manner. In this paper the architecture is
examined in terms of design, applicability to a range of tasks, and performance factors such as adaptability, flexibility
and utility. The relationship between algorithm design and hardware implementation, and the consequential impact on
system performance, is also reviewed. Particular consideration is given to size, weight and power constraints that exist
for some systems and their implications for processing optimisation and implementation on different processing
platforms. Results are presented from the outcome of quantitative studies, development programmes and system trials.
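The I2F pixel-level algorithms themselves are not given in the abstract. The general idea of adaptive pixel-level fusion can, however, be sketched as weighting each co-registered sensor by its local contrast, so the more informative band dominates at each pixel. This is a hypothetical illustration, not the authors' method; all names are our own:

```python
import numpy as np

def fuse_adaptive(img_a, img_b, win=3):
    """Fuse two co-registered images by weighting each pixel according to
    local contrast (variance), so the more informative sensor dominates."""
    def local_variance(img, win):
        # A direct sliding-window variance keeps the sketch simple;
        # a box-filter formulation would be faster in practice.
        pad = win // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.empty_like(img, dtype=float)
        for r in range(img.shape[0]):
            for c in range(img.shape[1]):
                out[r, c] = padded[r:r + win, c:c + win].var()
        return out

    va = local_variance(img_a.astype(float), win)
    vb = local_variance(img_b.astype(float), win)
    wa = va / (va + vb + 1e-12)      # per-pixel weight for sensor A
    return wa * img_a + (1.0 - wa) * img_b
```

Because the weights form a per-pixel convex combination, the fused output never exceeds the dynamic range of its inputs, and a featureless band (for example, a washed-out visible image in fog) contributes little where the other band carries structure.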
Military helicopter operations are often constrained by environmental conditions, including low light levels and poor
weather. Recent experience has also shown the difficulty presented by certain terrain when operating at low altitude by
day and night: for example, poor pilot cues over featureless terrain with low scene contrast, and obscuration of
vision due to wind-blown and re-circulated dust at low level (brown-out). Such conditions can result in loss of
spatial awareness and precise control of the aircraft. Atmospheric obscurants such as fog, cloud, rain and snow can
similarly lead to hazardous situations and reduced situational awareness.
Day Night All Weather (DNAW) systems applied research sponsored by UK Ministry of Defence (MoD) has developed
a multi-resolution real time Image Fusion system that has been flown as part of a wider flight trials programme
investigating increased situational awareness. Dual-band multi-resolution adaptive image fusion was performed in real
time using imagery from a Thermal Imager and a Low Light TV, both co-boresighted on a rotary-wing trials aircraft. A
number of sorties were flown in a range of climatic and environmental conditions during both day and night. (Neutral
density filters were used on the Low Light TV during daytime sorties.) This paper reports on the results of the flight trial
evaluation and discusses the benefits offered by the use of Image Fusion in degraded visual environments.