A moving space domain window is used to implement a Maximum Average Correlation Height (MACH) filter which
can be locally modified depending upon its position in the input frame. This enables adaptation of the filter dependent on locally variant background clutter conditions, and also enables normalization of the filter energy levels at each step.
Thus the spatial domain implementation of the MACH filter offers an advantage over its frequency domain
implementation as shift invariance is not imposed upon it. The only drawback of the spatial domain implementation of
the MACH filter is the amount of computational resource required for a fast implementation. Recently, an optical correlator using a scanning holographic memory has been proposed by Birch et al. [1] for the real-time implementation of space-variant filters of this type. In this paper we describe the discrimination ability against background clutter, and the tolerance to in-plane rotation, out-of-plane rotation and changes in scale, of a MACH correlation filter implemented in the spatial domain.
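As a frequency-domain point of reference for the spatial-domain discussion above, MACH filter synthesis can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the regularisation constants `alpha` and `beta`, and the folding of the noise covariance into a single scalar, are simplifying assumptions.

```python
import numpy as np

def mach_filter(train, alpha=1e-3, beta=1.0):
    """Synthesise a MACH filter from registered training images.

    train : array of shape (N, H, W).
    alpha acts as a scalar stand-in for the noise covariance term;
    beta weights the average power spectral density.
    """
    F = np.fft.fft2(train, axes=(-2, -1))
    m = F.mean(axis=0)                       # mean training spectrum
    d = (np.abs(F) ** 2).mean(axis=0)        # average power spectral density
    s = (np.abs(F - m) ** 2).mean(axis=0)    # average similarity (spread) term
    return np.conj(m) / (alpha + beta * d + s)

def correlate(scene, H):
    """Frequency-domain correlation of a scene with filter H."""
    out = np.fft.ifft2(np.fft.fft2(scene) * H)
    return np.fft.fftshift(np.abs(out))      # centre the zero-shift peak
```

In the spatial-domain implementation discussed above, the equivalent kernel would instead be recomputed (and its energy renormalised) as the window moves across the input frame, which is precisely what the frequency-domain form cannot do.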
Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the input of the hybrid optical neural network filter. The mask was built from the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Owing to the shift-invariance properties inherited from its correlator unit, the filter allows multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. With a single pass over the input data, the filter is shown to exhibit out-of-plane rotation invariance, shift invariance and good clutter tolerance simultaneously. It is able to detect and correctly classify the true-class objects within background clutter for which there has been no previous training.
The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, together with good clutter tolerance, by correctly recognizing the different objects within the cluttered scenes. We record in our results additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
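The log r-θ mapping that underlies this invariance can be sketched as a simple resampling step: in log-polar coordinates an in-plane rotation of the input becomes a cyclic shift along the θ axis, and a scale change becomes a shift along the log-r axis, both of which a shift-invariant correlator can then absorb. The nearest-neighbour sampling and the grid sizes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def log_polar(img, n_r=64, n_theta=64):
    """Nearest-neighbour log r-theta mapping about the image centre.

    Returns an (n_r, n_theta) array sampled on log-spaced radii from
    1 pixel out to the largest circle inscribed in the image.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_r))      # log-spaced radii
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    ys = cy + rho[:, None] * np.sin(theta)[None, :]
    xs = cx + rho[:, None] * np.cos(theta)[None, :]
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)         # nearest neighbour
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    return img[yi, xi]
```

A rotated or rescaled copy of an object therefore produces a translated version of the same log r-θ map, which is what makes the subsequent shift-invariant correlation stage rotation- and scale-tolerant.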
Moving shadow detection is an important step in automated robust surveillance systems in which a dynamic object is to be segmented and tracked. Rejection of the shadow region significantly reduces the erroneous tracking of non-target objects within the scene. A method to eliminate such shadows in indoor video sequences has been developed by the authors. The objective has been met through the use of a pixel-wise shadow search process that utilizes a computational model in the RGB colour space to demarcate the moving shadow regions from the background scene and the foreground objects. However, it has been observed that the robustness and efficiency of the method can be significantly enhanced through the deployment of a binary-mask-based shadow search process. This, in turn, calls for the use of a prior foreground object segmentation technique. The authors have also automated a standard foreground object segmentation technique through the deployment of some popular statistical outlier-detection-based strategies. The paper analyses the performance, i.e. the effectiveness as a shadow detector, the discrimination potential and the processing time, of the modified moving shadow elimination method on the basis of some standard evaluation metrics.
We present a simple computational model that works in the RGB colour space to detect moving shadow pixels in video
sequences of indoor scenes, illuminated in each case by an incandescent source. A channel ratio test for shadows cast on
some common indoor surfaces is proposed that can be appended to the developed scheme so as to reduce the otherwise
high false detection rate. The core method, based on a Lambertian hypothesis, has been adapted to work well for near-matte
surfaces by suppressing highlights. The results reported, based on an extensive data analysis conducted on some
of the crucial parameters involved in the model, not only bring out the subtle details of the parameters, but also remove
the ad hoc nature of the chosen thresholds to a certain extent. The method has been tested on various indoor video
sequences; the results obtained indicate that it can be satisfactorily used to mark or eliminate the strong portion of the
foreground shadow region.
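The pixel-wise RGB test described above can be sketched as follows. Under the Lambertian hypothesis a cast shadow attenuates all three channels of the background by a similar factor, so a candidate shadow pixel is one whose per-channel frame-to-background ratios are mutually consistent and fall within a plausible darkening range. The threshold values `lo`, `hi` and `chroma_tol` below are hypothetical placeholders, not the calibrated parameters reported in the paper.

```python
import numpy as np

def shadow_mask(frame, background, lo=0.4, hi=0.9, chroma_tol=0.05):
    """Pixel-wise moving-shadow test in RGB (illustrative thresholds).

    frame, background : (H, W, 3) arrays of the current frame and the
    background model. Returns a boolean (H, W) shadow mask.
    """
    f = frame.astype(np.float64) + 1e-6       # avoid division by zero
    b = background.astype(np.float64) + 1e-6
    ratio = f / b                             # per-channel attenuation
    mean_r = ratio.mean(axis=-1, keepdims=True)
    # shadow darkens the pixel, but not to black
    darkened = (mean_r[..., 0] > lo) & (mean_r[..., 0] < hi)
    # Lambertian shadow: all three channel ratios roughly equal
    consistent = np.all(np.abs(ratio - mean_r) < chroma_tol, axis=-1)
    return darkened & consistent
```

In the paper's binary-mask variant, this test would only be evaluated inside the previously segmented foreground region rather than over the whole frame.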
Digital watermarking is a vital process for protecting the copyright of images. This paper presents a method of embedding a private robust watermark into a digital image. The full complex form of the Wiener filter is used to extract the signal from the watermarked image. This is shown to outperform the more conventional approximate formulation. The results are shown to be highly insensitive to noise.
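A minimal sketch of Wiener-style watermark extraction, assuming an additive watermark model and training pairs from which the spectra can be estimated. The full complex cross-power spectrum `S_wx` (rather than a real-valued, phase-discarding approximation) is what retains the phase information; the function names and the training-pair setup are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def complex_wiener_gain(x_stack, w_stack, eps=1e-12):
    """Estimate a complex Wiener gain from training pairs.

    x_stack : (N, H, W) watermarked images; w_stack : (N, H, W) the
    corresponding embedded watermark signals.
    """
    Xf = np.fft.fft2(x_stack, axes=(-2, -1))
    Wf = np.fft.fft2(w_stack, axes=(-2, -1))
    S_wx = (Wf * np.conj(Xf)).mean(axis=0)   # complex cross-power spectrum
    S_xx = (np.abs(Xf) ** 2).mean(axis=0)    # power spectrum of the observation
    return S_wx / (S_xx + eps)

def extract(watermarked, gain):
    """Apply the complex gain to the full spectrum of a new image."""
    return np.real(np.fft.ifft2(gain * np.fft.fft2(watermarked)))
```

In the degenerate case where the observation is the watermark itself, the gain tends to unity and the extraction returns the input, which serves as a basic sanity check of the estimator.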
A method of detecting target objects in cluttered scenes despite any kind of geometrical distortion is demonstrated. Several existing techniques are combined, each one capable of creating invariance to one or more types of distortion of the target object. A MACH filter combined with an SDF creates invariance to orientation while constraining the correlation peak amplitudes and giving good tolerance to background clutter and noise. A log r-θ mapping is employed to give invariance to in-plane rotation and scale.
We propose a hybrid filter, which we call the hybrid optical neural network (HONN) filter. This filter combines the optical implementability and shift invariance of correlator-type filters with the non-linear superposition capabilities of artificial neural network methods. The filter maintains high-quality correlation responses and good clutter resistance for non-training in-class images at orientations intermediate to the training-set poses. We present the design and implementation of the HONN filter architecture and assess its object recognition performance in clutter.
Previously we have described a hybrid optical neural network (HONN) filter. The filter is synthesised employing an artificial neural network technique that generates a non-linear interpolation of the intermediate poses of the training-set objects while maintaining linear shift invariance, which allows potential implementation within a linear optical correlator-type architecture. In this paper, we remove the constraints imposed on the filter's output correlation peak height by the constraint matrix of the synthetic discriminant function used to create the composite filter, giving the unconstrained HONN (U-HONN) filter. We examine the U-HONN filter's detectability, peak sharpness, within-class distortion range, discrimination ability between an in-class and an out-of-class object, and tolerance to clutter. We assess the behaviour of the U-HONN filter in an open-area surveillance application. The filter demonstrates good object detection abilities within cluttered scenes, maintaining good correlation peak sharpness and detectability throughout all the sets of tests. Thus the U-HONN filter is able to detect and accurately classify the in-class object within different background scenes at angles intermediate to the training-set poses.
The various types of synthetic discriminant function (SDF) filter result in a weighted linear superposition of the training-set images. Neural network training procedures result in a non-linear superposition of the training-set images or, effectively, a feature extraction process, which leads to better interpolation properties than are achievable with the SDF filter. However, shift invariance is generally lost, since a data-dependent non-linear weighting function is incorporated in the input data window. As a compromise, we train a non-linear superposition filter via neural network methods with the constraint of a linear input to allow for shift invariance. The filter can then be used in a frequency-domain-based optical correlator. Simulation results are presented that demonstrate the improved training-set interpolation achieved by the non-linear filter as compared to a linear superposition filter.
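The weighted linear superposition that defines the SDF filter can be sketched directly: the superposition weights are chosen so that the correlation at zero shift with each training image equals a prescribed constraint value. The equal-correlation-peak formulation below is a minimal illustrative sketch, not any particular paper's filter design.

```python
import numpy as np

def sdf_filter(train, c):
    """Synthetic discriminant function filter.

    train : (N, H, W) training images; c : length-N vector of desired
    zero-shift correlation peak values. Returns an (H, W) filter that
    is a linear combination of the training images.
    """
    X = train.reshape(len(train), -1).T   # columns = vectorised images
    # weights a solve (X^T X) a = c, so that X^T (X a) = c exactly
    a = np.linalg.solve(X.T @ X, c)
    h = X @ a                             # weighted linear superposition
    return h.reshape(train.shape[1:])
```

Because `h` is constrained only at zero shift and only on the training images themselves, its interpolation between training poses is linear, which is exactly the limitation the non-linear neural network superposition described above is designed to overcome.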