Building practical intelligent-system algorithms requires appropriate tools for capturing the basic features of highly complex real-world environments. One of the most important of these tools, probability theory, is a calculus of events (e.g. EVENT = `A fire-control radar of type A is detected' with Prob(EVENT) = 0.80). Conditional Event Algebra (CEA) is a relatively new inference calculus which rigorously extends standard probability theory to include events which are contingent--e.g. rules such as `If fire-control radar A is detected, then weapon B will be launched'; or conditionals such as `observation Z given target state X.' CEA allows one to (1) probabilistically model a contingent event; (2) assign a probability (e.g. Prob(COND_EVENT) = 0.50) to it; and (3) compute with such conditional events and probabilities using the same basic rules that govern ordinary events and probabilities. Since CEA is only about ten years old, it has achieved visibility primarily among specialists in expert-systems theory and mathematical logic. Recently, however, it has become clear that CEA has potentially radical implications for engineering practice as well. The purpose of this paper is to bring this promising new tool to the attention of the wider engineering community. We give a tutorial introduction to CEA, based on simple motivational examples, and describe its potential applications in a number of practical engineering problems.
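As a concrete anchor for the tutorial material, the sketch below (ours, not the paper's; the weight table and event names are hypothetical) shows the baseline assignment that CEA starts from, Prob(B|A) = Prob(A and B)/Prob(A), computed on a finite sample space:

```python
# Minimal sketch of the baseline probability assignment
# Prob(B|A) = Prob(A and B) / Prob(A) on a finite sample space.

def prob(event, weights):
    """Probability of an event, represented as a set of outcomes."""
    return sum(weights[w] for w in event)

def conditional_prob(consequent, antecedent, weights):
    """Prob(consequent | antecedent); undefined if Prob(antecedent) = 0."""
    p_a = prob(antecedent, weights)
    if p_a == 0:
        raise ValueError("conditioning event has probability zero")
    return prob(consequent & antecedent, weights) / p_a

# Hypothetical outcomes: (radar A detected?, weapon B launched?)
weights = {("A", "B"): 0.40, ("A", "notB"): 0.10,
           ("notA", "B"): 0.05, ("notA", "notB"): 0.45}
radar_A  = {("A", "B"), ("A", "notB")}    # 'fire-control radar A detected'
weapon_B = {("A", "B"), ("notA", "B")}    # 'weapon B launched'

# The rule 'if radar A is detected then weapon B will be launched',
# treated as a conditional event with probability 0.40 / 0.50 = 0.80:
print(conditional_prob(weapon_B, radar_A, weights))
```

What CEA adds beyond this baseline is a rigorous way to combine the conditional events themselves (AND, OR, NOT) under the same rules that govern ordinary events.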
Recent advances in conditional event algebra and random set modeling hold the promise of enabling a systematic and consistent approach to information fusion in the context of statistical signal processing. In this paper we consider three potential applications of this approach: (1) iterated image estimation; (2) prior models for medical image reconstruction; and (3) knowledge-aided modeling of complex systems.
Beginning with work in the mid-1970s and early 1980s, it was discovered that fundamental homomorphic-like relations exist between many first-order fuzzy logic concepts and naturally corresponding probability concepts, via the one-point coverage events for appropriately chosen random subsets of the domains of the fuzzy sets considered. This paper first extends and modifies the homomorphic-like relations previously established. It also introduces a number of new homomorphic-like relations between fuzzy logic concepts and probability, utilizing two recently derived subfields of probability theory: conditional and relational event algebra. In addition, a newly invigorated branch of probability theory dealing with second-order probabilities (or `probabilities of probabilities') is shown to be applicable to certain deduction problems involving conditioning of populations.
This paper considers the fuzzy logic analogue of a basic alternative conditioning approach to Bayesian updating of a parameter of interest. This is useful when only linguistic information, or a combination of linguistic and stochastic information, is present, and the fuzzy set analogue of either the input conditional or the prior distribution is not obtainable. For the first time, in conjunction with new results unifying conditional event algebra, conditional fuzzy sets, and one-point coverage representations of fuzzy sets and operations, a completely rigorous justification is presented for this fuzzy logic analogue.
Finite-Set Statistics (FISST) is a direct generalization of conventional single-sensor, single-target statistics to the multisensor-multitarget realm. In particular, it deals with multitarget problems via multitarget Bayesian recursive nonlinear filtering (a direct generalization of the Bayesian recursive nonlinear filtering equations to the multitarget realm). The purpose of this paper is to (1) offer a brief bibliographical history of multitarget Bayesian recursive nonlinear filtering, and (2) describe the application of FISST techniques to the modeling of dynamic multitarget scenarios, e.g. scenarios in which targets can change mode or appear/disappear from one time-step to the next. Such problems can be addressed by FISST multitarget Markov motion models that take account of (among other things) the fact that the actual number of targets in a scenario (and not just the estimated number of targets) is a stochastic quantity--i.e., can randomly vary over time. We show that, in particular, there is a broad family of realistic multitarget density functions with the following property: If both the current multitarget posterior density and the multitarget Markov transition density belong to this family, then so does the time-update of the multitarget posterior. One result is potentially great computational savings in more general multitarget filtering problems. To better clarify some of the key conceptual points underlying FISST, we also contrast it with an ad hoc approach, `generalized EAMLE,' and comment on `joint multitarget probabilities', a special case of certain core FISST concepts under a new name.
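For reference, the multitarget Bayesian recursive nonlinear filter underlying this program has the standard FISST form below (set integrals over finite target sets; the notation is the common FISST one and may differ from the paper's):

```latex
% Time update (multitarget Markov prediction), a set integral over all
% finite target-sets X':
f_{k+1|k}(X \mid Z^{(k)}) \;=\;
  \int f_{k+1|k}(X \mid X')\, f_{k|k}(X' \mid Z^{(k)})\, \delta X'
% Measurement update (multitarget Bayes rule):
f_{k+1|k+1}(X \mid Z^{(k+1)}) \;=\;
  \frac{f(Z_{k+1} \mid X)\, f_{k+1|k}(X \mid Z^{(k)})}
       {\int f(Z_{k+1} \mid X')\, f_{k+1|k}(X' \mid Z^{(k)})\, \delta X'}
```

The closure result quoted above says that if f_{k|k} and the Markov transition density f_{k+1|k}(X | X') both belong to the stated family, then the predicted density f_{k+1|k} does as well, which is what yields the potential computational savings.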
In past presentations at this and other conferences and in the recent book `Mathematics of Data Fusion,' we have introduced `finite set statistics' or FISST (a direct generalization of conventional single-sensor, single-target statistics to the multisensor-multitarget realm). We have also shown how FISST provides a unified foundation for the following aspects of multisource-multitarget data fusion: detection, identification, tracking, multi-evidence accrual, sensor management, performance estimation, and decision-making. In this paper we illustrate the FISST approach by showing how conventional filtering and estimation theory can be generalized to tracking or target I.D. problems involving highly ambiguous evidence. In particular, we illustrate the FISST approach on simple model problems involving (1) target I.D. with a very low-quality RWR (radar warning receiver) sensor, and (2) target tracking via fusion of (simulated) radar reports with (simulated) English-language reports.
During the last two decades I.R. Goodman, H.T. Nguyen and others have shown that several basic aspects of expert-systems theory (fuzzy logic, Dempster-Shafer evidence theory, and rule-based inference) can be subsumed within a completely probabilistic framework based on random set theory. In addition, it has been shown that this body of research can be rigorously integrated with multisensor, multitarget filtering and estimation using a special case of random set theory called `Finite-Set Statistics' (FISST). In particular, FISST allows the basis for standard tracking and I.D. algorithms--nonlinear filtering theory and estimation theory--to be extended to the case when evidence can be highly `ambiguous' (imprecise, vague, contingent, etc.). This paper summarizes preliminary results in applying the FISST filtering approach to the problem of identifying ground targets from Synthetic Aperture Radar data that is `ambiguous' because of Extended Operating Conditions, e.g. when images are corrupted by effects such as dents, mud, etc.
Real-time fusion algorithms are often patchworks of loosely integrated sub-algorithms, each of which addresses a separate fusion objective and each of which may process only one kind of evidence. Because these objectives are often in conflict, adaptive methods (e.g. internal monitoring and feedback control to dynamically reconfigure algorithms) are often necessary to ensure optimal performance. This paper describes a different approach to adaptive fusion in which explicit algorithm reconfiguration is largely unnecessary because conflicting objectives are simultaneously resolved within a self-reconfiguring, optimally integrated algorithm. This approach is based on Finite-Set Statistics (FISST), a special case of random set theory that unifies many aspects of multisource-multitarget data fusion, including detection, tracking, identification, and evidence accrual. This paper describes preliminary results in applying a FISST-based filtering approach to a ground-based, single-target identification scenario based on the fusion of several types of synthetic message-based data from several sensors.
The ability to meaningfully assess the competence of algorithms is a crucial part of developing and comparing practical systems. Moreover, the importance of metrology has increased because of the emergence of fusion strategies such as adaptive fusion and fusion management, which require that Measures of Performance, Effectiveness, and Robustness be examined with greater seriousness. It would seem, therefore, that few things could be as important as achieving a scientific understanding of measurement. In reality, probably no other vital aspect of multisource-multisensor data fusion has been less glamorous, more heuristic, more poorly understood, and less a subject of deep examination than metrology. In this paper we present preliminary findings of an ongoing project on scientific performance evaluation for multisource-multisensor data fusion, sponsored by the Air Force Research Laboratory, Rome, NY.
A new method of classifying objects is presented. Rather than trying to form the classifier in one step or in one training algorithm, it is done in a series of small steps, or nibbles. This leads to an efficient and versatile system that is trained in series with single one-shot examples but applied in parallel, is implemented with single layer perceptrons, yet maintains its fully sequential hierarchical structure. Based on the nibbling algorithm, a basic new method of target reference filter management is described.
The study reported in this paper represents a continuation of our previous work on the application of Hidden Markov models (HMMs) to the translational and rotational invariant classification of SAR targets. The traditional method of making classification decisions using an HMM does not achieve the desired objective of minimizing the number of misclassifications. We present a novel technique that minimizes the probability of misclassification error. This approach, which is an adaptation of an existing Minimum Classification Error strategy, is globally optimal. The proposed method applies basic principles of pattern recognition to reduce the expected misclassification rate by dynamically perturbing the HMM parameters using a constraint on a cross-entropy measure and the distance separation between pairs of HMM models. Like the traditional implementation of an HMM, our new formulation can still be implemented using an efficient forward-backward algorithm for estimating the HMM parameters. We tested our classifier on a public mixed-target MSTAR database and compared our approach to the original HMM approach trained using a maximum likelihood criterion. The results indicate a significant improvement over the original HMM approach. Current scores from our method are in excess of 90% on testing data sets.
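For context, the baseline decision rule that the MCE training is compared against (maximum-likelihood selection among class-specific HMMs) can be sketched as follows; this is our illustrative Python, not the authors' code, and the MCE parameter perturbation itself is not reproduced:

```python
import numpy as np

def log_forward(log_A, log_B, log_pi, obs):
    """Log-likelihood of a discrete observation sequence under one HMM
    (log-domain forward algorithm; log_A[i, j] = log P(state j | state i))."""
    alpha = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        # log-sum-exp over predecessor states for each successor state
        alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
    return np.logaddexp.reduce(alpha)

def classify(models, obs):
    """Baseline ML rule: pick the class whose HMM best explains the sequence.
    `models` is a list of (log_A, log_B, log_pi) tuples, one per target class."""
    scores = [log_forward(*m, obs) for m in models]
    return int(np.argmax(scores))
```

The MCE refinement keeps this decision rule but perturbs each model's parameters to push apart the scores of commonly confused class pairs.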
Detecting targets occluded by foliage in foliage-penetrating (FOPEN) Ultra-Wide-Band Synthetic Aperture Radar (UWB SAR) images is an important and challenging problem. Given the different nature of FOPEN SAR imagery and the very low signal-to-clutter ratio in UWB SAR data, conventional detection algorithms usually fail to yield robust target detection results on raw data with minimum false alarms. Hence improving the resolving power by means of a super-resolution algorithm plays an important role in hypothesis testing for false alarm mitigation and target localization. In this paper we present a new single-frame super-resolution algorithm based on estimating the polyphase components of the observed signal projected on an optimal basis. The estimated polyphase components are then combined into a single super-resolved image using the standard inverse polyphase transform, leading to an improved target signature while suppressing noise.
Hidden Markov models (HMMs) are probabilistic finite state machines that can be used to represent random discrete time data. HMMs produce data through the use of one or more `observable' random processes. An additional `hidden' Markov process controls which of the `observable' random processes is used to produce an individual data observation. Helicopter radar signatures can be represented as quasi-periodic 1D discrete time series that can be analyzed using HMMs. In the HMM helicopter detection and classification algorithm developed in this study, the states of the `hidden' portion of the HMM were used to represent time-dependent alignments between the radar and helicopter rotor structures. For example, the times when specular reflections occur were used to define a `blade-flash' state. Since blade-flash frequency, and the corresponding non-blade-flash state duration, is an important feature in helicopter detection and classification, HMMs that allowed direct specification of state duration probabilities were used in this study. The HMM approach was evaluated using X-Band radar data from military helicopters recorded at Ft. A.P. Hill. After initial adaptive clutter suppression and blade-flash enhancement preprocessing, a set of approximately 1,000 raw in-phase and quadrature data records was analyzed using the HMM approach. A correct target classification rate that varied from 98% at a PRF of 10 kHz to 91% at a 2.5 kHz PRF was achieved.
Estimating pose and location of a detected target is an integral part of the target recognition process. In this paper we address the problem of estimating these parameters using images collected via stationary or moving sensors. Taking a Bayesian approach, we define a posterior on the special Euclidean group, which models the target orientation and position, and an optimal estimator in the minimum mean squared error sense. In addition, we derive an achievable lower bound on the estimation errors, independent of any algorithm, and analyze this bound by varying the sensor noise. This bound provides a tool for studying the algorithmic performance versus resources trade-off in multi-sensor, multi-frame applications.
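In schematic form (our notation, which may differ from the authors'), with pose s in the special Euclidean group, image data I, and a metric d on the group, the setup is:

```latex
% Posterior over pose s given image data I, and the MMSE-type estimator
% with respect to a metric d on the group:
p(s \mid I) \;\propto\; p(I \mid s)\, p(s), \qquad
\hat{s} \;=\; \arg\min_{\tilde{s}} \;
  \mathbb{E}\!\left[\, d(\tilde{s}, s)^{2} \mid I \,\right]
% Algorithm-independent lower bound, a function of the sensor noise
% level \sigma^2, holding for every estimator \hat{s}:
\mathbb{E}\!\left[\, d(\hat{s}, s)^{2} \,\right] \;\ge\; B(\sigma^{2})
```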
This paper summarizes a study investigating the potential insertion of embedded high performance computing technology into a legacy processor architecture such as that found in U.S. Army ground platforms. The intended task to be performed by the embedded high performance computing technology is that of aiding human operators in their target acquisition activities by cueing these operators to potential targets. Insertion of this technology is constrained by two sets of requirements. The first set of requirements is given by the ground platform, such as a tank. The second set of requirements is formed from the mathematical algorithms used to process the sensor data. Results of this joint U.S. Army CECOM Night Vision & Electronic Sensors Directorate and Defense Advanced Research Projects Agency study show no commercially available computing technology can meet all requirements. The study did show that inclusion of adaptable computing technology allowed most of the requirements to be met. This paper summarizes the methodology and models used to obtain these results as well as the impact of adaptable computing technology to embedded ATR processors.
In the field of remote sensing, image-based object recognition can benefit from direct measurement of characteristic dimensions. But in many cases image resolution and collateral data about the mapping properties do not provide sufficient precision for the intended purposes of object identification. In such cases, the recognition performance can often be increased significantly by coupling several characteristic dimensions and taking their mutual relations into account.
The approach proposed in this paper defines so-called `keypoint models' that describe certain object classes by the geometrical arrangement of characteristic features, denoted as keypoints. A feature space is spanned by the normalized distances between these keypoints. The complexity of the models, as expressed by the number of keypoints, is scalable and is selected according to the specific recognition needs. Different model variants cover the significance of object features according to the spectral sensitivity of the sensor. Keypoints are intended to be marked interactively by an image analyst, taking into account the inaccuracies caused by image resolution and other possible ambiguities.
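As a minimal illustration of such a feature space (ours, not the paper's; normalizing by the largest pairwise distance is an assumption about how scale invariance is obtained):

```python
import numpy as np

def keypoint_features(points):
    """Feature vector of pairwise keypoint distances, normalized by the
    largest distance so the descriptor is scale-invariant (one plausible
    reading of 'normalized distances'; the paper's exact normalization
    may differ)."""
    pts = np.asarray(points, dtype=float)
    i, j = np.triu_indices(len(pts), k=1)   # all unordered keypoint pairs
    d = np.linalg.norm(pts[i] - pts[j], axis=1)
    return d / d.max()

# Hypothetical 4-keypoint airplane model (nose, tail, two wing tips):
print(keypoint_features([(0, 0), (10, 0), (4, 6), (4, -6)]))
```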
We demonstrate our approach in the example domain of airplanes, where our currently most sophisticated model comprises ten keypoints. We place special emphasis on the aspects of intuitive usability for image analysts working under time pressure. Perspectives for integrating automatic feature-extraction techniques are also discussed briefly.
Recognition of target objects in remotely sensed imagery requires detailed knowledge about the target object domain as well as about the mapping properties of the sensing system. The art of object recognition is to combine both worlds appropriately and to provide models of target appearance with respect to sensor characteristics. Common approaches to supporting interactive object recognition are either driven from the sensor point of view, addressing the problem of displaying images in a manner adequate to the sensing system, or they focus on target objects and provide exhaustive encyclopedic information about this domain.
Our paper discusses an approach to assist interactive object recognition based on knowledge about target objects, taking into account the significance of object features with respect to characteristics of the sensed imagery, e.g. spatial and spectral resolution. An `interactive recognition assistant' takes the image analyst through the interpretation process by indicating, step by step, the most significant features of the objects in the current set of candidates. The significance of object features is expressed by pregenerated trees of significance and by the dynamic computation of decision relevance for every feature at each step of the recognition process.
In the context of this approach we discuss the question of modeling and storing the multisensor/multispectral appearances of target objects and object classes, as well as the problem of an adequate dynamic human-machine interface that takes into account the various mental models of human image interpretation.
The aim of the research presented in this paper is to find out whether automatic classification of ships from Forward Looking InfraRed images is feasible in maritime patrol aircraft. An image processing system has been developed for this task. It includes iterative shading correction and a top hat filter for the detection of the ship. It uses a segmentation algorithm based on the gray value distribution of the waves and the Hough transform to locate the waterline of the ship.
A model has been developed to relate the size of the ship and the angle between waterline and horizon in image coordinates, to the real-life size and aspect angle of the ship. The model uses the camera elevation and distance to the ship. A data set was used consisting of two civil ships and four different frigates under different aspect angles and distances. From each of these ship images, 32 features were calculated, among which are the apparent size, the location of the hot spot and of the superstructures of the ship, and moment invariant functions.
All features were used in feature selection processing, applying both the Mahalanobis and nearest neighbor (NN) criteria to forward, backward, and branch-and-bound feature selection procedures, to find the most significant features.
Classification has been performed using k-NN, linear, and quadratic classifiers. In particular, using the 1-NN classifier, good results were achieved with a two-step classification algorithm.
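To make the selection-plus-classification pipeline concrete, here is an illustrative sketch (ours, under assumptions: leave-one-out 1-NN accuracy as the NN criterion and greedy forward selection only; the paper's two-step scheme is not detailed in the abstract):

```python
import numpy as np

def loo_1nn_accuracy(X, y):
    """Leave-one-out accuracy of a 1-NN classifier, one possible NN
    selection criterion (the study's exact criterion may differ)."""
    correct = 0
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the sample itself
        correct += y[int(np.argmin(d))] == y[i]
    return correct / len(X)

def forward_selection(X, y, k):
    """Greedy forward selection of k feature columns from the 32 features."""
    chosen = []
    while len(chosen) < k:
        best = max((f for f in range(X.shape[1]) if f not in chosen),
                   key=lambda f: loo_1nn_accuracy(X[:, chosen + [f]], y))
        chosen.append(best)
    return chosen
```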
This paper presents a new Feature Matching approach for Real Time Image Stabilization based on features provided by a Multi-Scale Top Hat Filter. The idea is to calculate bright and dark regions of different sizes in two images, calculate 'image' and 'match history' specific characteristics for each region, match the regions from the two images based on their characteristics, and then estimate the x and y translations and rotation between the two images based on the region matches. Given the real time requirement and the performance characteristics of today's generation of Digital Signal Processors, we implemented the Multi-Scale Top Hat Filter at a lower resolution and compensated for the lost precision by using the Multi-Window Correlation algorithm. The successfully matched regions from the Multi-Scale Top Hat Filter provide exactly the kind of data that tends to be critical for the application of the Multi-Window Correlation algorithm. Automatic parameter adaptation and the specific nature of the regions detected by the Top Hat Filter yield a robust and adaptive algorithm that deals well with different scenes and environmental changes. We present the details of this approach, including its real time implementation and its role in our Automatic Target Detection and Tracking system.
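The bright/dark region extraction itself is standard grey-scale morphology; a minimal sketch (ours; the structuring-element sizes are illustrative, not the paper's settings):

```python
import numpy as np
from scipy import ndimage

def multi_scale_top_hat(image, sizes=(3, 7, 15)):
    """Bright (white) and dark (black) top-hat responses at several
    structuring-element sizes."""
    bright, dark = [], []
    img = image.astype(float)
    for s in sizes:
        opened = ndimage.grey_opening(img, size=(s, s))
        closed = ndimage.grey_closing(img, size=(s, s))
        bright.append(img - opened)   # bright regions smaller than s
        dark.append(closed - img)     # dark regions smaller than s
    return bright, dark
```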
Motion is one of the most important cues for the acquisition of targets. Given the real time requirement, there are two basic approaches for the detection of motion: (1) The difference between the actual image and an adaptive background image and (2) The difference between the actual image and preceding images. The first approach provides a precise and robust segmentation of the moving targets from the background and works well with low target/background contrast and clutter. However, when the sight is moving the background changes too quickly for a robust calculation of a valid background image. The second approach adapts well to the actual environment, but provides only an inaccurate indication of the moving edges of the targets and is quite sensitive to target/background contrast and clutter. This work presents a new approach which combines in a nonlinear manner the short time and the medium time image differences between the actual image and preceding images. It provides a precise and robust segmentation of the moving targets from the background and a good adaptation to the actual environment. In addition, it works even better than the first approach for low target/background contrast and clutter. We present the details of this approach including its real time implementation and its role in our Automatic Target Detection and Tracking system.
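The abstract does not spell out the nonlinear combination rule, but a pointwise minimum of the two differences is one plausible minimal sketch of the idea: a pixel counts as moving only if both the short-time and the medium-time differences support it, which suppresses the uncovered-background artifacts of pure frame differencing.

```python
import numpy as np

def motion_measure(frame, prev_short, prev_medium, thresh):
    """One plausible nonlinear combination (pointwise minimum) of a
    short-time and a medium-time frame difference; the paper's actual
    combination rule is not given in the abstract."""
    d_short  = np.abs(frame.astype(float) - prev_short)
    d_medium = np.abs(frame.astype(float) - prev_medium)
    # Flag a pixel as moving only if BOTH differences exceed the threshold.
    return np.minimum(d_short, d_medium) > thresh
```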
This paper presents a real-time detection algorithm for contrasted targets (bright and dark) of different sizes in infrared imagery. We developed a real-time multi-scale top hat filter which allows us to detect very low contrast bright and dark regions of five different size categories simultaneously: 2x2 - 4x4 pixels, 4x4 - 8x8 pixels, 8x8 - 16x16 pixels, 16x16 - 32x32 pixels, and 32x32 - 64x64 pixels. The detected regions are matched over time and filtered using the characteristics of expected targets, such as contrast, size, and vehicle dynamics. We present the details of this approach, including its real-time implementation and its role in our Automatic Target Detection and Tracking system.
Multisensor Fusion, Tracking, and Resource Management
A recursive multisensor association algorithm has been developed based on fuzzy logic. It associates data from the same target across multiple sensor types. The algorithm provides an estimate of the number of targets present and reduced-noise estimates of the quantities being measured. Uncertain information from many sources, including other algorithms, can be easily incorporated. A comparison of the algorithm to a more conventional Bayesian association algorithm is provided. The algorithm is applied to a simulated multitarget environment. The data from both the ESM and radar systems are noisy, and the ESM data is intermittent. The radar data has a probability of detection less than unity. The effects on parameter estimation, determination of the number of targets, and multisensor data association are examined for the case of a large number of targets closely spaced in the RF-PRI plane. When a sliding window is introduced to minimize memory and CPU requirements, the algorithm is shown to lose little in performance while gaining significantly in speed. The algorithm's CPU usage, computational complexity, and real-time implementation requirements are examined. Finally, the algorithm is considered as an association algorithm for a multifunction antenna that uses fuzzy logic for resource allocation.
The Deployable Autonomous Distributed System is an ocean surveillance system that contains a field of sensor nodes. Each sensor node provides target detections to a master node in the field for fusion by a Multiple Hypothesis Tracker Correlator (MHTC). The overall performance of a fusion engine depends upon the set of parameters used by the Multiple Hypothesis Tracker. Although a static set of parameters may work well over a wide range of scenarios, it may not lead to optimal performance in all cases. This paper addresses Level 4 fusion to improve performance of the data fusion system at the master node by using a fuzzy logic controller to adaptively tune the parameters. By using a set of linguistic rule-based fuzzy logic algorithms, the tuning parameters of the MHTC are modified. A set of metrics is used to determine the added worth of the fuzzy logic controller.
A new methodology for quantifying the relative contribution of specific sensor actions to a set of mission goals is presented. The mission goals are treated as a set, and an ordering relationship is applied to it, leading to a partially ordered set which can be represented as a lattice. At each layer in the lattice, each goal's value is computed as the sum of the shares it receives from the (higher) goals in which it is included, and its value is in turn apportioned among the (lower) goals which it includes. A system designer is forced to make a zero-sum apportionment of each goal's value among those goals which it includes. The net result of this methodology is a quantifiable measure of the contributing value of each type of sensor action to the system of goals, leading to more effective allocation of resources. While applied here to sensor scheduling, the method has applications to other decision-making processes as well.
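A toy numerical instance (entirely hypothetical goal names and shares) shows the mechanics: each upper goal's value is split zero-sum over the goals it includes, and a lower goal's value is the sum of its incoming shares.

```python
# Hypothetical mission-goal lattice: mission -> {detect, track} -> sensor
# actions. shares[(upper, lower)] gives the zero-sum apportionment; the
# shares on each upper goal's outgoing edges must sum to 1.
shares = {
    ("mission", "detect"): 0.6,
    ("mission", "track"):  0.4,
    ("detect",  "search_scan"):   1.0,
    ("track",   "revisit_dwell"): 1.0,
}

def value(goal, top=("mission", 1.0)):
    """A goal's value: the top goal's fixed value, or the sum over all
    incoming edges of (upper goal's value) * (apportioned share)."""
    if goal == top[0]:
        return top[1]
    return sum(value(u) * s for (u, l), s in shares.items() if l == goal)

print(value("search_scan"))    # 0.6 -- value of this sensor action
print(value("revisit_dwell"))  # 0.4
```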
Standard vehicle routing problems have been studied for decades in fields such as transportation, manufacturing, and commodity distribution. In this work, we propose a variation of these problems that arises in routing Unmanned Aerial Vehicles (UAV's) in the presence of terrain obscuration. Specifically, the UAV must visit a location from which an object on the ground in mountainous terrain can be viewed without actually flying over the object. Numerical results are presented for near-optimal, real-time algorithms developed using Lagrangian relaxation techniques. Directions for future work, including priorities, time windows, and routing multiple UAV's with periodic and dynamic changes in the object locations, are discussed.
A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have been shown to yield near-optimal answers in real time. The need to improve the quality of these solutions warrants continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound are needed to improve solution quality. Methods such as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique, along with suitable branching and ordering rules, is developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The results show that the branch-and-bound framework greatly improves the solution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.
The S-dimensional (S-D) assignment algorithm is a recently favored approach to multitarget tracking in which the data association is formulated as a generalized multidimensional matching problem and solved by a Lagrangian (dual) relaxation approach. The Probabilistic Multiple Hypothesis Tracking algorithm is a relatively new method which uses the EM algorithm and a modified probabilistic model to develop a `soft' association tracker. In this paper, we apply the two algorithms (with S = 3 in the S-D assignment algorithm) to the multitarget tracking problem in the presence of false alarms and imperfect target detection. Simulation results for various scenarios are presented, and the performances of the two algorithms are compared in terms of computational time and percentage of lost tracks.
Track fusion in a decentralized battlespace network is examined. The battlespace comprises a mixed blue force (a carrier group, AWACS, and two fighter aircraft) and a red fighter aircraft. The network connectivity of the blue platforms evolves through four phases as the battlespace dynamics unfold; specifically, it switches abruptly between full connectivity and tree connectivity. The platforms are not permitted any global knowledge of this network topology; they only know about their local connections. This work considers the impact of unpredictable network topology changes on the performance of three decentralized track fusion algorithms. The first algorithm, referred to here as the Local Information Distribution (LID) algorithm, is optimal for fully connected networks. The second algorithm, known as the Global Information Distribution (GID) algorithm, is optimal for tree networks but otherwise inconsistent due to correlations induced by multiple combinations of the same item of information. The third algorithm, Covariance Intersection (CI), is always sub-optimal but is proven to be consistent in the presence of unknown correlations. Results are obtained, at each simulation time-step, for the accuracy of the fused red track at each blue platform. It is shown, for the battlespace scenario under investigation, that CI can sometimes outperform LID. This suggests that the platforms in a decentralized network subject to unpredictable topology changes should execute CI and LID in parallel for maximum overall tracking performance. However, this might impose prohibitive constraints on the amount of processing and network traffic. In practice, therefore, a trade-off solution might have to be found.
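Covariance Intersection itself admits a compact sketch. For two estimates (x1, P1) and (x2, P2) with unknown cross-correlation, CI fuses them via a convex combination of information matrices, P_fused^-1 = w P1^-1 + (1-w) P2^-1; choosing w by a trace-minimizing grid search, as below, is a common convention, not necessarily the paper's:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Covariance Intersection fusion of two estimates whose
    cross-correlation is unknown; consistent for any true correlation."""
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        P_inv = w * np.linalg.inv(P1) + (1 - w) * np.linalg.inv(P2)
        P = np.linalg.inv(P_inv)
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * np.linalg.inv(P1) @ x1
                     + (1 - w) * np.linalg.inv(P2) @ x2)
            best = (x, P)
    return best   # fused state estimate and covariance
```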
A primary concern of multiplatform data fusion is assessing the quality and utility of data shared among platforms. Constraints such as platform and sensor capability and task load necessitate development of an on-line system that computes a metric to determine which other platform can provide the best data for processing. To determine data quality, we are implementing an approach based on entropy coupled with intelligent agents.
Entropy measures the quality of processed information such as localization, classification, and ambiguity in measurement-to-track association. Lower entropy scores imply less uncertainty about a particular target. When new information is provided, we compute the level of improvement a particular track obtains from one measurement to another. The measure permits us to evaluate the utility of the new information. We couple entropy with intelligent agents that provide two main data gathering functions: estimation of another platform's performance and evaluation of the new measurement data's quality. Both functions result from the entropy metric. The intelligent agent on a platform makes an estimate of another platform's measurement and provides it to its own fusion system, which can then incorporate it for a particular target. A resulting entropy measure is then calculated and returned to its own agent. From this metric, the agent determines a perceived value of the offboard platform's measurement. If the value is satisfactory, the agent requests the measurement from the other platform, usually by interacting with the other platform's agent. Once the actual measurement is received, entropy is computed again and the agent assesses its estimation process and refines it accordingly.
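Under a Gaussian track assumption (our simplification; the paper's entropy measure is not specified in the abstract), the track entropy and the utility of a candidate measurement reduce to a few lines:

```python
import numpy as np

def gaussian_track_entropy(P):
    """Differential entropy (nats) of a Gaussian track with covariance P:
    H = 0.5 * ln((2*pi*e)^n * det P). Lower H = less target uncertainty."""
    n = P.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** n) * np.linalg.det(P))

def information_gain(P_before, P_after):
    """Entropy reduction, used here as the utility of new information."""
    return gaussian_track_entropy(P_before) - gaussian_track_entropy(P_after)
```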
A pre-detection nonlinear operator of the `soft decision' type is proposed for the fusion of IR and radar data. The proposed operator fuses data prior to information extraction, with the advantage that no information is considered redundant prior to fusion. We define this as the `Derived Radar and Infra-Red Voxel Energy' or DRIVE operator. The signals arise from independent stochastic processes and are nominally uncorrelated. The behavior of the expectation of the fusion response characterizes the data with respect to the target detection application. The above hypothesis is being investigated through fully controlled experiments in which data are simultaneously collected with the Danish Defense Research Establishment Modular Radar System high range resolution radar and a commercially available IR camera. Focus is on the region where data are marginal; that is, where neither radar data nor IR data alone gives an extractable signal. Theoretical and experimental results are discussed.
This paper describes the Sensor to Shooter Information Fusion for Rapid Targeting program. The objective of this program is to design, develop, test, and demonstrate the fusion of intelligence, surveillance, and reconnaissance data with on-board sensor data. This decentralized information fusion system will take advantage of both on-board tactical platform and off-board sensor data to generate a high performance identification capability. The algorithm development will address Automatic Target Recognition, ground target tracking, target cueing, and registration of imagery residing on both ground station (off-board) and tactical aircraft (on-board) systems. Analysis of data link and processing requirements/capabilities will be performed to determine an on-board and off-board fusion architecture.
Voting techniques for high level sensor/data fusion are explored here, with examples from character recognition and target value analysis. High degrees of reliability can be achieved, and the method can be applied to various kinds of data. The only requirement is that a ranking of the targets be extracted from the sensors.
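A Borda count is one standard way to fuse rankings of this kind (the abstract does not name the specific voting rule, so this is illustrative):

```python
from collections import defaultdict

def borda_fuse(rankings):
    """Borda-count fusion of per-sensor target rankings. Each ranking
    lists targets from best to worst; a target earns (length-1-position)
    points per ranking, and targets are returned by total score."""
    scores = defaultdict(int)
    for ranking in rankings:
        for pos, target in enumerate(ranking):
            scores[target] += len(ranking) - 1 - pos
    return sorted(scores, key=scores.get, reverse=True)

# Three sensors ranking four hypothetical targets:
print(borda_fuse([["t1", "t2", "t3", "t4"],
                  ["t2", "t1", "t4", "t3"],
                  ["t1", "t4", "t2", "t3"]]))   # ['t1', 't2', 't4', 't3']
```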
This work presents new results for state estimation based on noisy observations suffering from two different types of uncertainty. The first uncertainty is a stochastic process with given statistics. The second uncertainty is only known to be bounded; the exact underlying statistics are unknown. State estimation tasks of this kind typically arise in target localization, navigation, and sensor data fusion. A new estimator has been developed that combines set theoretic and stochastic estimation in a rigorous manner. The estimator is efficient and, hence, well suited for practical applications. It provides a continuous transition between the two classical estimation concepts: it converges to a set theoretic estimator when the stochastic error goes to zero, and to a Kalman filter when the bounded error vanishes. In the mixed noise case, the new estimator provides solution sets that are uncertain in a statistical sense.
We present an approach to perform automatic target detection of small targets from coregistered visual, thermal, and range images, using five features of value for target discrimination: Brightness, Texture, Temperature, Surface Planarity, and Height. For each, we propose a set of operations to extract targets from the images, using inherent target properties that differentiate them from clutter. Each of the target extractors yields a `Target Measure' image based on a specific feature. These, when combined appropriately, yield better results than those obtained by individual, single-image detectors. Two methods are presented to perform information fusion on the target measure images: Binary Combination and Fuzzy Combination. Experimental results using both combination methods on synthetic and real imagery are given, with very satisfactory results. A morphological operation called `erosion of strength n' is introduced and utilized as a powerful tool for removing spurious information from binary images.
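A minimal sketch of the binary path, under assumptions made explicit in the comments (AND-combination of thresholded target measures, and `erosion of strength n' read as requiring at least n foreground neighbours; the paper's definitions may differ):

```python
import numpy as np
from scipy import ndimage

def binary_combination(target_measures, thresholds):
    """One plausible form of 'Binary Combination': AND of the
    per-feature target-measure images after thresholding."""
    masks = [m > t for m, t in zip(target_measures, thresholds)]
    return np.logical_and.reduce(masks)

def erosion_of_strength(mask, n):
    """One plausible reading of 'erosion of strength n': keep a
    foreground pixel only if at least n of its 8 neighbours are
    foreground, removing spurious isolated detections."""
    mask = mask.astype(bool)
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbours = ndimage.convolve(mask.astype(int), kernel, mode="constant")
    return mask & (neighbours >= n)
```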
This paper presents a methodology for object recognition in complex scenes by learning multiple-feature object representations in second generation Forward Looking InfraRed (FLIR) images. A hierarchical recognition framework is developed which solves the recognition task by performing classification using decisions from the lower levels together with the input features. The system uses new algorithms for detection and segmentation of objects and a Bayesian formulation for combining multiple object features for improved discrimination. Experimental results on a large database of FLIR images are presented to validate the robustness of the system and its applicability to FLIR imagery obtained from real scenes.
Modern capture and storage of digital information have given rise to tremendous repositories of data, but the ability to efficiently and effectively use this data has lagged. A prominent example is the generation of video imagery by Unmanned Aerial Vehicles. This imagery currently must be analyzed by a human observer either in real-time or playback mode (dynamic or frame-by-frame). These manpower-intensive activities become increasingly unpalatable as personnel resources shrink, so an automated system is desired that will assist in transforming the large data sets into vastly smaller sets that still retain the pertinent information. We propose a computational vision model based on physiological and behavioral characteristics of the human visual system to perform the registration of multiple video frames, followed by a fusion process that produces a video mosaic preserving the `important' features (those the human observer would deem important) from each individual frame.
We consider a problem in which exactly one of the n+1 distinct signals {S0, ..., Sn}, say S0, may be present in a noisy environment, and the presence or absence of S0 needs to be determined. The performance is given by the false alarm probabilities Pfi, i = 1, ..., n, where Pfi is the probability that Si is declared as S0, and the detection probability Pd, which is the probability that S0 is correctly recognized. It is required that Pd be maximized while Pfi ≤ ci, i = 1, ..., n, where c1, ..., cn are prescribed non-negative constants. The solution is an extended Neyman-Pearson test in which p0(x) is tested against (w1 p1(x) + ... + wn pn(x)), where pi(x) is the probability density function of observation X when Si occurs. We propose two methods to obtain w1, ..., wn for any given c1, ..., cn. For certain simple cases, an analytical or graphical solution is used. For the general case, we propose a search algorithm on w1, ..., wn which provides a sequence of extended Neyman-Pearson tests that converge to the optimal solution. Numerical examples are given to illustrate these methods.
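In LaTeX form (restating the abstract's formulation; the boundary/randomization case is omitted for brevity):

```latex
% Constrained design problem, with \delta(x) the probability of
% declaring S_0 present given observation x:
\max_{\delta} \; P_d = \int \delta(x)\, p_0(x)\, dx
\quad \text{s.t.} \quad
P_{f_i} = \int \delta(x)\, p_i(x)\, dx \;\le\; c_i, \qquad i = 1, \dots, n
% Extended Neyman-Pearson solution, testing p_0 against a weighted
% mixture of the alternatives:
\delta(x) =
\begin{cases}
  1, & p_0(x) > \sum_{i=1}^{n} w_i\, p_i(x) \\
  0, & p_0(x) < \sum_{i=1}^{n} w_i\, p_i(x)
\end{cases}
\qquad w_i \ge 0
```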
Low detector signals, acoustic coupling, and speckle noise are challenging problems in laser Doppler-based acoustic-to-seismic detection of land mines. Scanning insonified patches over buried targets with the spatial resolution required in minefield applications demands processing a large quantity of detection data. To achieve efficient and robust detection, the acoustic-to-seismic coupling on the ground is treated as a system under test (SUT), and number-theoretic maximum-length sequences (M-sequences) are applied as the acoustic excitation to the SUT. Exploiting their excellent auto-correlation property and high noise immunity, due to high signal energy and noise suppression, a fast algorithm (the so-called fast M-sequence transform) is implemented in the cross-correlation procedure to extract the impulse response of the SUT directly from laser Doppler vibrometer signals with a high signal-to-noise ratio. The advantage of directly obtaining impulse responses is also exploited in a time windowing technique to isolate the acoustic coupling into the laser Doppler-based system.
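The core identification step can be sketched in a few lines: circularly cross-correlating the system's response to an M-sequence with the sequence itself recovers the impulse response, because the MLS autocorrelation is nearly a delta function. (FFT-based correlation below is our generic stand-in for the paper's fast M-sequence transform; the toy impulse response and noise level are hypothetical.)

```python
import numpy as np
from scipy.signal import max_len_seq

nbits = 10
mls = max_len_seq(nbits)[0] * 2.0 - 1.0        # +/-1 excitation, length 2**10 - 1
N = mls.size

true_h = np.zeros(N)
true_h[:4] = [1.0, 0.6, 0.3, 0.1]              # toy SUT impulse response
# Circular convolution of the excitation with the SUT, plus sensor noise:
response = np.real(np.fft.ifft(np.fft.fft(mls) * np.fft.fft(true_h)))
response += 0.05 * np.random.randn(N)

# Circular cross-correlation of the response with the excitation:
h_est = np.real(np.fft.ifft(np.fft.fft(response)
                            * np.conj(np.fft.fft(mls)))) / N
print(h_est[:6].round(2))   # approx [1.0, 0.6, 0.3, 0.1, 0.0, 0.0]
```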
In this paper we evaluate two lossy compression methods, JPEG and SPIHT, using a metric computation system that evaluates the distortion effects of lossy compression as they affect the task of target segmentation with morphological filtering. Using Forward Looking InfraRed (FLIR) images to develop the approach, our goal is to contrast the results of identical tasks with and without compression using traditional metrics, local/target-area traditional metrics, and binary metrics applied to the component representing the selected target mask. Thus, our metrics are specifically targeted to measure the degree of invariance of the processing to the presence of an initial compression-decompression step. The two segmentation methods used are fuzzy c-means and median cut.
The results indicate that even though SPIHT is better than JPEG, this is not always the case in terms of the binary metrics. In addition, the fuzzy c-means segmentation method is better than the median cut in most cases. Another interesting effect observed is that small changes in traditional metrics can sometimes lead to a drastic change in the task-specific metrics.
A proposed optical multiresolution preprocessor allows faster reading and processing of video data than a conventional camera when only a small part of an image is of interest. The processor first maps the image at low resolution onto a sensor array to determine the region of interest. An optical mask is then set to select only that region, which is read by another sensor array. A diffractive optical element (DOE) is designed that maps both the full image and the selected region onto the sensor arrays. The DOE is constructed from overlapping, translated zone plates that focus light off axis onto the appropriate sensor elements. Equations are derived for the zone plates and for the field intensities at the output. In one configuration for handling white light, an optically addressed liquid-crystal light valve converts the input image to a single wavelength for the DOE. A second configuration, in which the DOE structure is extended to a three-color, three-dimensional one, is discussed briefly. The limitations of both approaches are discussed.
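A sketch of one such translated zone plate follows, in the paraxial approximation; the wavelength, focal length, and off-axis focus position are illustrative assumptions, not values from the paper.

    import numpy as np

    wavelength = 633e-9          # assumed single design wavelength (m)
    f = 0.1                      # assumed focal distance (m)
    x0, y0 = 2e-3, 0.0           # assumed off-axis focal point (m)

    x = np.linspace(-5e-3, 5e-3, 1024)
    X, Y = np.meshgrid(x, x)
    # Paraxial phase of a zone plate translated to focus at (x0, y0, f);
    # overlapping several such plates steers different image regions to
    # different sensor elements.
    phi = -np.pi / (wavelength * f) * ((X - x0)**2 + (Y - y0)**2)
    # Binary-amplitude version: transmit where cos(phi) is non-negative.
    mask = (np.cos(phi) >= 0).astype(float)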
A theoretical substantiation is proposed for a locally topological method of determining the minimum attractor embedding dimension, based on the state-space description of a dynamical system. Numerical modeling of various types of discrete dynamical systems is carried out on a computer to verify the theoretical principles underlying the method for computing the minimum embedding dimension of such system attractors.
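The locally topological method itself is not detailed in this abstract. As a point of comparison only, the standard false-nearest-neighbours approach estimates the minimum embedding dimension as the smallest dimension at which the following fraction drops to near zero:

    import numpy as np

    def delay_embed(x, dim, tau=1):
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def fnn_fraction(x, dim, tau=1, rtol=15.0):
        # Kennel-style criterion: a nearest neighbour in dimension `dim`
        # is "false" if adding the next delay coordinate separates the
        # pair strongly relative to their distance in dimension `dim`.
        emb = delay_embed(x, dim, tau)
        nxt = delay_embed(x, dim + 1, tau)
        m = len(nxt)
        false = 0
        for i in range(m):
            d = np.linalg.norm(emb[:m] - emb[i], axis=1)
            d[i] = np.inf
            j = int(np.argmin(d))
            if abs(nxt[i, -1] - nxt[j, -1]) / max(d[j], 1e-12) > rtol:
                false += 1
        return false / m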
This paper introduces a new adaptive time-delay estimation method. It improves performance in estimating the difference in arrival time of a band-limited random signal received by two spatially separated sensors in an environment where the signal and noise powers are time-varying. The proposed method has the advantage that the variance of the time-delay estimate can be assessed at any time. Conclusions on its convergence characteristics are obtained.
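The abstract does not give the estimator's form; one common adaptive scheme in this family, shown only as an illustrative sketch, identifies an FIR filter between the two sensors with LMS and reads the delay off the peak of the weights, so a delay estimate and the spread of the weights around the peak are available at every step.

    import numpy as np

    def lms_time_delay(x1, x2, n_taps=32, mu=0.005):
        # Adapt an FIR filter mapping sensor 1 to sensor 2; the index of
        # the dominant weight tracks the inter-sensor time delay.
        w = np.zeros(n_taps)
        buf = np.zeros(n_taps)
        delay_track = []
        for a, b in zip(x1, x2):
            buf = np.roll(buf, 1)
            buf[0] = a
            e = b - w @ buf
            w += 2.0 * mu * e * buf
            delay_track.append(int(np.argmax(np.abs(w))))
        return w, np.array(delay_track)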
In the present paper, methods of nonlinear system identification are proposed for constructing an adaptive nonlinear model of the auditory filter. The paper develops an effective digital identification method for the nonlinear system modeling the filtering processes in the distal auditory system, aimed at addressing the problem of variability in speech-signal recognition.
The purpose of this paper is to consider autoregressive hidden Markov models for the isolated-word recognition task. Training and recognition algorithms for autoregressive hidden Markov models were developed and investigated. The speech feature vector was designed based on perceptual psychoacoustic principles and the arithmetic Fourier transform. A speech database consisting of 200 Belarusian words was created and used for the experiments. The developed autoregressive hidden Markov model and the introduced speech feature vector provide very high recognition performance.
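The observation model of an autoregressive HMM is built on per-frame AR coefficients; a generic extraction sketch follows (the paper's actual features combine psychoacoustic weighting with the arithmetic Fourier transform, which we do not reproduce here).

    import numpy as np

    def ar_features(frame, order=10):
        # AR coefficients of one windowed speech frame via the
        # autocorrelation (Yule-Walker) method; Levinson-Durbin is the
        # usual faster solver for the same Toeplitz system.
        r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
        R = np.array([[r[abs(i - j)] for j in range(order)]
                      for i in range(order)])
        return np.linalg.solve(R, r[1:order + 1])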
A software application for correlating and fusing information products from multiple dissimilar sensors is presented. The Tactical Multi-Sensor Fusion (TMSF) system is a C++ object-oriented application implementing data correlation and fusion algorithms that provide decision aids for identifying, locating, and determining the status of processing equipment within suspected weapons-of-mass-destruction sites. The TMSF system also provides valuable information for assessing weapon delivery accuracy and effectiveness.
This paper presents an effective method for ship recognition in a ship lock. The outdoor environment is highly complex, with shadows, waves, and speckles caused by sunlight, wind, and the motion of ships. Recognition accuracy depends on how accurately these disturbance areas are detected. We analyze their gray-level and structural characteristics and propose a new method that forms a special histogram from only those pixels lying beside object boundaries. This histogram is suitable for segmenting small objects as well as large ones. Finally, statistical features for recognition are presented. Long-term operation in the temporary ship lock of the Three Gorges Project shows that the misjudgment rate is below 0.3% using the statistical features alone, and below 0.01% when they are combined with the other features.
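Our reading of the boundary histogram (an interpretation of the abstract, not the authors' code) is a gray-level histogram restricted to pixels adjacent to strong edges, so that flat background and interior regions do not swamp the statistics of small objects:

    import numpy as np
    from scipy import ndimage

    def boundary_histogram(img, edge_frac=0.1, bins=256):
        # Keep only pixels lying beside the strongest edges, then
        # histogram their gray levels; thresholds taken from this
        # histogram work for small and large objects alike.
        g = img.astype(float)
        mag = np.hypot(ndimage.sobel(g, axis=1), ndimage.sobel(g, axis=0))
        edges = mag >= np.quantile(mag, 1.0 - edge_frac)
        near_edge = ndimage.binary_dilation(edges)
        return np.histogram(img[near_edge], bins=bins, range=(0, 256))[0]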
In this paper we investigate a ranging system based on images recorded by a single passive camera. To estimate the distance to an object, its dimensions must be known, so the observed target must first be identified. The target is assumed to be a translated, rotated, and scaled version of the corresponding reference image in a database. The distance to the object is estimated by comparing the extracted target with its reference image, taken at a known distance. We consider two different automatic target recognition techniques and compare their performance on a set of simulated data; specifically, we evaluate the probability of correct recognition and the relative error of the distance estimate. Finally, we discuss the results obtained by processing an experimental sequence of images recorded by an IR camera.
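Under a pinhole-camera model the scale comparison reduces to one line (a sketch under our assumptions; the paper's estimator may differ in detail): if the reference image of the identified target was taken at distance d_ref and spans s_ref pixels, a detection spanning s_obs pixels lies at roughly d = d_ref * s_ref / s_obs.

    def estimate_distance(d_ref, s_ref, s_obs):
        # Pinhole model: apparent size is inversely proportional to range.
        return d_ref * s_ref / s_obs

    # Hypothetical numbers: a target imaged at 80 px from 100 m that now
    # appears at 40 px is at about 200 m.
    print(estimate_distance(d_ref=100.0, s_ref=80.0, s_obs=40.0))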
In pulsed radar and sonar systems, the target return has unknown phase, and also unknown frequency if the target is moving. In unspecified non-Gaussian noise, optimal detectors are unavailable, and current single-sensor, noncoherent techniques rely on the frequency being known. When frequency estimates are substituted for the unknown frequency in these latter detectors, they fail completely because their thresholds do not account for the uncertainty of the frequency estimator. In this contribution, we propose a detector based on the peak of the finite Fourier transform and the bootstrap. The bootstrap is a statistical method for estimating the sampling distribution of a statistic from the sample data itself; in this way, modeling assumptions about the noise and signal are relaxed. This advantage of the bootstrap is demonstrated by the theoretical results and simulations presented. We show that a constant false alarm rate is achievable even for heavily skewed interference, while detection rates of 99% are possible for data sizes as low as 100 samples and -5 dB signal-to-noise ratio. Some asymptotic properties of the detector are given. In the simulations, we also compare the detector with the classical detectors based on least-squares regression and on uniform random phase. Our proposed method compares favorably: when the frequency is known, it is only slightly less powerful than the classical detectors in Gaussian noise, and it continues to do well in heavy-tailed and skewed non-Gaussian noise such as t-distributed and Gaussian-mixture noise. For unknown frequency, it still attains a high detection rate where the classical detectors fail completely.
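A minimal sketch of the bootstrap thresholding idea follows (our illustration under simplifying assumptions, not the authors' exact procedure): the null distribution of the periodogram peak is estimated by resampling the centred data with replacement, which destroys any sinusoidal coherence while preserving the marginal noise distribution, and the detector compares the observed peak against a quantile of that resampled distribution.

    import numpy as np

    rng = np.random.default_rng(0)

    def peak_stat(x):
        # Peak of the periodogram (squared magnitude of the finite
        # Fourier transform, normalized by the sample size).
        return np.max(np.abs(np.fft.rfft(x))**2) / len(x)

    def bootstrap_threshold(x, alpha=0.01, B=999):
        # Resampling with replacement breaks the phase coherence of any
        # sinusoid, so the resamples approximate noise-only data with
        # the same marginal distribution, skewed or heavy-tailed alike.
        xc = x - x.mean()
        stats = [peak_stat(rng.choice(xc, size=len(xc), replace=True))
                 for _ in range(B)]
        return np.quantile(stats, 1.0 - alpha)

    n = 100
    x = 0.5 * np.cos(2 * np.pi * 0.2 * np.arange(n)) + rng.standard_normal(n)
    print(peak_stat(x) > bootstrap_threshold(x))   # True => declare signal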
Function approximation is an important task in environments where computation must be based on extracting information from data samples of real-world processes, so the development of new mathematical models is essential to the evolution of the function-approximation field. In this sense, we present the Polynomial Powers of Sigmoid as a linear neural network. We introduce a series of practical results for the Polynomial Powers of Sigmoid, showing some advantages of using powers of sigmoid functions relative to traditional MLP backpropagation and to polynomials in function-approximation problems.
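The defining property, as we read it, is that the model is linear in its coefficients, y ≈ sum_k c_k * sigmoid(x)^k, so training is a single least-squares solve rather than backpropagation. A minimal sketch under that assumption (function and parameter names are ours):

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def fit_pps(x, y, degree=6):
        # Least-squares fit of y ~ sum_k c_k * sigmoid(x)**k; linear in
        # the coefficients c_k, hence one linear solve trains the model.
        A = np.column_stack([sigmoid(x)**k for k in range(degree + 1)])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)
        return c

    x = np.linspace(-3.0, 3.0, 200)
    y = np.sin(x)
    c = fit_pps(x, y)
    approx = sum(ck * sigmoid(x)**k for k, ck in enumerate(c))
    print(np.max(np.abs(approx - y)))   # small residual on the interval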