Neural networks provide several distinctive features that are of substantial relevance to control technology. These include: accurate approximations of nonlinear functions and nonlinear dynamical systems; compact, efficient implementations; and data-intensive rather than expertise-intensive model and controller development. The benefits of neural networks for control applications are now being realized in numerous domains. We discuss several ways neural networks can be used for modeling and control. For modeling applications, neural networks have been trained to realize `black-box' forward and inverse process models as well as parametric models. Neural network controllers can be developed by emulating existing controllers, by model-free optimization, and by model-based optimization. Examples from deployed applications, available products, and the technical literature illustrate these concepts. We conclude by discussing some important topics for future research: dynamic neural networks, incremental learning, and application-specific network design.
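As a rough illustration of the 'black-box' forward-modeling idea above (not the authors' implementation; the plant, network size, and learning rate are all assumed for the example), the following Python sketch trains a one-hidden-layer network by gradient descent to predict the next state of a simple nonlinear system from the current state and control input.

import numpy as np

rng = np.random.default_rng(0)

def plant(x, u):                        # hypothetical nonlinear plant, for illustration only
    return 0.8 * x + 0.5 * np.tanh(u) - 0.1 * x**3

# generate input/output training data from the plant
u = rng.uniform(-2, 2, 500)
x = np.zeros(501)
for k in range(500):
    x[k + 1] = plant(x[k], u[k])
inputs = np.column_stack([x[:-1], u])   # (x[k], u[k])
targets = x[1:]                         # x[k+1]

# one hidden layer of tanh units, linear output
W1 = rng.normal(0.0, 0.5, (2, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, 10);      b2 = 0.0
lr = 0.01
for epoch in range(2000):
    h = np.tanh(inputs @ W1 + b1)       # hidden activations
    pred = h @ W2 + b2                  # the network's one-step prediction
    err = pred - targets
    # backpropagate the mean squared error gradient
    gW2 = h.T @ err / len(err); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)
    gW1 = inputs.T @ dh / len(err); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("final one-step prediction MSE:", np.mean(err**2))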
Results obtained in the realization of adaptive systems for pattern processing are presented. A theoretical analysis is made of the influence of noise and of the action of the adaptive filter in coherent optical systems. New filtering blocks located in the Fourier domain of the complex optical systems are used, and a dynamic nonlinear element operating in our compact mirror device has been applied.
In this paper, we study pattern recognition using stochastic cellular automata (SCA). A learning system can be defined by three rules: the encoding rule, the rule of internal change, and the quantization rule. In our system, data are encoded by storing an image in a stable distribution of an SCA. Given an input image f ∈ F, one can find an SCA t ∈ T such that the equilibrium distribution of this SCA is the given image f. The input image f is therefore encoded into a specification of an SCA, t. This mapping from F (the image space) to T (the parameter space of SCA) defines the SCA transformation. The SCA transformation encodes an input image into a relatively small vector that captures the characteristics of the input image. The internal space T is the parameter space of the SCA. The internal change rule of our system uses a local-minimum algorithm to encode the input data. The output of the encoding stage is a specification of a stochastic dynamical system. The quantization rule partitions the internal data space T according to the sample data.
Psychocybernetic systems engineering design conceptualization mimics the evolutionary path of habitable environmental design and the professional practice of building architecture, construction, and facilities management. In pursuing better ways to design cellular automata and qualification classifiers within a design process, we have found surprising success in exploring certain more esoteric approaches, e.g., the vision of interdisciplinary artistic discovery in and around creative problem solving. Our research into vision and hybrid sensory systems associated with environmental design and human factors has led us to discover very specific connections between the human spirit and quality design. We would like to share those qualitative and quantitative parameters of engineering design, particularly as they relate to multi-faceted and future-oriented design practice. The discussion covers case-based techniques of cognitive ergonomics, natural modeling sources, and an open architectural process of means/goal satisfaction, qualified by natural repetition, gradation, rhythm, contrast, balance, and integrity of process.
The number of nodes in a hidden layer of a feedforward layered network reflects an optimality condition of the network in coding a function. It also affects the computation time and the ability of the network to generalize. When an arbitrary number of hidden nodes is used in designing the network, redundancy among hidden nodes can often be observed. In this paper, a method of reducing hidden nodes is proposed under the condition that the reduced network maintains the performance of the original network within an accepted level of tolerance. The method can also be applied to estimate the performance of a network with fewer hidden nodes; the estimated performance is a lower bound on the actual performance of the network. Experiments were performed on Fisher's IRIS data, a set of SONAR data, and the XOR data for classification. The results suggest that a sufficient number of hidden nodes, smaller than the original number, can be estimated by the present method.
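The abstract does not give the estimation procedure itself, so the sketch below only illustrates the general idea with an assumed greedy heuristic: hidden nodes of a trained one-hidden-layer classifier are removed one at a time as long as classification accuracy stays within a tolerance of the original network's accuracy.

import numpy as np

def accuracy(X, y, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return np.mean(np.argmax(h @ W2 + b2, axis=1) == y)

def prune_hidden_nodes(X, y, W1, b1, W2, b2, tol=0.02):
    """W1: (inputs, hidden), b1: (hidden,), W2: (hidden, classes), b2: (classes,)."""
    keep = list(range(W1.shape[1]))
    base = accuracy(X, y, W1, b1, W2, b2)         # performance of the original network
    while len(keep) > 1:
        trials = []
        for j in keep:                            # try dropping each remaining hidden node
            idx = [k for k in keep if k != j]
            trials.append((accuracy(X, y, W1[:, idx], b1[idx], W2[idx], b2), j))
        best_acc, drop = max(trials)
        if base - best_acc > tol:                 # further pruning violates the tolerance
            break
        keep.remove(drop)
    return keep, base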
In this paper, the state of the art of fuzzy-logic-based visual object recognition systems is discussed. One of the major objectives of computer vision is to recognize various objects in images. The application of fuzzy logic facilitates the smooth translation of ambiguous image information into natural-language terms that can be processed with fuzzy set theory. Various methods of fuzzy object recognition are presented. Some rule-based techniques are mentioned to show the applicability of object recognition in consumer electronics. A brief summary of fusion with neural networks and genetic algorithms is given.
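A minimal, hypothetical example of the fuzzy rule-based style of recognition mentioned above; the features "brightness" and "elongation", the membership functions, and the rules are all invented for illustration.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def recognize(brightness, elongation):
    dark      = tri(brightness, -0.1, 0.0, 0.5)
    bright    = tri(brightness,  0.5, 1.0, 1.1)
    compact   = tri(elongation, -0.1, 0.0, 0.5)
    elongated = tri(elongation,  0.5, 1.0, 1.1)
    # two illustrative rules, evaluated with min as the fuzzy AND
    score_coin = min(bright, compact)             # IF bright AND compact THEN coin
    score_pen  = min(dark, elongated)             # IF dark AND elongated THEN pen
    return {"coin": float(score_coin), "pen": float(score_pen)}

print(recognize(brightness=0.8, elongation=0.2))  # high "coin" membership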
This paper presents a new attribute-based learning algorithm, TS. Unlike ID3, AQ11, and HCV in strategy, this algorithm operates in cycles of test and split. It uses those attribute values that occur only in positive examples, and never in negative ones, to discriminate positives from negatives directly, and it chooses the attribute with the smallest number of distinct values to split example sets. TS is natural, easy to implement, and of low-order polynomial time complexity.
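A sketch of the test-and-split cycle as it is described in the abstract; the data representation, recursion details, and stopping conditions are assumed, not taken from the paper.

def test_and_split(positives, negatives, attributes):
    """positives/negatives: lists of dicts mapping attribute -> value."""
    rules = []
    for a in attributes:
        pos_vals = {e[a] for e in positives}
        neg_vals = {e[a] for e in negatives}
        only_pos = pos_vals - neg_vals
        if only_pos:                              # test: these values never occur in negatives
            rules.append((a, only_pos))
            positives = [e for e in positives if e[a] not in only_pos]
        if not positives:                         # all positives discriminated
            return rules
    # split on the attribute with the fewest distinct values and recurse
    split_attr = min(attributes, key=lambda a: len({e[a] for e in positives + negatives}))
    remaining = [a for a in attributes if a != split_attr]
    for v in {e[split_attr] for e in positives + negatives}:
        sub_pos = [e for e in positives if e[split_attr] == v]
        sub_neg = [e for e in negatives if e[split_attr] == v]
        if sub_pos and remaining:
            rules.append(((split_attr, v), test_and_split(sub_pos, sub_neg, remaining)))
    return rules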
In this paper we point out an alternative basis for splitting a node of a decision tree. We use exactly the same tree-generation framework as ID3 in order to compare the results properly. The splitting of the sample set is also done locally at a tree node, without considering earlier decisions about the partition of the samples, and only one attribute is used to split the samples. We point out different splitting criteria. Contingency tables are a technique from nonparametric statistics for analyzing categorical (symbolic) populations. Among other useful applications of contingency tables, dependence tests between the rows and columns of the table can be performed. A sample set is inserted into a contingency table with classes as columns and all values of an attribute as rows, and a variety of measures of dependence can then be derived. Results with respect to the two most important qualities of decision trees, the error rate and tree complexity, are presented. For a set of selected benchmark examples, the performance of ID3 and the contingency table approach are compared. It is shown that in many cases the contingency table method exhibits lower estimated error rates or produces fewer nodes in the generated decision tree.
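One concrete instance of a contingency-table splitting criterion, given here only as an illustration (the paper may use a different dependence measure): build, for each candidate attribute, a table of attribute values versus classes and choose the attribute with the largest chi-square dependence statistic.

import numpy as np

def chi_square(table):
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()            # expected counts under independence
    mask = expected > 0
    return ((table - expected)[mask] ** 2 / expected[mask]).sum()

def best_attribute(examples, labels, attributes):
    """examples: list of dicts mapping attribute -> value; labels: class per example."""
    scores = {}
    classes = sorted(set(labels))
    for a in attributes:
        values = sorted({e[a] for e in examples})
        table = [[sum(1 for e, y in zip(examples, labels) if e[a] == v and y == c)
                  for c in classes] for v in values]
        scores[a] = chi_square(table)
    return max(scores, key=scores.get)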
This paper presents a formalized model of evolution. This model answers what we call the main question in the theory of evolution: how can systems with a (strong) purpose to exist (that is, systems that behave in a non-random, or goal-directed, way) result from a random (non-purposeful, non-goal-directed) developmental, or learning, mechanism in a space-time-effective way? The formalized model of evolution that we present is based on a search process called generate-and-test search; it is the generator and the tester that together compose the system's developmental mechanism. The proposed framework deals explicitly with the development of systems independent of the particular nature of the systems themselves. The aim of the framework is to state explicitly the general principles for space-time-effective system development. The performance of the proposed framework is analyzed in relation to the concept of search space difficulty, where the difficulty of a search space is determined by its size and topology.
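A minimal generate-and-test loop in the spirit of the abstract; the generator, the tester, and the target are placeholders chosen only to make the sketch runnable.

import random

def generate(current):
    """Random, undirected generator: perturb the current candidate."""
    return [x + random.gauss(0, 0.1) for x in current]

def test(candidate):
    """Goal-directed tester: here, closeness to a hypothetical target."""
    target = [1.0, -2.0, 0.5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def generate_and_test(steps=5000):
    best = [0.0, 0.0, 0.0]
    best_score = test(best)
    for _ in range(steps):
        cand = generate(best)
        score = test(cand)
        if score > best_score:            # keep only candidates that test better
            best, best_score = cand, score
    return best, best_score

print(generate_and_test())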
In this paper, learning is considered to be a bootstrapping procedure in which fragmented past experience of what to do when performing well is used to generate new responses, adding more information to the system about the environment. The gained knowledge is represented by a behavior probability density function, which is decomposed into a number of normal distributions using a binary tree. This tree structure is built by storing highly reinforced stimulus-response combinations (decisions) and calculating their mean decision vector and covariance matrix. The decision space is then divided, through the mean vector, into two halves along the direction of maximal data variation. The mean vector and the covariance matrix are stored in the tree node, and the procedure is repeated recursively for each of the two halves of the decision space, forming a binary tree with mean vectors and covariance matrices in its nodes. The tree is the system's guide to response generation: given a stimulus, the system searches for decisions likely to give a high reinforcement.
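The tree construction can be sketched as follows (the stopping criterion and the reinforcement filtering are assumed details): highly reinforced decision vectors are split recursively through their mean along the principal direction of variation, and each node stores the mean vector and covariance matrix.

import numpy as np

def build_tree(decisions, min_size=10):
    """decisions: (N, D) array of highly reinforced decision vectors."""
    decisions = np.asarray(decisions, dtype=float)
    mean = decisions.mean(axis=0)
    cov = np.cov(decisions, rowvar=False)
    node = {"mean": mean, "cov": cov, "left": None, "right": None}
    if len(decisions) >= 2 * min_size:
        # direction of maximal variation = leading eigenvector of the covariance
        w, v = np.linalg.eigh(np.atleast_2d(cov))
        direction = v[:, -1]
        side = (decisions - mean) @ direction
        left, right = decisions[side < 0], decisions[side >= 0]
        if len(left) >= min_size and len(right) >= min_size:
            node["left"] = build_tree(left, min_size)
            node["right"] = build_tree(right, min_size)
    return node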
Genetic algorithms have recently been applied successfully to a wide range of problems. These often have search spaces that are very large, very complex, or both, and are unsuitable for standard search algorithms such as hill climbing. The operators used in producing successive generations are usually crossover and mutation. The crossover operator is normally used to produce the majority of a generation, while mutation acts as a background process. This paper examines the use of high amounts of mutation, using the example of a genetic algorithm applied to the travelling salesman problem, and shows that high amounts of mutation need not ruin the algorithm's convergence to optimal solutions.
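An illustrative genetic algorithm for the travelling salesman problem with an unusually high mutation rate; all parameters are arbitrary and are not taken from the paper.

import random

def tour_length(tour, cities):
    return sum(((cities[tour[i]][0] - cities[tour[i - 1]][0]) ** 2 +
                (cities[tour[i]][1] - cities[tour[i - 1]][1]) ** 2) ** 0.5
               for i in range(len(tour)))

def mutate(tour, rate=0.5):
    tour = tour[:]
    for i in range(len(tour)):
        if random.random() < rate:                # high mutation: swap two cities
            j = random.randrange(len(tour))
            tour[i], tour[j] = tour[j], tour[i]
    return tour

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]
    return head + [c for c in b if c not in head]   # simplified order crossover

def ga(cities, pop_size=100, generations=500):
    n = len(cities)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, cities))
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=lambda t: tour_length(t, cities))

cities = [(random.random(), random.random()) for _ in range(30)]
print(tour_length(ga(cities), cities))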
Memory-based techniques are becoming increasingly popular as learning methods. The k-nearest neighbor method has often been cited as one of the best learning methods, but it has two basic drawbacks: its large storage demand and the often tedious search for neighbors. In this paper, we present a method for approximating k-nearest neighbor methods by using a hybrid kernel function and an explicit data representation, thus reducing the amount of data used. The method does not use the exact nearest neighbors of a point but rather an average measure of them; finding the true neighbors is not always needed for accurate classification, and finding a few nearby points is sufficient in most cases.
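A hedged sketch of the general idea only (the paper's hybrid kernel function and data representation are not reproduced): a query is classified by a Gaussian-kernel-weighted vote over a reduced set of stored prototypes rather than by an exact nearest-neighbor search.

import numpy as np

def build_prototypes(X, y, chunks=10, seed=0):
    """Replace each class's samples by a handful of chunk means (the prototypes)."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        idx = rng.permutation(len(Xc))
        for part in np.array_split(idx, chunks):
            if len(part):
                protos.append(Xc[part].mean(axis=0))
                labels.append(c)
    return np.array(protos), np.array(labels)

def classify(query, protos, labels, width=0.5):
    d2 = ((protos - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * width ** 2))            # kernel weights stand in for nearness
    classes = np.unique(labels)
    votes = [w[labels == c].sum() for c in classes]
    return classes[int(np.argmax(votes))]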
The problem of learning in adaptive vision systems is considered here. This study explores the spatial and deductive reasoning aspects of the system using case-based techniques, as it computes (1) the invisibility justification of a bird's eye, and (2) generalization and expansion of the knowledge gained in (1). The method presented manipulates symbolic entities, each with values, properties, and constrained relationships to other symbolic entities, to learn to create new knowledge from existing knowledge.
At present the problem of finding a quick and efficient way of representing an arbitrary shape as a set of contraction mappings (an iterated function system) is unresolved. The main problem that arises is the sheer size and complexity of the search space. This paper examines several constraints that can be placed on solutions, each of which has a low computational complexity. These constraints considerably reduce the search space in which the solutions exist and can be used to aid a variety of search algorithms.
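One cheap constraint of the kind that could be used to prune candidates (not necessarily one of the constraints proposed in the paper): an affine map w(x) = Ax + t is a contraction only if the largest singular value of A is below one, so maps failing this test can be discarded before any search.

import numpy as np

def is_contraction(A):
    """Affine map w(x) = A x + t is a contraction iff the spectral norm of A is < 1."""
    return np.linalg.norm(np.asarray(A, dtype=float), 2) < 1.0

print(is_contraction([[0.5, 0.1], [0.0, 0.4]]))   # True: admissible IFS map
print(is_contraction([[1.2, 0.0], [0.0, 0.3]]))   # False: can be discarded immediately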
We have been developing edge relaxation and binary image enhancement systems using parameters derived from an ensemble of training images. We tabulate the frequencies of local structures in the training ensemble and reconstruct noisy/corrupted images so that they best match the local characteristics of the set of training images. This paper investigates how many such training images are required to generate a useful and consistent set of local neighborhood probabilities.
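The frequency-tabulation step can be illustrated as follows (our own sketch; the window size and normalization are assumed): count how often each 3x3 binary neighborhood pattern occurs across the training images, and normalize the counts to obtain local neighborhood probabilities.

import numpy as np
from collections import Counter

def neighborhood_counts(images):
    counts = Counter()
    for img in images:
        img = (np.asarray(img) > 0).astype(int)
        H, W = img.shape
        for r in range(1, H - 1):
            for c in range(1, W - 1):
                patch = img[r - 1:r + 2, c - 1:c + 2].flatten()
                counts[tuple(patch)] += 1         # one of 2**9 = 512 possible patterns
    return counts

def neighborhood_probabilities(images):
    counts = neighborhood_counts(images)
    total = sum(counts.values())
    return {p: n / total for p, n in counts.items()}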
We discuss and present preliminary results about an algorithm which is trained to locate objects in images. The algorithm determines the parameters for a generalized Hough transform based on training images. Our training images consist of binary edge images from noisy imagery and identified points and boundaries for the objects being located in this imagery. The resulting generalized Hough transform will find objects of the same type at a wide variety of scales and any orientation present in the training data.
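A simplified sketch of training and using a generalized Hough transform R-table from one training edge image with a known reference point; the scale and rotation handling described in the abstract is omitted for brevity.

import numpy as np

def build_r_table(edge_image, gradient_dir, reference_point, n_bins=36):
    """edge_image: binary array; gradient_dir: edge orientation in radians per pixel."""
    r_table = {b: [] for b in range(n_bins)}
    ys, xs = np.nonzero(edge_image)
    for y, x in zip(ys, xs):
        phi = gradient_dir[y, x] % (2 * np.pi)
        b = int(phi / (2 * np.pi) * n_bins) % n_bins
        # displacement from the edge point to the object's reference point
        r_table[b].append((reference_point[0] - y, reference_point[1] - x))
    return r_table

def accumulate(edge_image, gradient_dir, r_table, n_bins=36):
    acc = np.zeros(edge_image.shape, dtype=int)
    ys, xs = np.nonzero(edge_image)
    for y, x in zip(ys, xs):
        phi = gradient_dir[y, x] % (2 * np.pi)
        b = int(phi / (2 * np.pi) * n_bins) % n_bins
        for dy, dx in r_table[b]:
            ry, rx = y + dy, x + dx
            if 0 <= ry < acc.shape[0] and 0 <= rx < acc.shape[1]:
                acc[ry, rx] += 1                  # vote for a candidate reference point
    return acc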
Our objective is to automatically generate an efficient edge detector given an ensemble of training images with known edge maps. This paper shows how to construct linear machines for edge detection from such an ensemble. Linear machines categorize data vectors into N categories by maximizing N - 1 linear functions (convolutions). The detector derived from artificial images with step edges is significantly different from one derived from Canny's criteria. These differences suggest a new theory for edge detectors: optimal operators that generate a fixed-width response to edges. The preliminary, suboptimal results from applying our linear machine are already comparable to the state of the art in edge detection.
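A linear machine of the kind described reduces to taking the category whose linear function is largest; a minimal sketch, with one category serving as the implicit zero reference, is shown below.

import numpy as np

def linear_machine_classify(x, W, b):
    """W: (N-1, d) convolution/weight vectors, b: (N-1,) biases; category N-1 scores 0."""
    scores = np.append(W @ x + b, 0.0)
    return int(np.argmax(scores))                 # category with the largest linear function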
In this paper an approach to evolutionary learning based upon principles of cultural evolution is developed. In this dual-inheritance system, there is an evolving population of trait sequences as well as an associated belief space. The belief space is derived from the behavior of individuals and is used to actively constrain the traits acquired in future populations. Shifts in the representation of the belief space and the population are supported. The approach is used to solve several versions of the BOOLE problem: F6, F11, and F20. The results are compared with other approaches, and the advantages of a dual-inheritance approach using cultural algorithms are discussed.
A significant need exists for autonomous landing of aircraft in adverse weather conditions, e.g., fog, haze, rain, or snow. Such systems must give the pilot the ability to view the runway and its surroundings, with timely display information for each weather landing category. The most important requirements of such vision systems include a large field of view, a high update/frame rate, and high spatial resolution at low grazing angles in poor visibility conditions. To satisfy these requirements, Honeywell's Systems and Research Center has developed, and demonstrated through flight tests, the feasibility of a synthetic vision system for aircraft landing. This paper introduces the concept of the synthetic vision system, based on the Honeywell 35 GHz millimeter-wave radar, and provides a detailed discussion of the adaptive image enhancement algorithms and their real-time implementation. The algorithms include beam sharpening and range-adaptive contrast enhancement.
This paper presents a processing technique for computer assisted discriminant analysis in remote sensing applications. Local features extracted using Richardson's power law are investigated for their discriminatory power and a nonparametric classification scheme based on probability density function estimation is suggested. The capability to adjust false alarm rates and perform on-line learning is provided by this probabilistic approach. Case studies indicating the ability to discriminate between classes of objects in aerial images are presented. The technique can be used as a preprocessing aid in segmentation or in conjunction with morphological features in a more complete discrimination system.
This paper studies the distribution of power law signatures for various texture types within a grayscale texture quilt. Fractal-based features are extracted for the quilt using the covering method. Three features of the power law regression line are extracted: the slope, the y-intercept, and an F-test statistic. The underlying distributions of these features are modeled using a nonparametric probability density estimation technique known as adaptive mixtures. These distribution models are then used to distinguish between the sixteen textures in the quilt.
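An illustrative extraction of the slope and y-intercept features, using simple box counting on a thresholded image rather than the grayscale covering method used in the paper; the scales are arbitrary.

import numpy as np

def box_counts(binary_image, scales=(2, 4, 8, 16)):
    img = np.asarray(binary_image) > 0
    counts = []
    for s in scales:
        H, W = img.shape
        n = 0
        for r in range(0, H, s):
            for c in range(0, W, s):
                if img[r:r + s, c:c + s].any():
                    n += 1                        # box of side s contains texture
        counts.append(n)
    return np.array(scales, dtype=float), np.array(counts, dtype=float)

def power_law_features(binary_image):
    scales, counts = box_counts(binary_image)
    x, y = np.log(scales), np.log(np.maximum(counts, 1))
    slope, intercept = np.polyfit(x, y, 1)        # regression line in log-log space
    return slope, intercept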
The discrimination of texture features in an image has many important applications, from the detection of man-made objects against a surrounding natural background to the discrimination of cancerous from healthy tissue in x-ray imagery. The fractal structure in an image has been used with success to identify these features but requires unacceptable processing time if executed sequentially. We demonstrate a paradigm for applying massively parallel processing to the computation of the fractal dimension of an image, providing the throughput demanded by real-time applications. This model is evaluated on several architectures: a vectorizing supercomputer, MIMD, and massively parallel SIMD computers. Performance comparisons are presented.
Probabilistic neural networks (PNN) build internal density representations based on the kernel or Parzen estimator and use Bayesian decision theory to build up arbitrarily complex decision boundaries. As in the classical kernel estimator, training is performed in a single pass over the data and asymptotic convergence is guaranteed. Asymptotic convergence, while necessary, says little about discrete-sample estimation errors, which can be quite large. One problem that arises with either the kernel estimator or the PNN is when one or more of the densities being estimated has a discontinuity. This commonly leads to an expected L∞ error in the estimated pdf on the order of the size of the discontinuity, which can in turn lead to significant classification errors. By using the method of reflected kernels, we have developed a PNN model that does not suffer from this problem. The theory of reflected-kernel PNNs, along with their relation to reflected-kernel Parzen estimators, is presented together with finite-sample examples.
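A minimal sketch of kernel reflection for a density supported on [0, inf): each kernel centered at a sample is mirrored about the boundary, which removes the large error a plain Parzen estimator makes at the discontinuity. The PNN classifier built on top of such estimates is not shown, and the bandwidth is an assumed value.

import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def reflected_kde(x_eval, samples, h):
    x_eval = np.atleast_1d(x_eval)[:, None]
    direct    = gaussian_kernel((x_eval - samples) / h)
    reflected = gaussian_kernel((x_eval + samples) / h)   # mirror image about the boundary at 0
    dens = (direct + reflected).mean(axis=1) / h
    return np.where(x_eval[:, 0] >= 0, dens, 0.0)

# example: exponential data, whose density jumps from 0 to 1 at the origin
rng = np.random.default_rng(1)
data = rng.exponential(1.0, 2000)
print(reflected_kde([0.0, 0.5, 1.0], data, h=0.1))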
Probabilistic neural networks (PNN) build internal density representations based on the kernel or Parzen estimator and use Bayesian decision theory in order to build up arbitrarily complex decision boundaries. As in the classical kernel estimator, the training is performed in a single pass of the data and asymptotic convergence is guaranteed. One important factor affecting convergence is the kernel width. Theory only provides an optimal width in the case of normally distributed data. This problem becomes acute in multivariate cases. In this paper we present an asymptotically optimal method of setting kernel widths for multivariate Gaussian kernels based on the theory of filtered kernel estimators and show how this can be realized as a filtered kernel PNN architecture. Performance comparisons are made with competing methods.
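The paper's filtered-kernel width selection is not reproduced here; as a point of comparison only, the sketch below applies Scott's rule-of-thumb width per dimension for a multivariate Gaussian product kernel, which is optimal only for approximately normal data, exactly the limitation the abstract points out.

import numpy as np

def scott_bandwidths(X):
    """One kernel width per dimension from Scott's rule, h_i = sigma_i * n**(-1/(d+4))."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    return X.std(axis=0, ddof=1) * n ** (-1.0 / (d + 4))

def pnn_class_density(x, class_samples, widths):
    """Parzen density estimate at x for one class with a Gaussian product kernel."""
    u = (class_samples - x) / widths
    norm = np.sqrt(2 * np.pi) ** len(widths) * np.prod(widths)
    return (np.exp(-0.5 * (u ** 2).sum(axis=1)) / norm).mean()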
Linear phase maximally flat FIR Butterworth filter approximations are discussed and a new filter design method is introduced. This variable-cutoff filter design method uses cosine-modulated versions of a prototype filter. The design procedure is simple, and different variants of it can be used to obtain close-to-optimum linear phase filters. Using this method, flexible time-varying filter banks with low reconstruction error are introduced. These oversampled filter banks have a small magnitude error that can be easily controlled by an appropriate choice of modulation frequency. The error can be decreased further by magnitude equalization without considerably increasing the computational complexity. Two-dimensional design examples are also given.
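A sketch of the variable-cutoff idea: a fixed linear-phase lowpass prototype is cosine modulated to move its passband, yielding a bank of filters from a single design. The prototype below is a plain windowed-sinc filter, not the maximally flat Butterworth approximation developed in the paper, and all parameters are illustrative.

import numpy as np

def lowpass_prototype(num_taps=65, cutoff=0.1):
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)      # ideal lowpass, cutoff in cycles/sample
    return h * np.hamming(num_taps)               # windowed to a realizable linear-phase FIR

def cosine_modulate(prototype, center_freq):
    n = np.arange(len(prototype)) - (len(prototype) - 1) / 2
    return 2 * prototype * np.cos(2 * np.pi * center_freq * n)   # shift passband to center_freq

proto = lowpass_prototype()
bandpass = cosine_modulate(proto, center_freq=0.25)   # passband centered near 0.25 cycles/sample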