KEYWORDS: Mirrors, Algorithms, Diffraction, Optimization (mathematics), Analog electronics, Chemical elements, Detection and tracking algorithms, Near field diffraction, Associative arrays, Reconstruction algorithms
We investigate the problem of synthesizing a complex scalar monochromatic light field with a deflectable mirror array device (DMAD). First, we analyze the diffraction field produced by the device for given configurations, assuming Fresnel diffraction. Specifically, we derive expressions for the diffraction field in terms of the parameters of the illumination wave and the tilt angles of the mirrors. The results of this analysis are used in later stages of the work to compute samples of the light fields produced by the mirrors at certain points in space. Second, the light field synthesis problem is formulated as a constrained linear optimization problem, assuming that each mirror of the DMAD can be tilted to one of a finite number of tilt angles. The formulation is initially developed in the analog domain; the transformation to the digital domain is carried out assuming that the desired fields originate from spatially bounded objects. In particular, we arrive at a problem of the form Dp = b with constraints on p, where D and b are known and the unknown p determines the configuration of the device. This final form is directly amenable to digital processing. Finally, we adapt and apply matching pursuit (MP) and simulated annealing algorithms to this digital problem. Simulations are carried out to illustrate the results. Simulated annealing performs successful synthesis when supplied with good initial conditions; however, a systematic approach for providing such initial conditions is still lacking. Our results also suggest that simulated annealing achieves better results than MP. However, when only a subset of the mirrors is used and the rest are turned off, MP performs acceptably and proves stable across different types of fields.
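To make the digital formulation concrete, the following is a minimal sketch of how simulated annealing could be adapted to the constrained Dp = b problem, with each entry of p restricted to a finite set of tilt angles. All names and the geometric cooling schedule are illustrative assumptions, not the authors' implementation; D and b may be complex, as for a scalar light field.

```python
import numpy as np

def anneal_mirror_config(D, b, tilt_values, n_iters=20000, T0=1.0, alpha=0.9995, p0=None, rng=None):
    """Minimize ||D p - b||^2 over vectors p whose entries are drawn
    from the finite set `tilt_values` (one entry per mirror)."""
    rng = np.random.default_rng() if rng is None else rng
    n = D.shape[1]
    p = rng.choice(np.asarray(tilt_values), size=n) if p0 is None else p0.copy()
    r = D @ p - b                       # current residual
    cost = np.vdot(r, r).real           # squared residual norm
    T = T0
    for _ in range(n_iters):
        i = rng.integers(n)             # pick one mirror at random
        new_val = rng.choice(np.asarray(tilt_values))
        # Changing p[i] shifts the residual by D[:, i] * delta (rank-1 update).
        delta = new_val - p[i]
        r_new = r + D[:, i] * delta
        cost_new = np.vdot(r_new, r_new).real
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if cost_new < cost or rng.random() < np.exp(-(cost_new - cost) / T):
            p[i], r, cost = new_val, r_new, cost_new
        T *= alpha                      # geometric cooling schedule (assumed)
    return p, cost
```

A good initial condition, as the abstract notes, matters: passing a p0 obtained from a cheaper heuristic (e.g., a greedy pass) rather than a random start is one plausible way to exploit this sensitivity.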
Atomic decompositions are lower-cost alternatives to principal component analysis (PCA) in tasks that require sparse signal representation. In pattern classification tasks such as face detection, a careful selection of atoms is needed to ensure an optimal, fast decomposition for the feature extraction stage. In this contribution, adaptive boosting is used as a criterion for selecting optimal atoms as features in a frontal face detection system. The goal is to speed up the learning process through a proper combination of a dictionary of atoms and a weak learner. Dictionaries of anisotropic wavelet packets are used, for which the total number of atoms remains feasible even for large images. In the adaptive boosting algorithm, a Bayesian classifier is used as the weak learner instead of a simple threshold, ensuring higher accuracy at a slightly increased computational cost during the detection stage. The experimental results obtained for four different dictionaries are promising, owing to the good localization properties of the anisotropic wavelet packet functions.
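As a concrete illustration of this atom-selection idea, here is a hedged sketch of adaptive boosting where each weak learner is a one-dimensional Bayesian classifier (with Gaussian class models) on a single atom's response, so each boosting round simultaneously selects one atom as a feature. The Gaussian model and all names are assumptions for illustration; the paper's dictionaries and weak-learner details may differ.

```python
import numpy as np

def fit_gaussian_bayes_stump(x, y, w):
    """Weak learner: 1-D Bayesian classifier with Gaussian class models,
    fitted on weighted samples (labels y in {-1, +1})."""
    params = {}
    for c in (-1, 1):
        wc, xc = w[y == c], x[y == c]
        mu = np.average(xc, weights=wc)
        var = np.average((xc - mu) ** 2, weights=wc) + 1e-12
        params[c] = (mu, var, wc.sum())     # mean, variance, weighted prior

    def predict(v):
        def loglik(c):
            mu, var, prior = params[c]
            return -0.5 * (v - mu) ** 2 / var - 0.5 * np.log(var) + np.log(prior)
        return np.where(loglik(1) > loglik(-1), 1, -1)

    return predict

def adaboost_select_atoms(F, y, n_rounds):
    """F: (n_samples, n_atoms) matrix of atom responses.  Each round picks
    the atom whose Bayesian weak learner has the lowest weighted error,
    so boosting doubles as feature (atom) selection."""
    n, m = F.shape
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(m):                  # scan the dictionary
            h = fit_gaussian_bayes_stump(F[:, j], y, w)
            err = w[h(F[:, j]) != y].sum()  # weighted misclassification
            if best is None or err < best[0]:
                best = (err, j, h)
        err, j, h = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * h(F[:, j]))    # reweight hard samples
        w /= w.sum()
        ensemble.append((alpha, j, h))
    return ensemble
```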
The best basis paradigm is a lower-cost alternative to principal component analysis (PCA) for feature extraction in pattern recognition applications. Its main idea is to build a collection of bases and search for the best one in terms of, e.g., class separation. Recently, fast best basis search algorithms have been generalized to anisotropic wavelet packet bases. Anisotropy is preferable for 2-D objects since it helps capture local image features better. In this contribution, the best anisotropic basis search framework is applied to the problem of recognizing characters captured from gray-scale pictures of car license plates. The goals are to simplify the classifier and to avoid a preliminary binarization stage by extracting features directly from the gray-scale images. The collection of bases is formed by anisotropic wavelet packets. The search algorithm seeks the basis that provides the lowest-dimensional data representation while preserving inter-class separability for a given training set, measured as the Euclidean distance between class centroids. The relationship between feature extractor and classifier complexity is clarified by training neural networks for different local bases. The proposed methodology proves superior to PCA, yielding equal or even lower classification error rates at considerably reduced computational cost.
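A minimal sketch of this selection criterion follows, simplified to two classes and a fixed feature count k; the paper instead seeks the lowest dimension that preserves separability across multiple character classes, and uses a fast tree-structured search over the wavelet packet decomposition. The `bases` dictionary, `transform` callables, and all names here are illustrative assumptions.

```python
import numpy as np

def centroid_separability(X, y):
    """Per-coordinate distance between the two class centroids (y in {0, 1})."""
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    return np.abs(mu1 - mu0)

def pick_best_basis(bases, images, y, k):
    """bases: dict name -> callable mapping an image to its coefficient
    vector in that (anisotropic wavelet packet) basis.  For each basis,
    keep the k coordinates that best separate the class centroids and
    score the basis by the centroid distance in that subspace."""
    best_name, best_score, best_idx = None, -np.inf, None
    for name, transform in bases.items():
        X = np.stack([transform(img) for img in images])
        sep = centroid_separability(X, y)
        idx = np.argsort(sep)[-k:]           # k most discriminative coordinates
        score = np.linalg.norm(sep[idx])     # separability in the kept subspace
        if score > best_score:
            best_name, best_score, best_idx = name, score, idx
    return best_name, best_idx
```

The selected coordinates then serve directly as the feature vector fed to the classifier, which is how a lower-dimensional basis translates into a simpler neural network.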