Comparing classifiers that exploit random subspaces
Presentation + Paper, 14 May 2019
Jamie Gantert, David Gray, Don Hulsey, Donald Waagen
Abstract
Many current classification models, such as Random Kitchen Sinks and Extreme Learning Machines (ELM), minimize the need for expert-defined features by transforming the measurement space into a set of "features" via random functions or projections. Alternatively, Random Forests exploit random subspaces by limiting tree partitions (i.e., nodes of the tree) to be selected from randomly generated subsets of features. For a synthetic aperture radar (SAR) classification task, and given two orthonormal measurement representations (spatial and multi-scale Haar wavelet), this work compares and contrasts ELM and Random Forest classifier performance as a function of (a) input measurement representation, (b) classifier complexity, and (c) measurement domain mismatch. For the ELM classifier, we also compare two random projection encodings.
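The ELM idea summarized above (a frozen random projection followed by a closed-form fit of the output weights) can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the two-class ring data, the tanh activation, and the 200-unit hidden layer are all assumptions standing in for the SAR task and encodings studied in the paper.

```python
import numpy as np

def elm_train(X, Y, n_hidden, rng):
    """Fit an ELM: random frozen hidden layer, least-squares output weights."""
    # Random projection: hidden weights/biases are drawn once and never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                         # random feature map
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem (a stand-in for SAR chips): inside vs. outside a circle.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0]**2 + X[:, 1]**2 < 0.5).astype(float)
Y = np.stack([1 - y, y], axis=1)                   # one-hot class targets

W, b, beta = elm_train(X, Y, n_hidden=200, rng=rng)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
accuracy = (pred == y).mean()
```

Because the hidden layer is random and fixed, training reduces to one linear least-squares solve; classifier complexity in the abstract's sense corresponds here to `n_hidden`, and swapping the distribution or structure of `W` gives different random projection encodings.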
Conference Presentation
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Jamie Gantert, David Gray, Don Hulsey, and Donald Waagen "Comparing classifiers that exploit random subspaces", Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880G (14 May 2019); https://doi.org/10.1117/12.2520184
KEYWORDS: Wavelets, Data modeling, Neurons, Synthetic aperture radar, Matrices, Stochastic processes, Computer programming
