Paper
29 October 1996 Feature space trajectory representation and processing for active vision
David P. Casasent, Michael A. Sipe
Abstract
A new feature space trajectory (FST) description of 3D distorted views of an object is advanced for active vision applications. In an FST, different distorted object views are vertices in feature space. A new eigen-feature space and Fourier transform features are used. Vertices for adjacent distorted views are connected by straight lines, so that an FST is traced out as the viewpoint changes. Each object is represented by a distinct FST. An object to be recognized is represented as a point in feature space; the closest FST denotes the class of the object, and the closest line segment on that FST indicates its pose. A new neural network is used to efficiently calculate distances. We discuss its uses in active vision. Beyond an initial estimate of object class and pose, the FST processor can specify where to move the sensor to confirm class and pose, to grasp the object, or to focus on a specific object part for assembly or inspection. We advance initial remarks on how many and which aspect views are needed to represent an object. We note the superiority of our eigenspace for discrimination, how it can provide shift invariance, and how the FST overcomes problems associated with other classifiers.
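To make the matching step described above concrete (nearest FST gives the class, nearest line segment on that FST gives the pose), the following is a minimal brute-force sketch in NumPy. It stands in for the paper's distance-computing neural network, which is not reproduced here; the feature vectors, FST vertex arrays, and function names are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def point_to_segment_distance(x, v0, v1):
    """Distance from feature vector x to the line segment joining FST vertices v0 and v1."""
    d = v1 - v0
    denom = np.dot(d, d)
    if denom == 0.0:
        # Degenerate segment: both vertices coincide.
        return np.linalg.norm(x - v0)
    # Project x onto the segment and clamp to its endpoints.
    t = np.clip(np.dot(x - v0, d) / denom, 0.0, 1.0)
    return np.linalg.norm(x - (v0 + t * d))

def classify_and_estimate_pose(x, fsts):
    """fsts: dict mapping class label -> ordered array of aspect-view vertices.
    Returns (nearest class, index of nearest segment, distance)."""
    best = (None, None, np.inf)
    for label, vertices in fsts.items():
        for i in range(len(vertices) - 1):
            dist = point_to_segment_distance(x, vertices[i], vertices[i + 1])
            if dist < best[2]:
                best = (label, i, dist)
    return best

# Example with two hypothetical FSTs in a 3D feature space.
fsts = {
    "object_A": np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0], [2.0, 1.0, 0.2]]),
    "object_B": np.array([[0.0, 2.0, 1.0], [1.0, 2.5, 1.2]]),
}
label, segment_index, distance = classify_and_estimate_pose(np.array([1.1, 0.6, 0.1]), fsts)
```

The nearest segment index localizes the unknown view between two stored aspect views, which is what allows the pose estimate to interpolate between sampled viewpoints rather than snapping to the closest stored view.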
© (1996) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
David P. Casasent and Michael A. Sipe "Feature space trajectory representation and processing for active vision", Proc. SPIE 2904, Intelligent Robots and Computer Vision XV: Algorithms, Techniques, Active Vision, and Materials Handling, (29 October 1996); https://doi.org/10.1117/12.256297
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Error analysis
Sensors
Active vision
Fourier transforms
Feature extraction
Databases
Neurons
