Manifold learning, also called nonlinear dimensionality reduction, affords a way to understand and visualize the structure of nonlinear hyperspectral datasets. These methods represent the manifold topology with graphs and use metrics such as geodesic distance, allowing higher-dimensional objects to be embedded in a lower-dimensional space. However, some manifold learning algorithms have O(N³) complexity and are therefore computationally expensive. In this paper we present CUDA-based parallel implementations of the three most popular manifold learning algorithms, namely Isomap, Locally Linear Embedding, and Laplacian Eigenmaps, using the CUDA multi-thread model. The resulting reduced hyperspectral images were then segmented with active contours as an application of this dimensionality reduction. The manifold learning algorithms were implemented on a 64-bit workstation equipped with a quad-core Intel® Xeon, 12 GB of RAM, and two NVIDIA Tesla C1060 GPU cards. The parallel implementations significantly outperform their serial counterparts, achieving speedups of up to 26×. They also show good scalability when varying the dataset size and the number of K nearest neighbors.
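To make the graph-based idea concrete, the following is a minimal, illustrative sketch (not the paper's CUDA implementation) of the step shared by these methods: building a K-nearest-neighbor graph and, as in Isomap, approximating geodesic distances by shortest paths over that graph. The toy data, the choice of K, and the use of Floyd–Warshall (which is exactly an O(N³) step of the kind the abstract refers to) are assumptions for illustration only.

```python
import math

# Toy dataset: a few points along a circular arc (illustrative, not from the paper)
points = [(math.cos(t), math.sin(t)) for t in (0.0, 0.3, 0.6, 0.9, 1.2)]
N = len(points)
K = 2  # number of nearest neighbors (assumed value)

def euclid(a, b):
    return math.dist(a, b)

# Step 1: K-nearest-neighbor graph, edges weighted by Euclidean distance.
INF = float("inf")
D = [[INF] * N for _ in range(N)]
for i in range(N):
    D[i][i] = 0.0
    # Sort other points by distance; skip index 0, which is the point itself.
    nbrs = sorted(range(N), key=lambda j: euclid(points[i], points[j]))[1:K + 1]
    for j in nbrs:
        w = euclid(points[i], points[j])
        D[i][j] = min(D[i][j], w)  # symmetrize the kNN graph
        D[j][i] = min(D[j][i], w)

# Step 2: geodesic (shortest-path) distances via Floyd-Warshall,
# an O(N^3) computation -- the kind of step the paper parallelizes on the GPU.
for m in range(N):
    for i in range(N):
        for j in range(N):
            if D[i][m] + D[m][j] < D[i][j]:
                D[i][j] = D[i][m] + D[m][j]
```

Because the geodesic distance follows the curved manifold rather than cutting through the ambient space, `D[0][N-1]` comes out larger than the straight-line distance between the endpoints; an embedding algorithm such as Isomap would then apply classical MDS to this geodesic distance matrix.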