© 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Abstract
We propose a new method for linear dimensionality reduction of manifold-modeled data. Given a training set X of Q points belonging to a manifold M ⊂ ℝ^N, we construct a linear operator P : ℝ^N → ℝ^M that approximately preserves the norms of all Q(Q−1)/2 pairwise difference vectors (or secants) of X. We design the matrix P via a trace-norm minimization that can be efficiently solved as a semidefinite program (SDP). When X comprises a sufficiently dense sampling of M, we prove that the optimal matrix P preserves the norms of all secants of M. We numerically demonstrate the considerable gains of our SDP-based approach over existing linear dimensionality reduction methods, such as principal components analysis (PCA) and random projections.
Proceedings of the IEEE Statistical Signal Processing Workshop (SSP), 2012, pp. 728–731.