Random Matrices in Data Analysis

Dimitris Achlioptas
Microsoft Research, Redmond, WA 98052, USA
optas@microsoft.com
Abstract. We show how carefully crafted random matrices can achieve distance-preserving dimensionality reduction, accelerate spectral computations, and reduce the sample complexity of certain kernel methods.
1 Introduction
Given a collection of n data points (vectors) in high-dimensional Euclidean space, it is natural to ask whether they can be projected into a lower-dimensional Euclidean space without suffering great distortion. Two particularly interesting classes of projections are: i) projections that tend to preserve the interpoint distances, and ii) projections that maximize the average projected vector length.
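The second class of projections, those maximizing the average projected vector length, corresponds to projecting onto the top singular vectors of the data matrix (i.e., PCA). A minimal sketch in plain NumPy, illustrating the standard construction rather than anything specific to this paper (the data and dimensions below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))  # 200 points in 50 dimensions

# Projecting onto the top-k right singular vectors maximizes the
# average squared length of the projected vectors over all
# k-dimensional orthonormal bases (this is PCA).
k = 5
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:k].T                        # 50 x k orthonormal basis
pca_energy = np.linalg.norm(X @ P) ** 2 / len(X)

# For comparison: a random k-dimensional orthonormal basis
# captures no more projected energy than the SVD basis.
Q, _ = np.linalg.qr(rng.standard_normal((50, k)))
rand_energy = np.linalg.norm(X @ Q) ** 2 / len(X)
```

The optimality of the SVD basis here is the Eckart-Young theorem: the captured energy equals the sum of the top k squared singular values, which no other k-dimensional subspace can exceed.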
In the last few years, distance-preserving projections have had great impact in theoretical computer science, where they have been useful in a variety of algorithmic settings, such as approximate nearest neighbor search, clustering, learning mixtures of distributions, and computing statistics of streamed data.
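A distance-preserving projection in the Johnson-Lindenstrauss style can be sketched in a few lines of NumPy. The Gaussian matrix below is one standard choice, not necessarily the construction analyzed in this paper, and the dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, k = 100, 1000, 200           # n points, original dim d, target dim k
X = rng.standard_normal((n, d))    # data matrix, one point per row

# Random projection: entries i.i.d. N(0, 1/k), so squared norms (and hence
# interpoint distances) are preserved in expectation, and concentrate
# around their expectation for k large enough.
R = rng.standard_normal((d, k)) / np.sqrt(k)
Y = X @ R

# Compare one interpoint distance before and after projection.
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
proj = np.linalg.norm(Y[i] - Y[j])
ratio = proj / orig                # close to 1 with high probability
```

Note that the projection matrix is chosen without looking at the data; the distance-preservation guarantee holds with high probability over the choice of R, uniformly over any fixed set of n points.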
The general idea is that by providing a low-dimensional representation of the data, distance-preserving embeddings dramatically speed up algorithms whose run-time depends exponentially on the dimension of the working space. At the
Source: Achlioptas, Dimitris - Department of Computer Engineering, University of California at Santa Cruz

Collections: Computer Technologies and Information Sciences