Binary Embedding: Fundamental Limits and Fast Algorithm

19 Feb 2015  ·  Xinyang Yi, Constantine Caramanis, Eric Price ·

Binary embedding is a nonlinear dimension reduction methodology where high-dimensional data are embedded into the Hamming cube while preserving the structure of the original space. Specifically, for an arbitrary set of $N$ distinct points in $\mathbb{S}^{p-1}$, the goal is to encode each point as an $m$-dimensional binary string such that the geodesic distance between any pair of points can be reconstructed up to uniform distortion $\delta$. Existing binary embedding algorithms either lack theoretical guarantees or suffer from running time $O\big(mp\big)$. We make three contributions: (1) we establish a lower bound showing that any binary embedding oblivious to the set of points requires $m = \Omega(\frac{1}{\delta^2}\log{N})$ bits, and a similar lower bound for non-oblivious embeddings into Hamming distance; (2) we propose a novel fast binary embedding algorithm with provably optimal bit complexity $m = O\big(\frac{1}{\delta^2}\log{N}\big)$ and near-linear running time $O(p \log p)$ whenever $\log N \ll \delta \sqrt{p}$, with a slightly worse running time for larger $\log N$; (3) we also provide an analytic result on embedding a general set of points $K \subseteq \mathbb{S}^{p-1}$, possibly of infinite size. Our theoretical findings are supported by experiments on both synthetic and real data sets.
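To make the setting concrete, here is a minimal sketch of the classical dense binary embedding via random hyperplanes (sign of Gaussian projections). This is the $O(mp)$-time baseline the abstract contrasts with, not the paper's fast algorithm: for a standard Gaussian vector $g$, $\Pr[\operatorname{sign}(g^\top x) \neq \operatorname{sign}(g^\top y)] = \theta(x,y)/\pi$, so the normalized Hamming distance between the bit strings estimates the geodesic distance (angle) up to distortion. All variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

p, m = 128, 4096  # ambient dimension, number of bits

# Dense Gaussian projection matrix: embedding one point costs O(m*p) time.
G = rng.standard_normal((m, p))

def embed(x):
    """Map a unit vector in S^{p-1} to an m-bit string (boolean array of sign bits)."""
    return np.signbit(G @ x)

def geodesic(x, y):
    """Geodesic (angular) distance between two unit vectors."""
    return np.arccos(np.clip(x @ y, -1.0, 1.0))

# Draw two random points on the sphere.
x = rng.standard_normal(p); x /= np.linalg.norm(x)
y = rng.standard_normal(p); y /= np.linalg.norm(y)

# Normalized Hamming distance, rescaled by pi, concentrates around
# geodesic(x, y); the deviation shrinks as O(1/sqrt(m)).
hamming_fraction = np.mean(embed(x) != embed(y))
angle_estimate = np.pi * hamming_fraction
print(abs(angle_estimate - geodesic(x, y)))  # small for large m
```

The paper's contribution is to achieve the same $\delta$ distortion with the optimal $m = O(\frac{1}{\delta^2}\log N)$ bits while replacing the dense matrix-vector product with a structured transform running in near-linear $O(p \log p)$ time.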


Categories


Data Structures and Algorithms · Information Theory
