FASTER BINARY EMBEDDINGS FOR PRESERVING EUCLIDEAN DISTANCES

Abstract

We propose a fast, distance-preserving, binary embedding algorithm to transform a high-dimensional dataset T ⊆ R^n into binary sequences in the cube {±1}^m. When T consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to Ax, where A ∈ R^{m×n} is a sparse Gaussian random matrix. This contrasts with most binary embedding methods, which usually use x ↦ sign(Ax) for the embedding. Moreover, we show that Euclidean distances among the elements of T are approximated by the ℓ_1 norm on the images of {±1}^m under a fast linear transformation. This again contrasts with standard methods, where the Hamming distance is used instead. Our method is both fast and memory efficient, with time complexity O(m) and space complexity O(m) on well-spread data. When the data is not well-spread, we show that the approach still works provided that the data is transformed via a Walsh-Hadamard matrix, but now the cost is O(n log n) per data point. Further, we prove that the method is accurate and its associated error is comparable to that of a continuous-valued Johnson-Lindenstrauss embedding plus a quantization error that admits a polynomial decay as the embedding dimension m increases. Thus the length of the binary codes required to achieve a desired accuracy is quite small, and we show it can even be compressed further without compromising the accuracy. To illustrate our results, we test the proposed method on natural images and show that it achieves strong performance.
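
To make the contrast above concrete, the sketch below (illustrative only, not the exact scheme analyzed in this paper) compares the standard sign(Ax) embedding with a first-order Sigma-Delta quantizer, the simplest example of a stable noise-shaping scheme; a dense Gaussian matrix stands in for the sparse Gaussian matrix A used by our method, and the fast linear transformation used to recover distances is omitted.

```python
import numpy as np

def sign_embedding(A, x):
    """Standard binary embedding: q = sign(Ax)."""
    return np.sign(A @ x)

def sigma_delta_embedding(A, x):
    """First-order Sigma-Delta (noise-shaping) quantization of y = Ax.

    The i-th bit is q_i = sign(y_i + u_{i-1}) and the state update
    u_i = u_{i-1} + y_i - q_i feeds the quantization error forward,
    shaping it so that a subsequent (averaging-type) linear map can
    filter it out. The recursion is stable (|u_i| <= 1) whenever
    |y_i| <= 1 for all i.
    """
    y = A @ x
    q = np.empty_like(y)
    u = 0.0
    for i, yi in enumerate(y):
        q[i] = 1.0 if yi + u >= 0 else -1.0
        u = u + yi - q[i]
    return q

# Toy usage: a dense Gaussian A as a stand-in for the sparse Gaussian matrix.
rng = np.random.default_rng(0)
n, m = 128, 512
x = rng.standard_normal(n)
x /= 2 * np.linalg.norm(x)                    # keep entries of Ax well inside [-1, 1]
A = rng.standard_normal((m, n)) / np.sqrt(n)
print(sign_embedding(A, x)[:8])
print(sigma_delta_embedding(A, x)[:8])
```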

1. INTRODUCTION

Analyzing large data sets of high-dimensional raw data is usually computationally demanding and memory intensive. As a result, it is often necessary as a preprocessing step to transform data into a lower-dimensional space while approximately preserving important geometric properties, such as pairwise ℓ_2 distances. As a critical result in dimensionality reduction, the Johnson-Lindenstrauss (JL) lemma (Johnson & Lindenstrauss, 1984) guarantees that every finite set T ⊆ R^n can be (linearly) mapped to an m = O(ε^{-2} log(|T|))-dimensional space in such a way that all pairwise distances are preserved up to a (1 ± ε) bi-Lipschitz distortion. Additionally, there are many significant results that speed up the JL transform by introducing fast embeddings, e.g. (Ailon & Chazelle, 2009; Ailon & Liberty, 2013; Krahmer & Ward, 2011; Nelson et al., 2014), or by using sparse matrices (Kane & Nelson, 2014; 2010; Clarkson & Woodruff, 2017). Such fast embeddings can usually be computed in O(n log n) time, versus the O(mn) time complexity of JL transforms that rely on unstructured dense matrices.
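
For concreteness, the following minimal sketch (with an illustrative constant in place of the one in the lemma) checks numerically that a dense Gaussian matrix with m = O(ε^{-2} log |T|) rows approximately preserves all pairwise ℓ_2 distances of a small point set; the fast and sparse constructions cited above replace this unstructured matrix to reduce the embedding cost.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, eps = 1000, 50, 0.2                       # ambient dimension, |T|, target distortion
m = int(np.ceil(8 * np.log(N) / eps ** 2))      # m = O(eps^-2 log|T|); the constant 8 is illustrative

T = rng.standard_normal((N, n))                 # the finite set T, one point per row
A = rng.standard_normal((m, n)) / np.sqrt(m)    # dense, unstructured Gaussian JL map

ratios = []
for i in range(N):
    for j in range(i + 1, N):
        d_orig = np.linalg.norm(T[i] - T[j])
        d_emb = np.linalg.norm(A @ (T[i] - T[j]))
        ratios.append(d_emb / d_orig)

# With high probability every ratio lies in [1 - eps, 1 + eps].
print(f"m = {m}, distortion range: [{min(ratios):.3f}, {max(ratios):.3f}]")
```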

1.1. RELATED WORK

To further reduce memory requirements, progress has been made in nonlinearly embedding high-dimensional sets T ⊆ R^n into the binary cube {-1, 1}^m with m ≪ n, a process known as binary embedding. Provided that d_1(·, ·) is a metric on R^n, a distance-preserving binary embedding is a map

