FASTER BINARY EMBEDDINGS FOR PRESERVING EUCLIDEAN DISTANCES

Abstract

We propose a fast, distance-preserving, binary embedding algorithm to transform a high-dimensional dataset T ⊆ R^n into binary sequences in the cube {±1}^m. When T consists of well-spread (i.e., non-sparse) vectors, our embedding method applies a stable noise-shaping quantization scheme to Ax, where A ∈ R^(m×n) is a sparse Gaussian random matrix. This contrasts with most binary embedding methods, which usually use x ↦ sign(Ax) for the embedding. Moreover, we show that Euclidean distances among the elements of T are approximated by the ℓ1 norm on the images of {±1}^m under a fast linear transformation. This again contrasts with standard methods, where the Hamming distance is used instead. Our method is both fast and memory efficient, with time complexity O(m) and space complexity O(m) on well-spread data. When the data is not well-spread, we show that the approach still works provided the data is first transformed via a Walsh-Hadamard matrix, but now the cost is O(n log n) per data point. Further, we prove that the method is accurate and that its associated error is comparable to that of a continuous-valued Johnson-Lindenstrauss embedding plus a quantization error that decays polynomially as the embedding dimension m increases. Thus the length of the binary codes required to achieve a desired accuracy is quite small, and we show it can even be compressed further without compromising the accuracy. To illustrate our results, we test the proposed method on natural images and show that it achieves strong performance.

1. INTRODUCTION

Analyzing large data sets of high-dimensional raw data is usually computationally demanding and memory intensive. As a result, it is often necessary as a preprocessing step to transform data into a lower-dimensional space while approximately preserving important geometric properties, such as pairwise ℓ2 distances. As a critical result in dimensionality reduction, the Johnson-Lindenstrauss (JL) lemma (Johnson & Lindenstrauss, 1984) guarantees that every finite set T ⊆ R^n can be (linearly) mapped to an m = O(ε^(-2) log |T|) dimensional space in such a way that all pairwise distances are preserved up to a multiplicative distortion of 1 ± ε. Additionally, there are many significant results that speed up the JL transform by introducing fast embeddings, e.g., (Ailon & Chazelle, 2009; Ailon & Liberty, 2013; Krahmer & Ward, 2011; Nelson et al., 2014), or by using sparse matrices (Kane & Nelson, 2014; 2010; Clarkson & Woodruff, 2017). Such fast embeddings can usually be computed in O(n log n) time, versus the O(mn) time complexity of JL transforms that rely on unstructured dense matrices.
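As a minimal illustration of the JL lemma discussed above, the following sketch projects points through a dense Gaussian random matrix scaled by 1/√m and checks how well one pairwise distance is preserved. The dimensions and seed are illustrative choices, not values from the paper.

```python
import numpy as np

# Dense JL embedding sketch: project with a Gaussian random matrix scaled
# by 1/sqrt(m); pairwise l2 distances are preserved up to a small
# multiplicative distortion with high probability.
rng = np.random.default_rng(0)
n, m, num_points = 1000, 200, 50          # illustrative sizes

X = rng.standard_normal((num_points, n))  # dataset T, rows are points
A = rng.standard_normal((m, n)) / np.sqrt(m)  # JL matrix
Y = X @ A.T                               # embedded points, now m-dimensional

# compare one pairwise distance before and after embedding
d_orig = np.linalg.norm(X[0] - X[1])
d_emb = np.linalg.norm(Y[0] - Y[1])
ratio = d_emb / d_orig                    # concentrates near 1 as m grows
```

Note that this dense projection costs O(mn) per point; the fast and sparse constructions cited above reduce this to O(n log n) or better.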

1.1. RELATED WORK

To further reduce memory requirements, progress has been made in nonlinearly embedding high-dimensional sets T ⊆ R^n into the binary cube {-1, 1}^m with m ≪ n, a process known as binary embedding. Provided that d1(·, ·) is a metric on R^n, a distance-preserving binary embedding is a map f : T → {-1, 1}^m together with a function d2(·, ·) on {-1, 1}^m × {-1, 1}^m that approximates distances, i.e.,

|d2(f(x), f(y)) - d1(x, y)| ≤ α, for all x, y ∈ T. (1)

The potential dimensionality reduction (m ≪ n) and the 1-bit representation per dimension imply that storage space can be considerably reduced and that downstream applications like learning and retrieval can happen directly using bitwise operations. Most existing nonlinear mappings f in (1) are generated using simple memoryless scalar quantization (MSQ). For example, given a set of unit vectors T ⊆ S^(n-1) with finite size |T|, consider the map

q_x := f(x) = sign(Gx) (2)

where G ∈ R^(m×n) is a standard Gaussian random matrix and sign(·) returns the element-wise sign of its argument. Let d1(x, y) = (1/π) arccos(⟨x, y⟩ / (‖x‖2 ‖y‖2)) be the normalized angular distance and d2(q_x, q_y) = (1/(2m)) ‖q_x - q_y‖1 be the normalized Hamming distance. Then Yi et al. (2015) show that (1) holds with probability at least 1 - η provided m ≳ α^(-2) log(|T|/η), so one can approximate geodesic distances with normalized Hamming distances. While this approach achieves optimal bit complexity (up to constants) (Yi et al., 2015), it has been observed in practice that m usually needs to be on the order of n to guarantee reasonable accuracy (Gong et al., 2013; Sánchez & Perronnin, 2011; Yu et al., 2014). Much as linear JL embedding techniques admit fast counterparts, fast binary embedding algorithms have been developed to significantly reduce the runtime of binary embeddings (Gong et al., 2012b; Liu et al., 2011; Gong et al., 2012a; 2013; Li et al., 2011; Raginsky & Lazebnik, 2009).
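The MSQ-style embedding (2) and the distance relationship above can be sketched as follows: the normalized Hamming distance between the sign codes concentrates around the normalized angular distance between the unit vectors. The sizes and seed below are illustrative.

```python
import numpy as np

# Binary embedding q_x = sign(Gx) as in (2): with G standard Gaussian,
# the normalized Hamming distance between codes approximates the
# normalized angular (geodesic) distance between unit vectors.
rng = np.random.default_rng(1)
n, m = 100, 20000                         # illustrative sizes

x = rng.standard_normal(n); x /= np.linalg.norm(x)
y = rng.standard_normal(n); y /= np.linalg.norm(y)

G = rng.standard_normal((m, n))
qx, qy = np.sign(G @ x), np.sign(G @ y)   # binary codes in {-1, 1}^m

# d2 from the text: (1/(2m)) * ||qx - qy||_1, i.e. fraction of sign flips
d_hamming = np.abs(qx - qy).sum() / (2 * m)
# d1 from the text: normalized angular distance
d_angular = np.arccos(np.clip(x @ y, -1.0, 1.0)) / np.pi

err = abs(d_hamming - d_angular)          # small when m is large
```

As the text notes, this error decays slowly in m (the MSQ rate), which motivates the noise-shaping alternative discussed next.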
Indeed, fast JL transforms (FJLT) and Gaussian Toeplitz matrices (Yi et al., 2015), structured hashed projections (Choromanska et al., 2016), iterative quantization (Gong et al., 2012b), bilinear projection (Gong et al., 2013), circulant binary embedding (Yu et al., 2014; Dirksen & Stollenwerk, 2018; 2017; Oymak et al., 2017; Kim et al., 2018), sparse projection (Xia et al., 2015), and fast orthogonal projection (Zhang et al., 2015) have all been considered. These methods can decrease time complexity to O(n log n) operations per embedding, but still suffer from some important drawbacks. Notably, due to the sign function, these algorithms completely discard all magnitude information, as sign(Ax) = sign(A(αx)) for all α > 0. So all points in the same direction embed to the same binary vector and cannot be distinguished. Even if one settles for recovering geodesic distances, using the sign function in (2) is an instance of MSQ, so the estimation error α in (1) decays slowly as the number of bits m increases (Yi et al., 2015). In addition to the above data-independent approaches, there are data-dependent embedding methods for distance recovery, including product quantization (Jegou et al., 2010; Ge et al., 2013), LSH-based methods (Andoni & Indyk, 2006; Shrivastava & Li, 2014; Datar et al., 2004), and iterative quantization (Gong et al., 2012c). Their accuracy, which can be excellent, nevertheless depends on the underlying distribution of the input dataset. Moreover, they may be associated with larger time and space complexity for embedding the data. For example, product quantization performs k-means clustering in each subspace to find potential centroids and stores the associated lookup tables, while LSH-based methods need random shifts and dense random projections to quantize each input data point.
Recently, Huynh & Saab (2020) resolved these issues by replacing the simple sign function with a Sigma-Delta (Σ∆) quantization scheme, or alternatively other noise-shaping schemes (see (Chou & Güntürk, 2016)), whose properties are discussed in Section 3. They use the binary embedding

q_x := Q(DBx) (3)

where Q is now a stable Σ∆ quantization scheme, D ∈ R^(m×m) is a diagonal matrix with random signs, and B ∈ R^(m×n) is one of several specific structured random matrices. To give an example of Σ∆ quantization in this context, let w := DBx. Then the simplest Σ∆ scheme computes q_x via the following iteration, run for i = 1, ..., m:

u_0 = 0,
q_x(i) = sign(w_i + u_(i-1)),
u_i = u_(i-1) + w_i - q_x(i). (4)

The choices of B in (Huynh & Saab, 2020) allow the matrix-vector multiplication to be implemented using the fast Fourier transform. Then the original Euclidean distance ‖x - y‖2 can be recovered via a pseudo-metric on the quantized vectors given by

d_V(q_x, q_y) := ‖V(q_x - q_y)‖2 (5)
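The first-order Σ∆ recursion in (4) can be sketched directly. The state u_i tracks the accumulated quantization error, and when the inputs satisfy |w_i| ≤ 1 the state remains bounded by 1, which is the stability property the scheme relies on. The input below is synthetic, for illustration only.

```python
import numpy as np

def sigma_delta(w):
    """First-order Sigma-Delta quantizer, as in (4): returns a code in {-1, 1}^m.

    The state u stays bounded (|u_i| <= 1) whenever |w_i| <= 1, so the
    running sums of the code track the running sums of the input.
    """
    u = 0.0
    q = np.empty_like(w)
    for i, wi in enumerate(w):
        q[i] = 1.0 if wi + u >= 0 else -1.0  # q_x(i) = sign(w_i + u_{i-1})
        u = u + wi - q[i]                    # u_i = u_{i-1} + w_i - q_x(i)
    return q

# illustrative input playing the role of w = DBx, with entries in [-1, 1]
rng = np.random.default_rng(2)
w = rng.uniform(-1.0, 1.0, size=64)
q = sigma_delta(w)

# stability check: |u_k| = |sum(w[:k]) - sum(q[:k])| <= 1 for all k
state = np.cumsum(w) - np.cumsum(q)
```

Unlike the memoryless sign map in (2), each bit here depends on the running error state, which is what "shapes" the quantization noise and yields the faster error decay in m that the paper exploits.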

