ANCHOR & TRANSFORM: LEARNING SPARSE EMBEDDINGS FOR LARGE VOCABULARIES

Abstract

Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g., natural groupings and similarities) and embed the objects independently into individual vectors. As a result, existing methods do not scale to large vocabulary sizes. In this paper, we design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix. We call our method ANCHOR & TRANSFORM (ANT) as the embeddings of discrete objects are a sparse linear combination of the anchors, weighted according to the transformation matrix. ANT is scalable, flexible, and end-to-end trainable. We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric prior for embeddings that encourages sparsity and leverages natural groupings among objects. By deriving an approximate inference algorithm based on Small Variance Asymptotics, we obtain a natural extension that automatically learns the optimal number of anchors instead of having to tune it as a hyperparameter. On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes and demonstrates stronger performance with fewer parameters (up to 40× compression) as compared to existing compression baselines. Code for our experiments can be found at https://github.com/pliang279/sparse_discrete.

1. INTRODUCTION

Most machine learning models, including neural networks, operate on vector spaces. Therefore, when working with discrete objects such as text, we must define a method of converting objects into vectors. The standard way to map objects to continuous representations involves: 1) defining the vocabulary V = {v_1, ..., v_|V|} as the set of all objects, and 2) learning a |V| × d embedding matrix that defines a d-dimensional continuous representation for each object. This method has two main shortcomings. Firstly, when |V| is large (e.g., millions of words/users/URLs), this embedding matrix does not scale elegantly and may constitute up to 80% of all trainable parameters (Jozefowicz et al., 2016). Secondly, despite being discrete, these objects usually have underlying structures such as natural groupings and similarities among them. Assigning each object to an individual vector assumes independence and foregoes opportunities for statistical strength sharing. As a result, there has been a large amount of interest in learning sparse interdependent representations for large vocabularies, rather than the full embedding matrix, for cheaper training, storage, and inference.

In this paper, we propose a simple method to learn sparse representations that uses a global set of vectors, which we call the anchors, and expresses the embeddings of discrete objects as a sparse linear combination of these anchors, as shown in Figure 1. One can consider these anchors to represent latent topics or concepts. Therefore, we call the resulting method ANCHOR & TRANSFORM (ANT). The approach is reminiscent of low-rank and sparse coding approaches; surprisingly, however, such methods have not been elegantly integrated with deep networks in the literature.
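To make the parameterization concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the anchor-and-transform factorization: a small anchor matrix A of shape (num_anchors, d) and a sparse transformation T of shape (|V|, num_anchors), so that each object's embedding is T[v] @ A. All names, sizes, and the fixed per-row sparsity pattern here are illustrative assumptions; in ANT both A and T are learned end-to-end with a sparsity-inducing objective.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, num_anchors, dim = 1000, 16, 8  # illustrative sizes

# Anchor embeddings A: a small set of dense vectors (latent topics/concepts).
A = rng.normal(size=(num_anchors, dim))

# Sparse transformation T: each row holds a few nonzero weights over anchors.
# Here we fix 3 active anchors per object purely for illustration.
T = np.zeros((vocab_size, num_anchors))
for v in range(vocab_size):
    support = rng.choice(num_anchors, size=3, replace=False)
    T[v, support] = rng.random(3)

def embed(v):
    # Embedding of object v: sparse linear combination of the anchors.
    return T[v] @ A

# The full |V| x d embedding matrix E = T A is never materialized in practice;
# we form it here only to check shapes and parameter counts.
E = T @ A
full_params = vocab_size * dim                        # dense embedding table
ant_params = num_anchors * dim + np.count_nonzero(T)  # anchors + nonzeros of T
print(E.shape, full_params, ant_params)
```

With these toy sizes the factorization stores 16 × 8 anchor parameters plus 3 nonzeros per row of T, instead of a 1000 × 8 dense table, which is where the compression in the abstract comes from.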
Competitive attempts are often complex (e.g., optimized with RL (Joglekar et al., 2019)), involve multiple training stages (Ginart et al., 2019; Liu et al., 2017), or require post-processing (Svenstrup et al., 2017; Guo et al., 2017; Aharon et al., 2006; Awasthi & Vijayaraghavan, 2018). We derive a simple optimization objective that learns these anchors and sparse transformations in an end-to-end manner. ANT is

