DEEP LEARNING MEETS PROJECTIVE CLUSTERING

Abstract

A common approach for compressing Natural Language Processing (NLP) networks is to encode the embedding layer as a matrix A ∈ R^{n×d}, compute its rank-j approximation A_j via SVD (Singular Value Decomposition), and then factor A_j into a pair of matrices that correspond to smaller fully-connected layers to replace the original embedding layer. Geometrically, the rows of A represent points in R^d, and the rows of A_j represent their projections onto the j-dimensional subspace that minimizes the sum of squared distances ("errors") to the points. In practice, these rows of A may be spread around k > 1 subspaces, so factoring A based on a single subspace may lead to large errors that turn into large drops in accuracy. Inspired by projective clustering from computational geometry, we suggest replacing this subspace by a set of k subspaces, each of dimension j, that minimizes the sum of squared distances over every point (row in A) to its closest subspace. Based on this approach, we provide a novel architecture that replaces the original embedding layer by a set of k small layers that operate in parallel and are then recombined with a single fully-connected layer. Extensive experimental results on the GLUE benchmark yield networks that are both more accurate and smaller compared to the standard matrix factorization (SVD). For example, we further compress DistilBERT by reducing the size of the embedding layer by 40% while incurring only a 0.5% average drop in accuracy over all nine GLUE tasks, compared to a 2.8% drop using the existing SVD approach. On RoBERTa we achieve 43% compression of the embedding layer with less than a 0.8% average drop in accuracy as compared to a 3% drop previously.
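
To make the two objectives above concrete, the following is a minimal NumPy sketch, not the paper's actual algorithm or code: (a) the standard rank-j SVD factorization that replaces an embedding matrix A with two smaller layers, and (b) a simple Lloyd-style alternation that fits k subspaces of dimension j and assigns every row of A to its closest subspace. All function names and the alternating loop are illustrative assumptions.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np

def svd_factorize(A, j):
    """Rank-j SVD: A ~ (U_j S_j) @ Vt_j, i.e. layers with n*j + j*d parameters."""
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :j] * S[:j], Vt[:j, :]              # shapes (n, j) and (j, d)

def k_subspace_factorize(A, k, j, n_iter=20, seed=0):
    """Alternate between (1) fitting a j-dimensional subspace to each cluster
    by SVD and (2) reassigning every row to its closest subspace."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(k, size=A.shape[0])       # random initial clusters
    for _ in range(n_iter):
        bases = []
        for c in range(k):
            rows = A[labels == c]
            if len(rows) < j:                       # keep degenerate clusters usable
                rows = A[rng.choice(A.shape[0], size=j, replace=False)]
            _, _, Vt = np.linalg.svd(rows, full_matrices=False)
            bases.append(Vt[:j, :])                 # orthonormal basis of the subspace
        # squared distance of row a to a subspace with basis V: ||a||^2 - ||a V^T||^2
        dists = np.stack(
            [(A ** 2).sum(1) - ((A @ V.T) ** 2).sum(1) for V in bases], axis=1)
        labels = dists.argmin(axis=1)               # reassign rows to closest subspace
    return labels, bases

# Toy usage: a 1000 x 64 "embedding" matrix, rank j = 8, k = 4 subspaces.
A = np.random.randn(1000, 64)
P, Q = svd_factorize(A, j=8)                        # 1000*8 + 8*64 params vs 1000*64
labels, bases = k_subspace_factorize(A, k=4, j=8)
```

In the architecture described above, each of the k clusters would then be factored as in the rank-j case, and the resulting k small layers would operate in parallel before being recombined by a single fully-connected layer; that recombination step is omitted from this sketch.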

1. INTRODUCTION AND MOTIVATION

Deep Learning has revolutionized Machine Learning by improving accuracy by dozens of percentage points on fundamental tasks in Natural Language Processing (NLP) through learning representations of natural language via deep neural networks (Mikolov et al., 2013; Radford et al., 2018; Le and Mikolov, 2014; Peters et al., 2018; Radford et al., 2019). Recently, it was shown that these networks need not be trained from scratch for every new task or dataset; instead, a full pre-trained model can be fine-tuned on the specific task (Dai and Le, 2015; Radford et al., 2018; Devlin et al., 2019). However, in many cases these networks are extremely large compared to classical machine learning models. For example, both BERT (Devlin et al., 2019) and XLNet (Yang et al., 2019) have more than 110 million parameters, and RoBERTa (Liu et al., 2019b) consists of more than 125 million parameters. Such large networks have two main drawbacks: (i) they use too much storage, e.g., memory or disk space, which may be infeasible for small IoT devices, smartphones, or when a personalized network is needed for each user/object/task, and (ii) inference may take too much time, especially for real-time NLP applications such as speech recognition, translation, or speech-to-text.

Compressed Networks.

To this end, many papers have suggested different techniques for compressing large NLP networks, e.g., low-rank factorization (Wang et al., 2019; Lan et al., 2019), pruning

