MINIMALISTIC UNSUPERVISED REPRESENTATION LEARNING WITH THE SPARSE MANIFOLD TRANSFORM

Abstract

We describe a minimalistic and interpretable method for unsupervised representation learning that does not require data augmentation, hyperparameter tuning, or other engineering designs, yet achieves performance close to state-of-the-art (SOTA) SSL methods. Our approach leverages the sparse manifold transform [21], which unifies sparse coding, manifold learning, and slow feature analysis. With a one-layer deterministic (one training epoch) sparse manifold transform, it is possible to achieve 99.3% KNN top-1 accuracy on MNIST, 81.1% KNN top-1 accuracy on CIFAR-10, and 53.2% on CIFAR-100. With simple grayscale augmentation, the model achieves 83.2% KNN top-1 accuracy on CIFAR-10 and 57% on CIFAR-100. These results significantly narrow the gap between simplistic "white-box" methods and SOTA methods. We also provide visualizations to illustrate how an unsupervised representation transform is formed. The proposed method is closely connected to latent-embedding self-supervised methods and can be treated as the simplest form of VICReg. Though a small performance gap remains between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled, white-box approach to unsupervised representation learning, which has the potential to significantly improve learning efficiency.

1. INTRODUCTION

Unsupervised representation learning (a.k.a. self-supervised representation learning) aims to build models that automatically find patterns in data and reveal those patterns explicitly in a representation. There has been tremendous progress in unsupervised representation learning over the past few years, and this trend promises unparalleled scalability for future data-driven machine learning. However, questions remain about what exactly a representation is and how it is formed in an unsupervised fashion. Furthermore, it is unclear whether there exists a set of common principles underlying all of these unsupervised representations. Many investigators have appreciated the importance of improving our understanding of unsupervised representation learning and have taken pioneering steps to simplify SOTA methods [18; 19; 114; 22], to establish connections to classical methods [69; 4], to unify different approaches [4; 39; 97; 62; 55], to visualize the representations [9; 116; 15], and to analyze the methods from a theoretical perspective [3; 45; 107; 4]. The hope is that such understanding will lead to a theory that enables us to build simple, fully explainable "white-box" models [14; 13; 71] from data based on first principles. Such a computational theory could guide us toward two intertwined fundamental goals: modeling natural signal statistics and modeling biological sensory systems [83; 31; 32; 65].

Here, we take a small step toward this goal by building a minimalistic white-box unsupervised learning model without deep networks, projection heads, augmentation, or other similar engineering designs. By leveraging the classical unsupervised learning principles of sparsity [81; 82] and low-rank spectral embedding [89; 105], we build a two-layer model that achieves non-trivial benchmark results on several standard datasets. In particular, we show that a two-layer model based on the sparse manifold transform [21], which shares the same objective as latent-embedding SSL methods [5], achieves 99.3% KNN top-1 accuracy on MNIST, 81.1% on CIFAR-10, and 53.2% on CIFAR-100 without data augmentation.
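To make this two-layer construction concrete, the sketch below assembles a sparse coding layer followed by a spectral, slowness-based embedding. It is only a schematic illustration: the fixed random dictionary, hard-thresholded non-negative codes, and synthetic neighbor pairs are simplifying assumptions for exposition and do not reproduce the exact pipeline used in our experiments.

```python
# A minimal sketch of a two-layer "sparse coding + spectral embedding" pipeline
# in the spirit of the sparse manifold transform. Simplifying assumptions:
# a fixed random dictionary, hard-thresholded non-negative codes, and synthetic
# "neighboring" signal pairs stand in for the components used in practice.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def sparse_code(X, dictionary, k=20):
    """Layer 1: crude k-sparse, non-negative codes via thresholded correlation."""
    A = np.maximum(X @ dictionary, 0.0)                    # rectified responses
    thresh = np.partition(A, -k, axis=1)[:, -k][:, None]   # k-th largest per row
    return np.where(A >= thresh, A, 0.0)

def smt_projection(Z1, Z2, d_out=64, eps=1e-3):
    """Layer 2: linear projection that keeps codes of neighboring signals close,
    subject to a whitening constraint on the embedded codes. This reduces to a
    generalized eigenvalue problem; the smallest eigenvalues give the slowest
    directions."""
    Z = np.vstack([Z1, Z2])
    C = Z.T @ Z / len(Z) + eps * np.eye(Z.shape[1])        # code covariance
    D = (Z1 - Z2).T @ (Z1 - Z2) / len(Z1)                  # neighbor-difference covariance
    evals, evecs = eigh(D, C)                              # D v = lambda C v, ascending order
    return evecs[:, :d_out].T                              # rows span the embedding space

# Toy usage with synthetic data.
n, p, m = 5000, 100, 512                     # pairs, signal dim, dictionary size
dictionary = rng.standard_normal((p, m))
dictionary /= np.linalg.norm(dictionary, axis=0)
X1 = rng.standard_normal((n, p))
X2 = X1 + 0.1 * rng.standard_normal((n, p))  # synthetic "neighbors"
Z1, Z2 = sparse_code(X1, dictionary), sparse_code(X2, dictionary)
P = smt_projection(Z1, Z2)
embeddings = Z1 @ P.T                        # representation used for a KNN probe
```

As in the benchmarks above, the resulting embeddings would be evaluated with a KNN top-1 classifier on held-out labels.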

