MINIMALISTIC UNSUPERVISED REPRESENTATION LEARNING WITH THE SPARSE MANIFOLD TRANSFORM

Abstract

We describe a minimalistic and interpretable method for unsupervised representation learning that does not require data augmentation, hyperparameter tuning, or other engineering designs, but nonetheless achieves performance close to state-of-the-art (SOTA) SSL methods. Our approach leverages the sparse manifold transform [21], which unifies sparse coding, manifold learning, and slow feature analysis. With a one-layer deterministic sparse manifold transform (trained in a single epoch), it is possible to achieve 99.3% KNN top-1 accuracy on MNIST, 81.1% KNN top-1 accuracy on CIFAR-10, and 53.2% on CIFAR-100. With simple grayscale augmentation, the model achieves 83.2% KNN top-1 accuracy on CIFAR-10 and 57% on CIFAR-100. These results significantly close the gap between simplistic "white-box" methods and SOTA methods. We also provide visualizations to illustrate how an unsupervised representation transform is formed. The proposed method is closely connected to latent-embedding self-supervised methods and can be treated as the simplest form of VICReg. Though a small performance gap remains between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled, white-box approach to unsupervised representation learning, which has the potential to significantly improve learning efficiency.

1. INTRODUCTION

Unsupervised representation learning (also known as self-supervised representation learning) aims to build models that automatically find patterns in data and reveal these patterns explicitly with a representation. There has been tremendous progress over the past few years in the unsupervised representation learning community, and this trend promises unparalleled scalability for future data-driven machine learning. However, questions remain about what exactly a representation is and how it is formed in an unsupervised fashion. Furthermore, it is unclear whether there exists a set of common principles underlying all these unsupervised representations. Many investigators have appreciated the importance of improving our understanding of unsupervised representation learning and taken pioneering steps to simplify SOTA methods [18; 19; 114; 22], to establish connections to classical methods [69; 4], to unify different approaches [4; 39; 97; 62; 55], to visualize the representation [9; 116; 15], and to analyze the methods from a theoretical perspective [3; 45; 107; 4]. The hope is that such understanding will lead to a theory that enables us to build simple, fully explainable "white-box" models [14; 13; 71] from data based on first principles. Such a computational theory could guide us in achieving two intertwined fundamental goals: modeling natural signal statistics, and modeling biological sensory systems [83; 31; 32; 65]. Here, we take a small step toward this goal by building a minimalistic white-box unsupervised learning model without deep networks, projection heads, augmentation, or other similar engineering designs. By leveraging the classical unsupervised learning principles of sparsity [81; 82] and low-rank spectral embedding [89; 105], we build a two-layer model that achieves non-trivial benchmark results on several standard datasets.
In particular, we show that a two-layer model based on the sparse manifold transform [21], which shares the same objective as latent-embedding SSL methods [5], achieves 99.3% KNN top-1 accuracy on MNIST, 81.1% KNN top-1 accuracy on CIFAR-10, and 53.2% on CIFAR-100 without data augmentation. With simple grayscale augmentation, it achieves 83.2% KNN top-1 accuracy on CIFAR-10 and 57% KNN top-1 accuracy on CIFAR-100. These results narrow the gap between a white-box model and the SOTA SSL models [18; 20; 5; 117]. Though a gap remains, narrowing it further could lead to a deeper understanding of unsupervised representation learning and offers a promising path toward a useful theory. We begin the technical discussion by addressing three fundamental questions. In Section 2, we revisit the formulation of SMT from a more general perspective and discuss how it can solve various unsupervised representation learning problems. In Section 3, we present benchmark results on MNIST, CIFAR-10, and CIFAR-100, together with visualizations and ablations. Additional related literature is addressed in Section 4, and we offer further discussion in Section 5. This paper makes four main contributions. 1) The original SMT paper explains SMT only from a manifold learning perspective; we provide a novel and crucial interpretation of SMT from a probabilistic co-occurrence point of view. 2) The original SMT paper may create the misleading impression that time is necessary to establish this transformation; this is not the case, and in this paper we explain in detail how different kinds of locality (or similarity) can be leveraged to establish the transform. 3) We provide benchmark results that support the theoretical proposal in SMT. 4) We establish a connection between SMT and VICReg (and other SSL methods). Because SMT is built purely from neural and statistical principles, this connection leads to a better understanding of self-supervised learning models.
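As a point of reference for the numbers above, the KNN top-1 metric can be sketched as follows. This is an illustrative sketch rather than the paper's exact evaluation protocol: it classifies each test representation by its single nearest training neighbor under cosine similarity (both choices are assumptions here), and it is demonstrated on tiny synthetic data.

```python
import numpy as np

def knn_top1(train_z, train_y, test_z, test_y):
    """Top-1 accuracy of a 1-nearest-neighbor classifier on representations."""
    # L2-normalize so that inner product equals cosine similarity.
    tn = train_z / np.linalg.norm(train_z, axis=1, keepdims=True)
    sn = test_z / np.linalg.norm(test_z, axis=1, keepdims=True)
    nearest = (sn @ tn.T).argmax(axis=1)        # index of nearest train point
    return (train_y[nearest] == test_y).mean()

# Tiny synthetic check: two well-separated Gaussian "classes" in 2-D.
rng = np.random.default_rng(0)
z0 = rng.normal([5.0, 0.0], 0.1, (50, 2))
z1 = rng.normal([0.0, 5.0], 0.1, (50, 2))
train_z = np.vstack([z0, z1])
train_y = np.array([0] * 50 + [1] * 50)
test_z = np.vstack([rng.normal([5.0, 0.0], 0.1, (10, 2)),
                    rng.normal([0.0, 5.0], 0.1, (10, 2))])
test_y = np.array([0] * 10 + [1] * 10)
acc = knn_top1(train_z, train_y, test_z, test_y)  # → 1.0 on this toy data
```

A better representation places same-class points nearer to each other, which this metric rewards without training any classifier head.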

Three fundamental questions:

What is an unsupervised (self-supervised) representation? Any non-identity transformation of the original signal can be called a representation. One general goal in unsupervised representation learning is to find a function that transforms raw data into a new space such that "similar" things are placed closer to each other and the new space is not simply a collapsed, trivial space.¹ That is, the important geometric or stochastic structure of the data must be preserved. If this goal is achieved, then naturally "dissimilar" things are placed far apart in the representation space.

Where does "similarity" come from? "Similarity" comes from three classical ideas, which have been proposed multiple times in different contexts: 1) temporal co-occurrence [112; 119], 2) spatial co-occurrence [90; 28; 34; 103; 84], and 3) local metric neighborhoods [89; 105] in the raw signal space. These ideas overlap to a considerable extent when the underlying structure is geometric,² but they can also differ conceptually when the structure is stochastic. In Figure 1, we illustrate the difference between a manifold structure and a stochastic co-occurrence structure. From these notions of similarity, two families of unsupervised representation learning methods emerge: manifold learning and co-occurrence statistics modeling. Interestingly, many of these ideas reach a low-rank spectral decomposition formulation or a closely related matrix factorization formulation [112; 89; 105; 27; 54; 44; 86; 46; 21]. The philosophy of manifold learning is that only local neighborhoods in the raw signal space can be trusted and that global geometry emerges by considering all local neighborhoods together; that is, "think globally, fit locally" [92]. In contrast, co-occurrence statistics modeling [25] offers a probabilistic view, which complements the manifold view, as many structures cannot be modeled by continuous manifolds.
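The local-neighborhood notion of similarity and its low-rank spectral formulation can be made concrete with a small sketch. The toy example below (our illustration, not the paper's model) recovers the geometry of a noisy circle with Laplacian eigenmaps: trust only local metric neighborhoods to build a k-nearest-neighbor affinity graph, then embed with the bottom nontrivial eigenvectors of the graph Laplacian so that neighboring points receive similar values.

```python
import numpy as np

# Sample 200 points on a noisy circle, ordered by angle for easy checking.
rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)
X += 0.02 * rng.normal(size=X.shape)

# Affinity from local metric neighborhoods only (k nearest neighbors).
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
k = 8
W = np.zeros_like(d2)
nbrs = np.argsort(d2, axis=1)[:, 1:k + 1]   # column 0 is the point itself
for i in range(len(X)):
    W[i, nbrs[i]] = 1.0
W = np.maximum(W, W.T)                      # symmetrize the graph

# Low-rank spectral embedding: bottom nontrivial eigenvectors of L = D - W.
L = np.diag(W.sum(1)) - W
vals, vecs = np.linalg.eigh(L)              # eigenvalues in ascending order
embedding = vecs[:, 1:3]                    # skip the constant eigenvector
```

Global circular geometry emerges even though every step used only local information, which is the "think globally, fit locally" philosophy in miniature.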
A prominent example comes from natural language, where the raw data does not come from a smooth geometry. In word embeddings [77; 78; 86], "Seattle" and "Dallas" may reach similar embeddings even though they do not co-occur most frequently; the underlying reason is that they share similar context patterns [68; 118]. The probabilistic and manifold points of view thus complement each other in understanding "similarity." With a definition of similarity in hand, the next step is to construct a non-trivial transform such that similar things are placed closer to one another.

How do we establish the representation transform? Through the parsimonious principles of sparsity and low-rank spectral embedding. The general idea is that we can use sparsity to decompose and tile the data space to build a support; see Figure 2(a, b). Then, we can construct our representation transform with low-frequency functions, which assign similar values to similar points on the support; see Figure 2(c). This process is called the sparse manifold transform [21].
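Under strong simplifying assumptions, the two steps just described can be sketched in a few lines: sparse codes provide the support, and a low-rank spectral projection P, chosen to minimize the change in embedding across temporally adjacent codes subject to a whitening constraint P V Pᵀ = I, plays the role of the low-frequency functions. The synthetic 2-sparse codes and all variable names below are our own illustration, not the paper's implementation; a real system would obtain codes from a learned dictionary via sparse inference.

```python
import numpy as np

# Step 1 (stand-in for sparse coding): 2-sparse codes for a slowly drifting
# signal -- a bump of activity that moves at most one dictionary element per
# time step, so adjacent frames have overlapping support.
rng = np.random.default_rng(1)
T, n_dict, n_embed = 500, 64, 8
A = np.zeros((T, n_dict))
pos = np.cumsum(rng.integers(0, 2, size=T)) % n_dict
for t in range(T):
    A[t, pos[t]] = 1.0
    A[t, (pos[t] + 1) % n_dict] = 0.5

# Step 2 (low-rank spectral embedding): minimize E||P a_t - P a_{t+1}||^2
# subject to P V P^T = I, where V is the code covariance. This reduces to a
# generalized eigenproblem: whiten by V, then keep the slowest directions.
V = A.T @ A / T
Dt = np.diff(A, axis=0)
C = Dt.T @ Dt / (T - 1)                        # covariance of temporal diffs
evals, evecs = np.linalg.eigh(V + 1e-8 * np.eye(n_dict))
Wh = evecs @ np.diag(evals ** -0.5) @ evecs.T  # V^{-1/2} (whitening)
_, U = np.linalg.eigh(Wh @ C @ Wh)             # eigenvalues ascending
P = (Wh @ U[:, :n_embed]).T                    # slowest whitened directions
beta = A @ P.T                                 # similar codes -> similar beta
```

The rows of P act as low-frequency functions on the support tiled by the dictionary: codes whose supports overlap are mapped to nearby points in the embedding space.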



¹ An obvious trivial solution is to collapse every data point to a single point in the new space.
² If we consider that signals which temporally or spatially co-occur are related by a smooth transformation, then these three ideas are equivalent.

