VIEWMAKER NETWORKS: LEARNING VIEWS FOR UNSUPERVISED REPRESENTATION LEARNING

Abstract

Many recent methods for unsupervised representation learning train models to be invariant to different "views," or distorted versions of an input. However, designing these views requires considerable trial and error by human experts, hindering widespread adoption of unsupervised representation learning methods across domains and modalities. To address this, we propose viewmaker networks: generative models that learn to produce useful views from a given input. Viewmakers are stochastic bounded adversaries: they produce views by generating and then adding an ℓp-bounded perturbation to the input, and are trained adversarially with respect to the main encoder network. Remarkably, when pretraining on CIFAR-10, our learned views enable comparable transfer accuracy to the well-tuned SimCLR augmentations, despite not including transformations like cropping or color jitter. Furthermore, our learned views significantly outperform baseline augmentations on speech recordings (+9 points on average) and wearable sensor data (+17 points on average). Viewmaker views can also be combined with handcrafted views: they improve robustness to common image corruptions and can increase transfer performance in cases where handcrafted views are less explored. These results suggest that viewmakers may provide a path towards more general representation learning algorithms, reducing the domain expertise and effort needed to pretrain on a much wider set of domains. Code is available at https://github.com/alextamkin/viewmaker.

Figure 1: Viewmaker networks generate complex and diverse input-dependent views for unsupervised learning. Examples shown are for CIFAR-10. Original image in center with pink border.

1. INTRODUCTION

Unsupervised representation learning has made significant recent strides, including in computer vision, where view-based methods have enabled strong performance on benchmark tasks (Wu et al., 2018; Oord et al., 2018; Bachman et al., 2019; Zhuang et al., 2019; Misra & Maaten, 2020; He et al., 2020; Chen et al., 2020a). Views here refer to human-defined data transformations, which target capabilities or invariances thought to be useful for transfer tasks. In particular, in contrastive learning of visual representations, models are trained to maximize the mutual information between different views of an image, including crops, blurs, noise, and changes to color and contrast (Bachman et al., 2019; Chen et al., 2020a). Much work has investigated the space of possible image views (and their compositions) and sought to understand their effects on transfer learning (Chen et al., 2020a; Wu et al., 2020; Tian et al., 2019; Purushwalkam & Gupta, 2020).

The fact that views must be hand-designed is a significant limitation. While views for image classification have been refined over many years, new views must be developed from scratch for new modalities. Making matters worse, even within a modality, different domains may have different optimal views (Purushwalkam & Gupta, 2020). Previous studies have investigated the properties of good views through the lens of mutual information (Tian et al., 2020; Wu et al., 2020), but a broadly applicable approach for learning views remains unstudied.

In this work, we present a general method for learning diverse and useful views for contrastive learning. Rather than searching through possible compositions of existing view functions (Cubuk et al., 2018; Lim et al., 2019), which may not be available for many modalities, our approach produces views with a generative model, called the viewmaker network, trained jointly with the encoder network.
This flexibility enables learning a broad set of possible view functions, including input-dependent views, without resorting to hand-crafting or expert domain knowledge. The viewmaker network is trained adversarially to create views that increase the contrastive loss of the encoder network. Rather than directly outputting views for an image, the viewmaker instead outputs a stochastic perturbation that is added to the input. This perturbation is projected onto an ℓp sphere, controlling the effective strength of the view, similar to methods in adversarial robustness. This constrained adversarial training method enables the model to reduce the mutual information between different views while preserving useful input features for the encoder to learn from.

In summary, we contribute:

1. Viewmaker networks: to our knowledge, the first modality-agnostic method for learning views for unsupervised representation learning.
2. On image data, where expert-designed views have been extensively optimized, our viewmaker models achieve transfer performance comparable to state-of-the-art contrastive methods while being more robust to common corruptions.
3. On speech data, our method significantly outperforms existing human-defined views on a range of speech recognition transfer tasks.
4. On time-series data from wearable sensors, our model significantly outperforms baseline views on the task of human activity recognition (e.g., cycling, running, jumping rope).
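To make the setup concrete, the following is a minimal sketch of the two core ingredients: the ℓp-norm budget on perturbations and the contrastive objective the two networks compete over. This is an illustration, not the paper's implementation: `project_lp`, `make_views`, and `nt_xent_loss` are hypothetical names, the real viewmaker is a trained image-to-image network rather than an arbitrary callable, and details such as the norm, budget, and clipping follow common conventions rather than the exact choices in the paper.

```python
import numpy as np

def project_lp(delta, eps, p=1):
    # Scale a raw perturbation onto the lp sphere of radius eps,
    # controlling the effective strength of the resulting view.
    norm = np.linalg.norm(delta.ravel(), ord=p) + 1e-8
    return delta * (eps / norm)

def make_views(x, viewmaker, eps=1.0, p=1):
    # Two stochastic views of the same input: each is the input plus an
    # lp-bounded perturbation, clipped back to a valid pixel range.
    view1 = np.clip(x + project_lp(viewmaker(x), eps, p), 0.0, 1.0)
    view2 = np.clip(x + project_lp(viewmaker(x), eps, p), 0.0, 1.0)
    return view1, view2

def nt_xent_loss(z1, z2, tau=0.5):
    # Simplified NT-Xent (SimCLR-style) contrastive loss. z1[i] and z2[i]
    # are L2-normalized encoder embeddings of two views of the same input.
    n = len(z1)
    z = np.concatenate([z1, z2], axis=0)           # (2n, d)
    sim = z @ z.T / tau                            # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The encoder minimizes this loss; the viewmaker is trained to maximize it.
    return -log_prob[np.arange(2 * n), pos].mean()
```

In training, gradients of the same loss flow to both networks with opposite signs: a descent step for the encoder and an ascent step for the viewmaker, yielding the constrained adversarial game described above.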

2. RELATED WORK

Unsupervised representation learning. Learning useful representations from unlabeled data is a fundamental problem in machine learning (Pan & Yang, 2009; Bengio et al., 2013). A recently successful framework for unsupervised representation learning for images involves training a model to be invariant to various data transformations (Bachman et al., 2019; Misra & Maaten, 2020), although the idea has much earlier roots (Becker & Hinton, 1992; Hadsell et al., 2006; Dosovitskiy et al., 2014). This idea has been expanded by a number of contrastive learning approaches, which push embeddings of different views, or transformed inputs, closer together while pushing other pairs apart (Tian et al., 2019; He et al., 2020; Chen et al., 2020a;b;c), as well as by non-contrastive approaches, which do not explicitly push apart unmatched views (Grill et al., 2020; Caron et al., 2020). Related but more limited setups have been explored for speech, where data augmentation strategies are less explored (Oord et al., 2018; Kharitonov et al., 2020).

Data augmentation. Much work has explored different handcrafted data augmentation methods for supervised learning of images (Hendrycks et al., 2020; Lopes et al., 2019; Perez & Wang, 2017; Yun et al., 2019; Zhang et al., 2017), speech (Park et al., 2019; Kovács et al., 2017; Tóth et al., 2018; Kharitonov et al., 2020), or in feature space (DeVries & Taylor, 2017).

Several works have studied the role of views in contrastive learning, including from a mutual-information perspective (Wu et al., 2020), in relation to specific transfer tasks (Tian et al., 2019), with respect to different kinds of invariances (Purushwalkam & Gupta, 2020), or via careful empirical studies (Chen et al., 2020a). Outside of a contrastive learning framework, Gontijo-Lopes et al. (2020) study how data augmentation aids generalization in vision models.

