EFFECTIVE SELF-SUPERVISED PRE-TRAINING ON LOW-COMPUTE NETWORKS WITHOUT DISTILLATION

Abstract

Despite the impressive progress of self-supervised learning (SSL), its applicability to low-compute networks has received limited attention. Reported performance has trailed behind standard supervised pre-training by a large margin, barring self-supervised learning from making an impact on models that are deployed on device. Most prior works attribute this poor performance to the capacity bottleneck of low-compute networks and opt to bypass the problem through the use of knowledge distillation (KD). In this work, we revisit SSL for efficient neural networks, taking a closer look at which factors cause the practical limitations and whether they are intrinsic to the self-supervised low-compute setting. We find that, contrary to accepted knowledge, there is no intrinsic architectural bottleneck; instead, the performance bottleneck stems from the trade-off between model complexity and regularization strength. In particular, we start by empirically observing that the use of local views can have a dramatic impact on the effectiveness of SSL methods. This hints at view sampling being one of the performance bottlenecks for SSL on low-capacity networks. We hypothesize that the view sampling strategy designed for large neural networks, which requires matching views across very diverse spatial scales and contexts, is too demanding for low-capacity architectures. We systematize the design of the view sampling mechanism, leading to a new training methodology that consistently improves performance across different SSL methods (e.g. MoCo-v2, SwAV, or DINO), different low-size networks (convolution-based networks, e.g. MobileNetV2, ResNet18, ResNet34, and vision transformers, e.g. ViT-Ti), and different tasks (linear probe, object detection, instance segmentation, and semi-supervised learning). Our best models establish a new state of the art for SSL methods on low-compute networks despite not using a KD loss term.

1. INTRODUCTION

In this work, we revisit self-supervised learning (SSL) for low-compute neural networks. Previous research has shown that applying SSL methods to low-compute architectures leads to comparatively poor performance (Fang et al., 2021; Gao et al., 2022; Xu et al., 2022), i.e. there is a large performance gap between fully-supervised and self-supervised pre-training on low-compute networks. For example, the linear probe vs supervised gap of MoCo-v2 (Chen et al., 2020c) on ImageNet1K is 5.0% for ResNet50 (71.1% vs 76.1%), but 17.3% for ResNet18 (52.5% vs 69.8%) (Fang et al., 2021). More importantly, while SSL pre-training for large models often exceeds supervised pre-training on a variety of downstream tasks, that is not the case for low-compute networks. Most prior works attribute the poor performance to the capacity bottleneck of the low-compute networks and resort to the use of knowledge distillation (Koohpayegani et al., 2020; Fang et al., 2021; Gao et al., 2022; Xu et al., 2022; Navaneet et al., 2021; Bhat et al., 2021). While achieving significant gains over the stand-alone SSL models, distillation-based approaches mask the problem rather than resolve it. The extra overhead of large teacher models also makes it difficult to deploy these methods in resource-restricted scenarios, e.g. on-device.
In this work, we re-examine the performance of SSL low-compute pre-training, aiming to diagnose the potential bottleneck. We find that the performance gap can be largely closed by the training recipe introduced in recent self-supervised works (Caron et al., 2020; 2021) that leverages multiple views of the images. Comparing multiple views of the same image is the fundamental operation in the latest self-supervised models. For example, SimCLR (Chen et al., 2020a) learns to distinguish positive from negative views with a contrastive loss. SwAV (Caron et al., 2020) learns to match the cluster assignments of the views.
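As a concrete reference, the contrastive objective behind methods such as SimCLR and MoCo can be sketched as an InfoNCE-style loss. The following is a minimal NumPy sketch; the function name, shapes, and temperature value are illustrative choices, not the exact implementation of either method:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor view.

    anchor, positive: L2-normalized embeddings of two views of the same
    image, shape (d,). negatives: embeddings of views of other images,
    shape (n, d). Pulls the positive pair together while pushing the
    anchor away from all negatives.
    """
    pos_logit = float(anchor @ positive) / temperature   # similarity to the positive view
    neg_logits = (negatives @ anchor) / temperature      # similarities to negative views
    logits = np.concatenate([[pos_logit], neg_logits])
    m = logits.max()
    log_sum = m + np.log(np.exp(logits - m).sum())       # stable log-sum-exp
    # Cross-entropy with the positive as the target class:
    return log_sum - pos_logit
```

The loss approaches zero when the anchor is far more similar to its positive than to any negative, which is what makes the difficulty of the positive pairs (i.e. how the views are sampled) so central to training.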
We revisit the process of creating and comparing image views in prior works and observe that the configurations tuned for high-capacity neural networks, e.g. ResNet50/101/152 (He et al., 2016) and ViT (Dosovitskiy et al., 2021), are sub-optimal for low-capacity models, as they typically lead to matching views across diverse spatial scales and contexts. For over-parameterized networks this may not be as challenging, as verified in our experiments (sec. 4.3), and could even be considered a form of regularization; for lightweight networks, however, it results in performance degradation. This reveals a potentially overlooked issue for self-supervised learning: the trade-off between model complexity and regularization strength. With these findings, we further perform a systematic exploration of which aspects of the view-creation process lead to well-performing self-supervised models in the context of low-compute networks. We benchmark and ablate our new training paradigm in a variety of settings with different model architectures (MobileNet-V2 (Sandler et al., 2018), ResNet18/34/50 (He et al., 2016), ViT-S/Ti (Dosovitskiy et al., 2021)) and different self-supervised signals (MoCo-v2 (Chen et al., 2020c), SwAV (Caron et al., 2020), DINO (Caron et al., 2021)). We report results on downstream visual recognition tasks, e.g. semi-supervised visual recognition, object detection, and instance segmentation. Our method outperforms the previous state-of-the-art approaches despite not relying on knowledge distillation.
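The multi-crop view creation under discussion can be sketched as below. This is a hypothetical helper: the function name is ours, and the scale ranges mirror typical DINO-style defaults (global crops covering 40–100% of the image area, local crops 5–40%), not any method's exact configuration; tuning these ranges to model capacity is precisely the design space studied here.

```python
import random

def sample_view_scales(n_global=2, n_local=6,
                       global_range=(0.4, 1.0), local_range=(0.05, 0.4)):
    """Sample the area fractions used to create multi-crop views.

    Returns (scale, kind) pairs; a real pipeline would feed each scale
    into a RandomResizedCrop-style augmentation, resizing global crops
    to a large resolution (e.g. 224px) and local crops to a small one
    (e.g. 96px). All values here are illustrative.
    """
    views = [(random.uniform(*global_range), "global") for _ in range(n_global)]
    views += [(random.uniform(*local_range), "local") for _ in range(n_local)]
    return views
```

With such ranges, a local crop can cover less than a tenth of the area of a global crop of the same image, which illustrates how demanding the resulting matching problem can become for a low-capacity encoder.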
Our contributions are summarized as follows: (1) We revisit SSL for low-compute pre-training and demonstrate that, contrary to prior belief, efficient networks can learn high-quality visual representations from self-supervised signals alone, without relying on knowledge distillation; (2) We experimentally show that SSL low-compute pre-training benefits from a weaker self-supervised target that learns to match views in more comparable spatial scales and contexts, suggesting a potentially overlooked aspect of self-supervised learning: the pretext supervision should be adapted to the network capacity; (3) Through a systematic exploration of the view sampling mechanism, our new training recipe consistently improves multiple self-supervised learning approaches (e.g. MoCo-v2, SwAV, DINO) on a wide spectrum of low-size networks, including both convolutional neural networks (e.g. MobileNetV2, ResNet18, ResNet34) and vision transformers (e.g. ViT-Ti), even surpassing the state-of-the-art distillation-based approaches.

2. RELATED WORK

Self-supervised learning. The latest self-supervised models typically rely on contrastive learning, consistency regularization, or masked image modeling. Contrastive approaches learn to pull together different views of the same image (positive pairs) and push apart views that correspond to different images (negative pairs). In practice, these methods require a large number of negative samples: SimCLR (Chen et al., 2020a) uses the negative samples coexisting in the current batch, thus requiring large batches, while MoCo (He et al., 2020) maintains a queue of negative samples and a momentum encoder to improve the consistency of the queue. Other attempts show that visual representations can be learned without discriminating between samples, but instead by matching different views of the same image. BYOL (Grill et al., 2020) and DINO (Caron et al., 2021) start from an augmented view of an image and train the online network (a.k.a. student) to predict the representation of another augmented view of the same image obtained from the target network (a.k.a. teacher). In such approaches, the target (teacher) network is updated with a slow-moving average of the online (student) network. SwAV (Caron et al., 2020) introduces an online clustering-based approach that enforces consistency between the cluster assignments produced from different views of the same sample. Most recently, masked token prediction, originally developed for natural language processing, has been shown to be an effective pretext task for vision transformers. BEiT (Bao et al., 2022) adapts BERT (Devlin et al., 2019) for visual recognition by predicting the visual words (Ramesh et al., 2021) of the masked patches. SimMIM (Xie et al., 2022) extends BEiT by reconstructing the masked pixels directly. MAE (He et al., 2022) simplifies the pre-training pipeline by only encoding a small set of visible patches.

Self-supervised learning for efficient networks. Recent works have shown that the performance of self-supervised pre-training for low-compute network architectures trails behind standard supervised pre-training by a large margin, barring self-supervised learning from making an impact on models that are deployed on device. One natural choice to address this problem is incorporating Knowledge Distillation (KD).
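Distillation-based approaches of this kind typically train the small network to mimic the embeddings of a frozen, self-supervised teacher. The following is a deliberately simplified, generic sketch of such a feature-distillation objective, not the loss of any specific method cited above:

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """Generic feature-distillation objective: encourage the student's
    embedding of an image to align (in cosine similarity) with a frozen
    teacher's embedding of the same image.

    Both inputs are 1-D feature vectors; the teacher's vector would come
    from a large pre-trained network and receives no gradient.
    """
    s = student_feat / np.linalg.norm(student_feat)
    t = teacher_feat / np.linalg.norm(teacher_feat)
    return 1.0 - float(s @ t)  # 0 when the embeddings align perfectly
```

Note that minimizing such a loss requires keeping the large teacher in memory during pre-training, which is the overhead that makes these approaches unattractive in resource-restricted scenarios.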




