EFFECTIVE SELF-SUPERVISED PRE-TRAINING ON LOW-COMPUTE NETWORKS WITHOUT DISTILLATION

Abstract

Despite the impressive progress of self-supervised learning (SSL), its applicability to low-compute networks has received limited attention. Reported performance has trailed behind standard supervised pre-training by a large margin, barring self-supervised learning from making an impact on models that are deployed on device. Most prior works attribute this poor performance to the capacity bottleneck of the low-compute networks and opt to bypass the problem through the use of knowledge distillation (KD). In this work, we revisit SSL for efficient neural networks, taking a closer look at what the detrimental factors causing the practical limitations are, and whether they are intrinsic to the self-supervised low-compute setting. We find that, contrary to accepted knowledge, there is no intrinsic architectural bottleneck; we diagnose that the performance bottleneck is related to the model complexity vs. regularization strength trade-off. In particular, we start by empirically observing that the use of local views can have a dramatic impact on the effectiveness of the SSL methods. This hints at view sampling being one of the performance bottlenecks for SSL on low-capacity networks. We hypothesize that the view sampling strategy for large neural networks, which requires matching views across very diverse spatial scales and contexts, is too demanding for low-capacity architectures. We systematize the design of the view sampling mechanism, leading to a new training methodology that consistently improves performance across different SSL methods (e.g. MoCo-v2, SwAV, or DINO), different low-compute networks (convolutional networks, e.g. MobileNetV2, ResNet18, and ResNet34, and vision transformers, e.g. ViT-Ti), and different tasks (linear probing, object detection, instance segmentation, and semi-supervised learning). Our best models establish a new state-of-the-art for SSL on low-compute networks despite not using a KD loss term.

1. INTRODUCTION

In this work, we revisit self-supervised learning (SSL) for low-compute neural networks. Previous research has shown that applying SSL methods to low-compute architectures leads to comparatively poor performance (Fang et al., 2021; Gao et al., 2022; Xu et al., 2022), i.e. there is a large performance gap between fully-supervised and self-supervised pre-training on low-compute networks. For example, the linear probe vs. supervised gap of MoCo-v2 (Chen et al., 2020c) on ImageNet1K is 5.0% for ResNet50 (71.1% vs 76.1%), but 17.3% for ResNet18 (52.5% vs 69.8%) (Fang et al., 2021). More importantly, while SSL pre-training for large models often exceeds supervised pre-training on a variety of downstream tasks, this is not the case for low-compute networks. Most prior works attribute the poor performance to the capacity bottleneck of the low-compute networks and resort to knowledge distillation (Koohpayegani et al., 2020; Fang et al., 2021; Gao et al., 2022; Xu et al., 2022; Navaneet et al., 2021; Bhat et al., 2021). While achieving significant gains over stand-alone SSL models, distillation-based approaches mask the problem rather than resolve it. The extra overhead of large teacher models also makes it difficult to deploy these methods in resource-restricted scenarios, e.g. on-device.

We re-examine the performance of SSL low-compute pre-training, aiming to diagnose the potential bottleneck. We find that the performance gap can be largely closed by the training recipe introduced in recent self-supervised works (Caron et al., 2020; 2021) that leverages multiple views of the images. Comparing multiple views of the same image is the fundamental operation in the latest self-supervised models. For example, SimCLR (Chen et al., 2020a) learns to distinguish positive from negative views with a contrastive loss, and SwAV (Caron et al., 2020) learns to match the cluster assignments of different views of the same image.
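To make the view sampling mechanism concrete, the sketch below illustrates a multi-crop sampling pipeline in the style of SwAV and DINO, where a few large "global" crops are paired with many small "local" crops. The helper name make_multicrop_transform and the specific crop sizes, scale ranges, and view counts are illustrative defaults commonly used for large networks, not the configuration tuned in this work.

```python
# A minimal sketch of multi-crop view sampling (SwAV/DINO style) using
# torchvision. All hyperparameters below are assumed, typical large-network
# defaults; this is not the paper's exact configuration.
import torchvision.transforms as T

def make_multicrop_transform(
    global_size=224, global_scale=(0.4, 1.0),   # large crops, broad context
    local_size=96, local_scale=(0.05, 0.4),     # small crops, narrow context
    n_global=2, n_local=6,
):
    base = T.Compose([
        T.RandomHorizontalFlip(),
        T.ToTensor(),
        T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])
    global_crop = T.Compose([T.RandomResizedCrop(global_size, scale=global_scale), base])
    local_crop = T.Compose([T.RandomResizedCrop(local_size, scale=local_scale), base])

    def transform(image):
        # Returns a list of views of one image: a few global crops covering
        # most of the image plus many local crops of small regions.
        views = [global_crop(image) for _ in range(n_global)]
        views += [local_crop(image) for _ in range(n_local)]
        return views

    return transform
```

The SSL objective then requires the network to produce consistent representations for all views of the same image, from near-full-image global crops down to small local patches; the scale ranges of these crops are precisely the knobs that the view sampling analysis in this work revisits for low-capacity architectures.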

