MASKED IMAGE MODELING WITH DENOISING CONTRAST

Abstract

Throughout the development of self-supervised visual representation learning, from contrastive learning to masked image modeling (MIM), the essence has not changed: how to design proper pretext tasks for vision dictionary look-up. MIM currently dominates this line of research with state-of-the-art performance on vision Transformers (ViTs), where the core is to enhance the patch-level visual context capturing of the network via a denoising auto-encoding mechanism. Rather than tailoring image tokenizers with extra training stages as in previous works, we unleash the great potential of contrastive learning for denoising auto-encoding and introduce a pure MIM method, ConMIM, which produces simple intra-image inter-patch contrastive constraints as the sole learning objectives for masked patch prediction. We further strengthen the denoising mechanism with asymmetric designs, including asymmetric image perturbations and asymmetric model progress rates, to improve network pre-training. ConMIM-pretrained models of various scales achieve competitive results on downstream image classification, semantic segmentation, object detection, and instance segmentation tasks; e.g., on ImageNet-1K classification, we achieve 83.9% top-1 accuracy with ViT-Small and 85.3% with ViT-Base without extra data for pre-training.

1. INTRODUCTION

The great success of self-supervised learning in natural language processing (NLP) tasks, e.g., BERT (Devlin et al., 2019) and GPT (Radford et al., 2018; 2019), has sparked several revolutions in visual representation learning, among which the development of vision dictionary look-up is the most critical. In the age of convolutional neural networks (CNNs) (He et al., 2016; Krizhevsky et al., 2012), prominent works (He et al., 2020; Chen et al., 2020) perform self-supervised learning with a pretext task of instance-level dictionary look-up via contrastive learning, as demonstrated in Figure 1(a). With the advent of vision Transformers (ViTs) (Dosovitskiy et al., 2021), the gap between vision and NLP tasks has been further narrowed by the introduction of patch-level dictionary look-up via masked image modeling in the pioneering work BEiT (Bao et al., 2022) (see Figure 1(b)).

Masked image modeling (Bao et al., 2022), inspired by masked language modeling (Devlin et al., 2019) in NLP, has ushered in a new wave of self-supervised learning with vision Transformers (Dosovitskiy et al., 2021): a portion of vision tokens are randomly masked and then recovered by the Transformer network being trained. Concurrent works (Dong et al., 2021; Li et al., 2022; Wei et al., 2022) design patch-level dictionaries, i.e., image tokenizers, to build proper learning objectives (vision token ids) for masked image modeling. Although they achieve strong results, the off-the-shelf image tokenizers, e.g., the discrete VAE (Ramesh et al., 2021) used in BEiT (Bao et al., 2022), depend on extra training stages and external data knowledge, rendering an inflexible two-stage pre-training paradigm.

We would like to call for a revisit of the superiority of masked image modeling over contrastive learning for self-supervised learning with vision Transformers. Since they are essentially both designed
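The abstract describes ConMIM's objective as intra-image inter-patch contrastive constraints for masked patch prediction. A minimal sketch of one plausible form of such a loss is an InfoNCE term per masked patch, where the positive is the same patch's feature from the full image and the negatives are the other patches of that image; the function name, shapes, and temperature value below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def info_nce_patch_loss(pred, target, mask_idx, tau=0.07):
    """Hypothetical intra-image inter-patch InfoNCE loss.

    pred:     (N, D) features predicted for all N patches from the masked view
    target:   (N, D) features of the same patches from the full (unmasked) image
    mask_idx: indices of masked patches; only these contribute to the loss
    tau:      temperature (0.07 is a common choice, assumed here)
    """
    # L2-normalize so dot products become cosine similarities
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    target = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = pred @ target.T / tau  # (N, N) patch-to-patch similarities
    losses = []
    for i in mask_idx:
        # positive: the matching patch i; negatives: all other patches in the image
        log_prob = logits[i, i] - np.log(np.exp(logits[i]).sum())
        losses.append(-log_prob)
    return float(np.mean(losses))
```

When the predicted features align with the full-image features, the loss approaches zero; misaligned predictions are penalized relative to the in-image negatives, which is what makes this a patch-level dictionary look-up within a single image.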

Code will be available at https://github.com/TencentARC

