BEIT V2: MASKED IMAGE MODELING WITH VECTOR-QUANTIZED VISUAL TOKENIZERS

Anonymous

Abstract

Masked image modeling (MIM) has demonstrated impressive results in self-supervised representation learning by recovering corrupted image patches. However, most existing studies operate on low-level image pixels, which hinders the exploitation of high-level semantics for representation models. In this work, we propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction, providing a systematic way to promote MIM from pixel-level to semantic-level. Specifically, we propose vector-quantized knowledge distillation to train the tokenizer, which discretizes a continuous semantic space into compact codes. We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches. Furthermore, we introduce a patch aggregation strategy which associates discrete image patches to enhance global semantic representation. Experiments on image classification and semantic segmentation show that BEIT V2 outperforms all compared MIM methods. On ImageNet-1K (224 size), the base-size BEIT V2 achieves 85.5% top-1 accuracy for fine-tuning and 80.1% top-1 accuracy for linear probing. The large-size BEIT V2 obtains 87.3% top-1 accuracy for ImageNet-1K (224 size) fine-tuning, and 56.7% mIoU on ADE20K for semantic segmentation. The code can be found in the supplementary materials.

1. INTRODUCTION

Masked image modeling (MIM), which greatly relieves the annotation-hungry issue of vision Transformers, has demonstrated great potential in learning visual representations (Bao et al., 2022; He et al., 2022). Given an image, the pretraining objective of MIM is to recover the masked patches so that rich context information is captured by the representation model. Taking BEiT (Bao et al., 2022) as an example, each image has two views during pretraining, i.e., image patches and visual tokens. The original image is first tokenized into discrete tokens. Randomly sampled image patches are then masked before being fed to vision Transformers. The pretraining objective is to recover the original visual tokens from the corrupted image patches. The pretrained vision encoder can be deployed and fine-tuned on various downstream tasks by appending lightweight task layers.

Existing MIM approaches can be coarsely categorized into three groups according to their reconstruction targets: low-level image elements (e.g., raw pixels; He et al. 2022; Fang et al. 2022; Liu et al. 2022), hand-crafted features (e.g., HOG features; Wei et al. 2021), and visual tokens (Bao et al. 2022; Wang et al. 2022; Dong et al. 2021; El-Nouby et al. 2021; Chen et al. 2022). However, all of these reconstruction targets concern, explicitly or implicitly, low-level image elements while underestimating high-level semantics. In comparison, the masked words in language modeling (Devlin et al., 2019) are all about high-level semantics, which motivates us to tap the potential of MIM by exploiting semantic-aware supervision during pretraining.

In this work, we propose a self-supervised representation learning approach, termed BEIT V2, which aims to improve MIM pretraining by constructing a semantic-aware visual tokenizer. Our approach builds on the simple yet effective BEiT method. The novelty lies in introducing the Vector-Quantized Knowledge Distillation (VQ-KD) algorithm to discretize a semantic space.
The VQ-KD encoder first converts the input image to discrete tokens according to a learnable codebook. The decoder then learns to reconstruct the semantic features encoded by a teacher model, conditioned on the discrete tokens. After training VQ-KD, its encoder is used as a semantic visual tokenizer for BEIT pretraining, where the discrete codes serve as supervision signals.

We conduct self-supervised learning on ImageNet-1K for both base- and large-size vision Transformers, which are evaluated on downstream tasks, e.g., image classification, linear probing, and semantic segmentation. As shown in Figure 1, BEIT V2 outperforms previous self-supervised learning algorithms by a large margin on ImageNet fine-tuning, e.g., improving over BEiT (Bao et al., 2022) by about two points for both ViT-B/16 and ViT-L/16. BEIT V2 also outperforms all compared MIM methods on ImageNet linear probing, while achieving large performance gains on ADE20K for semantic segmentation.
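To make the quantization step concrete, the following is a minimal NumPy sketch of the nearest-neighbor codebook lookup at the heart of such a tokenizer. It is not the paper's implementation: the function name `quantize` and the shapes are illustrative, and we assume ℓ2-normalized features and codes so that nearest-neighbor search coincides with maximum cosine similarity.

```python
import numpy as np

def quantize(features, codebook):
    """Map each patch feature to its nearest codebook entry.

    features: (N, D) array of patch features from the tokenizer encoder.
    codebook: (K, D) array of learnable code embeddings.
    Returns (indices, quantized) with shapes (N,) and (N, D).
    """
    # l2-normalize both sides so that minimizing Euclidean distance
    # is equivalent to maximizing cosine similarity.
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    c = codebook / np.linalg.norm(codebook, axis=-1, keepdims=True)
    # Pairwise squared distances between every feature and every code.
    dist = ((f[:, None, :] - c[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    indices = dist.argmin(axis=-1)
    return indices, c[indices]
```

During VQ-KD training, the decoder would reconstruct the teacher's features from the quantized vectors, while a straight-through estimator passes gradients through the non-differentiable argmin back to the encoder.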


The contributions of this work are summarized as follows:

• We propose vector-quantized knowledge distillation, promoting masked image modeling from pixel-level to semantic-level for self-supervised representation learning.

• We introduce a patch aggregation strategy, which enforces global structure given discrete semantic tokens, and improves the performance of learned representations.

• We conduct extensive experiments on downstream tasks including ImageNet fine-tuning, linear probing, and semantic segmentation. Experimental results show that the proposed approach significantly improves performance across model sizes, training steps, and downstream tasks.

2. METHODOLOGY

BEIT V2 inherits the masked image modeling framework defined by BEIT (Bao et al., 2022), which uses a visual tokenizer to convert each image into a set of discrete visual tokens. The training target is to recover the masked visual tokens, each of which corresponds to an image patch. In Section 2.2, we introduce the vector-quantized knowledge distillation algorithm used to train the visual tokenizer. In Section 2.3, we employ the visual tokenizer for BEIT pretraining with the help of the patch aggregation strategy.
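As a concrete illustration of this objective, the sketch below selects the patch positions to corrupt and the visual-token targets the model must recover. It is a simplification under stated assumptions: BEiT uses blockwise masking, whereas this sketch uses uniform random masking, and the name `mim_targets` and the mask ratio are illustrative.

```python
import numpy as np

def mim_targets(token_ids, mask_ratio=0.4, rng=None):
    """Select masked patch positions and their target visual tokens.

    token_ids: (N,) visual-token id per image patch, from the tokenizer.
    Returns a boolean mask over patches and the token ids to predict.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = token_ids.shape[0]
    num_mask = int(n * mask_ratio)
    mask = np.zeros(n, dtype=bool)
    mask[rng.choice(n, size=num_mask, replace=False)] = True
    # The pretraining loss is cross-entropy between the model's
    # predictions at masked positions and these target token ids.
    return mask, token_ids[mask]
```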



Figure 1: Top-1 fine-tuning accuracy on ImageNet (224 size). Left: ViT-B/16. Right: ViT-L/16.

2.1 IMAGE REPRESENTATION

The vision Transformers (ViTs; Dosovitskiy et al. 2020) are employed as the backbone networks to obtain image representations. The input image x ∈ R^{H×W×C} is reshaped into N = HW/P^2 patches
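The reshaping into N = HW/P^2 patches can be sketched as follows; this is a minimal NumPy version, and the function name `patchify` is illustrative rather than from the paper.

```python
import numpy as np

def patchify(image, patch_size=16):
    """Split an (H, W, C) image into N = H*W/P^2 flattened patches."""
    H, W, C = image.shape
    P = patch_size
    assert H % P == 0 and W % P == 0, "H and W must be divisible by P"
    x = image.reshape(H // P, P, W // P, P, C)
    x = x.transpose(0, 2, 1, 3, 4)      # (H/P, W/P, P, P, C)
    return x.reshape(-1, P * P * C)     # (N, P*P*C)
```

For a 224×224 RGB image with P = 16, this yields N = 196 patches, each flattened to a 768-dimensional vector, matching the ViT-B/16 setting.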

