BEIT V2: MASKED IMAGE MODELING WITH VECTOR-QUANTIZED VISUAL TOKENIZERS

Anonymous

Abstract

Masked image modeling (MIM) has demonstrated impressive results in self-supervised representation learning by recovering corrupted image patches. However, most existing studies operate on low-level image pixels, which hinders the exploitation of high-level semantics for representation models. In this work, we propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction, providing a systematic way to promote MIM from the pixel level to the semantic level. Specifically, we propose vector-quantized knowledge distillation to train the tokenizer, which discretizes a continuous semantic space into compact codes. We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches. Furthermore, we introduce a patch aggregation strategy which associates discrete image patches to enhance the global semantic representation. Experiments on image classification and semantic segmentation show that BEIT V2 outperforms all compared MIM methods. On ImageNet-1K (224 size), the base-size BEIT V2 achieves 85.5% top-1 accuracy for fine-tuning and 80.1% top-1 accuracy for linear probing. The large-size BEIT V2 obtains 87.3% top-1 accuracy for ImageNet-1K (224 size) fine-tuning, and 56.7% mIoU on ADE20K for semantic segmentation. The code can be found in the supplementary materials.

1. INTRODUCTION

Masked image modeling (MIM), which greatly relieves the annotation-hungry issue of vision Transformers, has demonstrated great potential in learning visual representations (Bao et al., 2022; He et al., 2022). Given an image, the pretraining objective of MIM is to recover the masked patches so that rich context information is captured by the representation model. Taking BEiT (Bao et al., 2022) as an example, each image has two views during pretraining, i.e., image patches and visual tokens. The original image is first tokenized to discrete tokens. Randomly sampled image patches are then masked before being fed to vision Transformers. The pretraining objective is to recover the original visual tokens based on the corrupted image patches. The pretrained vision encoder can be deployed and fine-tuned on various downstream tasks by appending lightweight task layers.

Existing MIM approaches can be coarsely categorized into three groups according to the reconstruction targets: low-level image elements (e.g., raw pixels; He et al. 2022; Fang et al. 2022; Liu et al. 2022), hand-crafted features (e.g., HOG features; Wei et al. 2021), and visual tokens (Bao et al. 2022; Wang et al. 2022; Dong et al. 2021; El-Nouby et al. 2021; Chen et al. 2022). However, all these reconstruction targets, explicitly or implicitly, concern low-level image elements while underestimating high-level semantics. In comparison, the masked words in language modeling (Devlin et al., 2019) are all about high-level semantics, which motivates us to tap the potential of MIM by exploiting semantic-aware supervision during pretraining.

In this work, we propose a self-supervised representation learning approach, termed BEIT V2, which aims to improve MIM pretraining by constructing a semantic-aware visual tokenizer. Our approach is developed on the simple yet effective BEIT method. The novelty lies in introducing the vector-quantized knowledge distillation (VQ-KD) algorithm to discretize a semantic space.
The VQ-KD encoder first converts the input image to discrete tokens according to a learnable codebook. The decoder then learns to reconstruct the semantic features encoded by a teacher model, conditioning on the discrete tokens. After training VQ-KD, its encoder is used as a semantic visual tokenizer for BEIT pretraining, where the discrete codes serve as supervision signals. Considering the discreteness of tokens, we further introduce a patch aggregation strategy which explicitly encourages the [CLS] token to associate all patches (Gao & Callan, 2021). Such a strategy addresses the issue that MIM puts patch reconstruction first, which hinders the learning of global image representations. As a result, BEIT V2 improves the capacity of the learned image representations, as supported by the linear probing experiments. Moreover, the enhanced representations also boost the performance on other tasks.


We conduct self-supervised learning on ImageNet-1k for both base- and large-size vision Transformers, which are evaluated on downstream tasks, e.g., image classification, linear probing, and semantic segmentation. As shown in Figure 1, BEIT V2 outperforms previous self-supervised learning algorithms by a large margin on ImageNet fine-tuning, e.g., improving over BEIT (Bao et al., 2022) by about two points for both ViT-B/16 and ViT-L/16. BEIT V2 also outperforms all compared MIM methods on ImageNet linear probing while achieving large performance gains on ADE20k for semantic segmentation. The contributions of this work are summarized as follows:

• We propose vector-quantized knowledge distillation, promoting masked image modeling from the pixel level to the semantic level for self-supervised representation learning.

• We introduce a patch aggregation strategy, which enforces global structure given discrete semantic tokens and improves the performance of learned representations.

• We conduct extensive experiments on downstream tasks, including ImageNet fine-tuning, linear probing, and semantic segmentation. Experimental results show that the proposed approach significantly improves performance across model sizes, training steps, and downstream tasks.

2. METHODOLOGY

BEIT V2 inherits the masked image modeling framework defined by BEIT (Bao et al., 2022) , which uses a visual tokenizer to convert each image to a set of discrete visual tokens. The training target is to recover the masked visual tokens, each of which corresponds to an image patch. In Section 2.2, we introduce a vector-quantized knowledge distillation algorithm, which is used to train a visual tokenizer. In Section 2.3, we employ the visual tokenizer for BEIT pretraining with the help of the patch aggregation strategy.

2.1. IMAGE REPRESENTATION

The vision Transformers (ViTs; Dosovitskiy et al. 2020) are employed as the backbone networks to obtain image representations. The input image x ∈ R^{H×W×C} is reshaped into N = HW/P^2 patches, each flattened to a vector of dimension P^2·C, where (P, P) is the patch size. In experiments, each 224 × 224 image is split into a 14 × 14 grid of image patches, where each patch is 16 × 16. The image patches are then flattened and linearly projected to input embeddings for the Transformer.
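The patch-splitting arithmetic above can be sketched in a few lines of numpy (an illustrative helper, not the paper's code; the subsequent linear projection to input embeddings is omitted):

```python
import numpy as np

def patchify(image, patch_size=16):
    """Split an (H, W, C) image into N = HW / P^2 flattened patches."""
    H, W, C = image.shape
    P = patch_size
    assert H % P == 0 and W % P == 0
    # (H/P, P, W/P, P, C) -> (H/P, W/P, P, P, C) -> (N, P*P*C)
    patches = image.reshape(H // P, P, W // P, P, C).swapaxes(1, 2)
    return patches.reshape(-1, P * P * C)

image = np.zeros((224, 224, 3))
patches = patchify(image)
print(patches.shape)  # (196, 768)
```

For a 224 × 224 RGB image with P = 16, this yields the 14 × 14 = 196 patches of dimension 16 · 16 · 3 = 768 described above.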

2.2. TRAINING VISUAL TOKENIZER

We propose vector-quantized knowledge distillation (VQ-KD) to train the visual tokenizer (Figure 2), where the visual tokenizer and the decoder are the two vital modules. The visual tokenizer maps an image to a sequence of visual tokens, a.k.a. discrete codes. To be specific, an image x is tokenized to z = [z_1, z_2, ..., z_N] ∈ V^{(H/P)×(W/P)}, where the visual vocabulary (a.k.a. codebook) V ∈ R^{K×D} contains K discrete codebook embeddings.

The tokenizer consists of a vision Transformer encoder and a quantizer. The tokenizer first encodes the input image to patch representations. Then, the vector quantizer looks up the nearest neighbor in the codebook for each patch representation h_i. Let {v_1, v_2, ..., v_K} denote the codebook embeddings. For the i-th image patch, its quantized code is calculated as

    z_i = argmin_j ||ℓ2(h_i) − ℓ2(v_j)||_2,    (1)

where j ∈ {1, 2, ..., K} and ℓ2 normalization is used for codebook lookup (Yu et al., 2021). The above distance is equivalent to finding codes according to cosine similarity.

After quantizing the image to visual tokens, we feed the ℓ2-normalized codebook embeddings {ℓ2(v_{z_i})}_{i=1}^N to the decoder, which is also a multi-layer Transformer. The output vectors {o_i}_{i=1}^N aim at reconstructing the semantic features of a teacher model, e.g., DINO (Caron et al., 2021) or CLIP (Radford et al., 2021). Let t_i denote the teacher model's feature vector of the i-th image patch. During training, we maximize the cosine similarity between the decoder output o_i and the teacher guidance t_i.

Because the quantization process (Equation 1) is non-differentiable, the gradients are directly copied from the decoder input to the encoder output (van den Oord et al., 2017), as shown in Figure 2, to back-propagate gradients to the encoder. Intuitively, the quantizer looks up the nearest code for each encoder output, while the gradients of the codebook embeddings indicate useful optimization directions for the encoder.
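The nearest-neighbor lookup of Equation 1 can be sketched as follows (numpy; a toy codebook of K = 512 is used here instead of the paper's K = 8192, and `quantize` is a hypothetical helper name):

```python
import numpy as np

def l2norm(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def quantize(h, codebook):
    """h: (N, D) patch representations; codebook: (K, D). Returns code indices z."""
    hn, vn = l2norm(h), l2norm(codebook)
    # Squared l2 distance between unit vectors equals 2 - 2 * cosine similarity,
    # so nearest-l2 lookup is equivalent to highest-cosine lookup.
    dist = ((hn[:, None, :] - vn[None, :, :]) ** 2).sum(-1)
    return dist.argmin(axis=1)

rng = np.random.default_rng(0)
h = rng.normal(size=(196, 32))   # one representation per image patch
V = rng.normal(size=(512, 32))   # toy codebook; the paper uses K = 8192, D = 32
z = quantize(h, V)
# Equivalent to a cosine-similarity lookup:
assert np.array_equal(z, (l2norm(h) @ l2norm(V).T).argmax(axis=1))
```

The final assertion checks the equivalence between the ℓ2 distance of normalized vectors and cosine similarity stated in the text.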
The training objective of VQ-KD is defined as

    max Σ_{x∈D} Σ_{i=1}^{N} cos(o_i, t_i) − ||sg[ℓ2(h_i)] − ℓ2(v_{z_i})||_2^2 − ||ℓ2(h_i) − sg[ℓ2(v_{z_i})]||_2^2,    (2)

where sg[·] stands for the stop-gradient operator, which is an identity at the forward pass while having zero gradients during the backward pass, and D represents the image data used for tokenizer training. As Equation 1 shows, we compute the ℓ2-normalized distance to find the nearest code, while reducing the dimension of the codebook embedding space to 32-d. The low-dimensional codebook embeddings are mapped back to a higher-dimensional space before being fed to the decoder. In addition, exponential moving average (van den Oord et al., 2017) is employed to update the codebook embeddings, which tends to be more stable for VQ-KD training.
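As a numeric illustration of the objective's forward value (a sketch only: sg[·] is the identity in the forward pass, so the two quantization terms coincide numerically here and differ only in which module receives gradients during the backward pass):

```python
import numpy as np

def l2norm(x, eps=1e-8):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def vqkd_objective(o, t, h, v):
    """Forward value of the VQ-KD objective for one image.
    o, t: (N, D) decoder outputs and teacher features;
    h, v: (N, D) encoder outputs and their selected codebook embeddings.
    sg[.] is the identity in the forward pass, so both stop-gradient
    terms evaluate to the same number and are folded into one."""
    cos = (l2norm(o) * l2norm(t)).sum(-1)          # cosine similarity per patch
    dist = ((l2norm(h) - l2norm(v)) ** 2).sum(-1)  # squared l2 of unit vectors
    return (cos - 2.0 * dist).sum()

# Perfect reconstruction (o = t) and a perfect codebook fit (h = v)
# give the maximum value N (one unit of cosine similarity per patch):
N, D = 4, 8
a = np.ones((N, D))
b = np.arange(N * D, dtype=float).reshape(N, D) + 1.0
print(round(float(vqkd_objective(a, a, b, b)), 6))  # 4.0
```

This only evaluates the loss surface; the actual training additionally relies on the straight-through gradient copy and EMA codebook updates described above.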

2.3. PRETRAINING BEIT V2

We follow the MIM setup in BEIT (Bao et al., 2022) to pretrain vision Transformers for image representations. Given an input image x, around 40% of the image patches are block-wisely chosen and masked. The set of masked positions is denoted as M. A shared learnable embedding e_[M] is then used to replace the original patch embeddings x^p_i if i ∈ M:

    x^M_i = δ(i ∈ M) ⊙ e_[M] + (1 − δ(i ∈ M)) ⊙ x^p_i,

where δ(·) is the indicator function. Subsequently, we prepend a learnable [CLS] token to the input, i.e., [e_CLS, {x^M_i}_{i=1}^N], and feed them to the vision Transformer. The final encoding vectors are denoted as {h_i}_{i=0}^N, where h_0 corresponds to the [CLS] token.

Next, we instantiate the MIM head as a simple fully-connected layer and use it to predict the visual tokens of the masked positions based on the corrupted image x^M. For each masked position i ∈ M, a softmax classifier predicts the visual token, p(z_i | h_i) = softmax_{z_i}(W_c h_i + b_c), where W_c and b_c are the weights and biases of the MIM head, respectively. The visual tokens are obtained by the tokenizer trained in Section 2.2, which provides supervision for the MIM self-supervised learning procedure. The training loss of MIM is defined as

    L_MIM = − Σ_{x∈D} Σ_{i∈M} log p(z_i | x^M_i),    (3)

where z_i denotes the visual tokens of the original image, and D the pretraining images. Notice that the number of visual tokens equals the number of image patches in this work.

Pretraining global representation. Inspired by Gao & Callan (2021), we pretrain the [CLS] token for global image representation. The goal is to mitigate the discrepancy between patch-level pretraining and image-level representation aggregation. As illustrated in Figure 3, a representation bottleneck is constructed to encourage the [CLS] token to gather as much information as possible. For an L-layer Transformer, let {h^l_i}_{i=1}^N denote the l-th layer's output vectors, where l ∈ {1, 2, ..., L}.
To pretrain the last layer's [CLS] token h^L_CLS, we concatenate it with the intermediate l-th layer's patch vectors {h^l_i}_{i=1}^N, i.e., S = [h^L_CLS, h^l_1, ..., h^l_N]. We then feed S to a shallow (e.g., two-layer) Transformer decoder and conduct masked prediction again, i.e., p(z | S) = softmax_z(W_c S + b_c). Notice that the parameters of the two MIM heads are shared, and this MIM loss is also computed at the masked positions, as in Equation 3. Accordingly, the final training loss is the summation of two terms, i.e., the original MIM loss at the L-th layer and the MIM loss of the shallow Transformer decoder. The overall framework is illustrated in Appendix C.

Intuitively, the model favors pushing global information to h^L_CLS, because it tends to fully utilize the parameters from the (l+1)-th to the L-th layer to decrease the additional MIM loss. This information-flow bottleneck encourages the [CLS] token to form more reliable global representations than its untrained counterpart. Moreover, the enhanced representations also facilitate various downstream tasks. Notice that the newly added shallow decoder is only used to pretrain the [CLS] token and is discarded after pretraining.
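The masking, MIM loss, and patch aggregation steps above can be sketched end-to-end with toy tensors (a numpy sketch: random masking stands in for the block-wise strategy, and all "learned" quantities are random placeholders, not trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, K, L = 196, 768, 8192, 12   # patches, hidden dim, vocab size, layers

# --- Masking: replace ~40% of patch embeddings with a shared mask embedding.
patch_emb = rng.normal(size=(N, D))               # x^p_i
mask_emb = rng.normal(size=(D,))                  # shared learnable e_[M]
masked = rng.choice(N, size=78, replace=False)    # block-wise in the paper
delta = np.zeros((N, 1)); delta[masked] = 1.0
x_m = delta * mask_emb + (1 - delta) * patch_emb  # x^M_i

# --- MIM loss: cross-entropy against tokenizer codes, only at masked positions.
def log_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

def mim_loss(logits, z, masked):
    return -log_softmax(logits)[masked, z[masked]].sum()

z = rng.integers(0, K, size=N)              # visual tokens from the tokenizer
logits = rng.normal(size=(N, K))            # MIM-head outputs W_c h_i + b_c
loss = mim_loss(logits, z, masked)          # summed over the 78 masked positions

# --- Patch aggregation: concatenate the final-layer [CLS] vector with the
# l-th layer's patch vectors; the shallow decoder runs masked prediction on S.
layers = [rng.normal(size=(N + 1, D)) for _ in range(L)]  # index 0 is [CLS]
l = 9                                       # intermediate layer for ViT-B/16
S = np.concatenate([layers[-1][:1], layers[l - 1][1:]], axis=0)
print(S.shape)  # (197, 768)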

3. EXPERIMENTS

The pretrained models are evaluated on image classification and semantic segmentation tasks. For image classification, the models are trained on ImageNet-1K (Russakovsky et al., 2015) and evaluated by (1) top-1 accuracy of fine-tuning and (2) top-1 accuracy of linear probing (training only the classification head). For semantic segmentation, experiments are conducted on the ADE20K dataset (Zhou et al., 2019) and the performance is evaluated using the mIoU protocol.
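For reference, a minimal sketch of the mIoU metric on toy label maps (the actual ADE20K protocol, e.g. in standard segmentation toolkits, accumulates per-class intersections and unions over the full validation set before averaging):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union over classes present in pred or gt."""
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        if not p.any() and not g.any():
            continue  # class absent from both maps; skip it
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 2]])
gt   = np.array([[0, 1], [1, 2]])
print(round(mean_iou(pred, gt, num_classes=3), 3))  # 0.667
```

Here class 0 and class 1 each have IoU 1/2 and class 2 has IoU 1, so the mean is 2/3.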

3.1. PRETRAINING SETUP

Visual tokenizer training. We instantiate the visual tokenizer of VQ-KD as ViT-B/16 for both base- and large-size BEIT V2 pretraining. The decoder network is a three-layer standard Transformer, which has the same dimension and number of attention heads as the tokenizer encoder. The OpenAI CLIP-B/16 (Radford et al., 2021) is employed as the teacher model, and we train VQ-KD on ImageNet-1k with 224×224 resolution. Notice that we use the same base-size teacher to train the visual tokenizer for both base- and large-size pretraining. The code size K is set to 8192 and the code dimension D to 32 by default. Refer to Appendix D for more training details.

Masked image modeling. We follow the settings used in BEiT (Bao et al., 2022) pretraining and use ImageNet-1K without labels as the pretraining data for self-supervised learning. The input image resolution is set to 224×224 during pretraining. The pretrained base- and large-size vision Transformers (Dosovitskiy et al., 2020) with 16×16 patch size are denoted as ViT-B/16 and ViT-L/16, respectively. For the patch aggregation strategy, we set l = 9 for ViT-B/16 and l = 21 for ViT-L/16, with a head depth of 2 by default. A block-wise masking mechanism is adopted with a mask ratio of 40% (i.e., about 75 image patches). More pretraining details can be found in Appendix E.

3.2. IMAGE CLASSIFICATION

Both the fine-tuning accuracy and linear probing accuracy are evaluated on ImageNet-1k by default. The models are also evaluated on several ImageNet variants to demonstrate their generalization ability.

Fine-tuning setup. We follow the protocol proposed in BEiT (Bao et al., 2022) to fine-tune the pretrained BEIT V2 model (see Appendix F for more details). In Table 1, we report the top-1 fine-tuning accuracy and compare BEIT V2 with recent MIM methods. Base-size BEIT V2 with a 300-epoch pretraining schedule reaches 85.0% top-1 accuracy, outperforming BEIT, CAE, SplitMask and PeCo by 2.1%, 1.4%, 1.4% and 0.9%, respectively. Compared with masked distillation methods such as MVP, BEIT V2 also shows superiority. Furthermore, with a longer pretraining schedule, BEIT V2 achieves 85.5% top-1 accuracy, setting a new state of the art on ImageNet-1K among self-supervised methods. Meanwhile, BEIT V2 using ViT-L/16 with 300 epochs reaches 86.6% top-1 accuracy, which is comparable to data2vec with 1600 epochs. A longer pretraining schedule further boosts the performance to 87.3%. Following BEIT, we add an intermediate fine-tuning phase between the pretraining stage and the fine-tuning stage; only the intermediate fine-tuning phase uses the ImageNet-21k dataset.

Table 1: Fine-tuning results of image classification and semantic segmentation on ImageNet-1K and ADE20k. UperNet (Xiao et al., 2018) is used as the task layer for semantic segmentation with single-scale (512 size) input.

Table 2: Top-1 linear probing accuracy on ImageNet-1k.

    Methods                        Linear Probe
    BEIT (Bao et al., 2022)        56.7
    CAE (Chen et al., 2022)        64.1
    MAE (He et al., 2022)          67.8
    MVP (Wei et al., 2022)         75.4
    MoCo v3 (Chen et al., 2021)    76.7
    BEIT V2 (ours)                 80.1

Table 3: Robustness evaluation on three ImageNet variants (Hendrycks et al., 2021b;a; Wang et al., 2019).

As shown in Table 1, we find that intermediate fine-tuning achieves about 1% performance gain on image classification for both base- and large-size models. Refer to Appendix B for more results of intermediate fine-tuning.

Linear probing. Keeping the backbone model frozen and training a linear classification head atop the image-level representations, linear probing has been a widely considered measure for self-supervised learning. We average the patch tokens as the global representation for the models without [CLS] token pretraining.

Robustness evaluation. We evaluate the robustness of BEIT V2 on various ImageNet validation sets, i.e., ImageNet-Adversarial (Hendrycks et al., 2021b), ImageNet-Rendition (Hendrycks et al., 2021a) and ImageNet-Sketch (Wang et al., 2019). As shown in Table 3, compared with MAE (He et al., 2022), BEIT V2 achieves dramatic gains across datasets, demonstrating the superiority of the proposed method in terms of model generalization.

3.3. SEMANTIC SEGMENTATION

Semantic segmentation is a dense prediction task, which generates a class label for each pixel of the input image. Following the setting proposed in BEIT (Bao et al., 2022), we conduct experiments on the ADE20K benchmark (Zhou et al., 2019), which includes 25K images and 150 semantic categories. We use the UperNet (Xiao et al., 2018) task layer and fine-tune the model for 160K iterations with an input resolution of 512 × 512. Refer to Appendix G for details. Table 1 shows that BEIT V2 significantly outperforms previous self-supervised methods. Moreover, using the ViT-L/16 model, the performance reaches 56.7 mIoU, which builds a new state of the art for masked image modeling on ADE20k.

3.4. ANALYSIS

Visual tokenizer training. We investigate the impact of VQ-KD on BEIT V2 in terms of model architecture and codebook size, and report the results in Table 4. ViT-B/16 without the patch aggregation strategy is used as the baseline model, which is pretrained for 300 epochs. As shown in Table 4, a deeper VQ-KD decoder obtains better reconstruction but lower codebook usage and downstream task performance. Reducing the dimension for codebook lookup improves codebook utilization (Yu et al., 2021).

Patch aggregation strategy. Table 5 presents the ablation studies of the patch aggregation strategy. The shallower head (i.e., 1/2-layer) performs better than the deeper head (i.e., 3-layer), suggesting that the shallower head pays more attention to the input [CLS] token. Moreover, the proposed method outperforms the baseline variant without the patch aggregation strategy. The improvement in linear probing indicates better image-level representations. In addition, the results indicate that sharing the MIM head improves downstream performance.

VQ-KD targets. In Table 6, we report the results when VQ-KD is trained under the supervision of DINO (Caron et al., 2021) and of CLIP (Radford et al., 2021). DINO is pretrained solely on ImageNet-1k, while CLIP is pretrained on an in-house dataset of 400M image-text pairs. We also directly fine-tune the official base-size checkpoints and report the results in Table 6. When using DINO as the teacher model, BEIT V2 reaches 84.4% on ImageNet and 49.2% on ADE20k, outperforming DINO itself by a large margin. When using CLIP as the teacher model, BEIT V2 obtains consistent improvements, demonstrating the scalability of the proposed VQ-KD. In addition, we directly fine-tune the VQ-KD encoder on ImageNet. The results show that the transfer performance of the VQ-KD encoder is lower than that of the teacher model.
After performing masked image modeling, the pretrained model outperforms both the teacher model and the visual tokenizer encoder, demonstrating the superiority of the proposed method for self-supervised learning.

Visualization of codebook. We utilize the proposed VQ-KD to compute discrete codes for the ImageNet-1k validation set. Image patches are grouped according to their corresponding codes. Figure 4 shows that the grouped image patches represent explicit semantics. For instance, the image patches corresponding to code 7856 are about "eyes" of humans, cats, dogs, fish and snakes. Refer to Appendix A for more examples. The introduction of the codebook and feature quantization reduces sensitivity to changes in image details while facilitating the exploitation of high-level semantics for representation models. VQ-KD compresses and quantizes continuous feature values into a codebook, which constructs a discrete semantic space. The dimensionality of this semantic space is significantly lower than that of the original continuous feature space, which reduces the difficulty of masked patch reconstruction and alleviates the curse of dimensionality in the pretraining phase.

The datasets (e.g., ImageNet and ADE20k) are derived from publicly available data. The code can be found in the supplementary materials. We will also provide pretrained checkpoints to reproduce the numbers.



Figure 1: Top-1 fine-tuning accuracy on ImageNet (224 size). Left: ViT-B/16. Right: ViT-L/16.

Figure 3: The MIM framework equipped with patch aggregation. The pretraining loss is the summation of L MIM and L c MIM . The loss term L c MIM explicitly encourages the [CLS] token to aggregate patch information to global representations.

Details of VQ-KD training, BEIT V2 pretraining, and fine-tuning recipes are given in Appendices D, E, F and G. The models used for VQ-KD training are from the official repositories https://github.com/facebookresearch/dino and https://github.com/openai/CLIP.


Table 4: Ablation studies of VQ-KD settings. "Base&1x768x12" denotes that the encoder network is ViT-Base while the decoder is a Transformer with depth 1, dimension 768, and 12 heads. "Reconst. Loss" is the reconstruction loss of VQ-KD. Reconstruction loss and codebook usage are measured on the validation set. After 300 epochs of pretraining, we report the top-1 fine-tuning accuracy and linear probing accuracy on ImageNet-1k, and mIoU on ADE20k. The default setting is highlighted in gray.

Otherwise, we consider the [CLS] token as the global representation. Table 2 presents the top-1 accuracy for linear probing and compares BEIT V2 with recent methods including BEIT, CAE, MAE, MVP and MoCo v3. All the compared methods are based on ViT-B/16 and pretrained for 300 epochs, except MAE which is pretrained for 1600 epochs. BEIT V2 outperforms BEIT, CAE and MVP by 23.4%, 16.0% and 4.7%, respectively. BEIT V2 also outperforms MoCo v3, which learns a global representation in a contrastive learning fashion. These comparisons indicate that the representation models learned by BEIT V2 enjoy higher adaptation capability.

Table 5: Ablation studies of the patch aggregation strategy. "l-th Layer" denotes patch tokens from the l-th layer of the backbone. "Head Depth" is the depth of the patch aggregation head. "Shared MIM Head" denotes whether the MIM head parameters are shared. Default settings are in gray.

Table 6: Comparisons between different VQ-KD targets. We also report the fine-tuning results of the VQ-KD target models.


4. RELATED WORK

Visual tokenizer. VQ-VAE (van den Oord et al., 2017) converts an image into a sequence of discrete codes and then reconstructs the input image based on the discrete codes. DALL-E (Ramesh et al., 2021) uses the Gumbel-softmax relaxation for quantization instead of the nearest-neighbor lookup in VQ-VAE. VQGAN (Esser et al., 2021) and ViT-VQGAN (Yu et al., 2021) introduce Transformer blocks to train a better autoencoder that maintains fine details with adversarial and perceptual losses. Moreover, ViT-VQGAN proposes factorized and ℓ2-normalized codes for codebook learning. In comparison, the proposed VQ-KD aims at reconstructing semantic knowledge from the teacher rather than original pixels, so we can construct a highly compact semantic codebook for MIM.

Masked image modeling. MIM has achieved great success in language tasks (Devlin et al., 2019). Motivated by this, BEIT (Bao et al., 2022) migrated masked prediction to computer vision by recovering discrete visual tokens (Ramesh et al., 2021). The prediction targets for MIM have been explored by many recent works. MAE (He et al., 2022) treated MIM as a denoising pixel-level reconstruction task. Knowledge distillation (Wei et al., 2021; 2022) and self-distillation (Zhou et al., 2022; Baevski et al., 2022) approaches proposed to mimic the features provided by a teacher at the masked positions. PeCo (Dong et al., 2021) regarded MoCo v3 (Chen et al., 2021) as the perceptual model in VQGAN training (Esser et al., 2021) to pursue a better tokenizer for BEIT pretraining. Despite the progress, most existing studies still operate on low-level image pixels; this work explores how to promote masked image modeling from the pixel level to the semantic level.

5. CONCLUSION

We proposed vector-quantized knowledge distillation (VQ-KD) to train a visual tokenizer for vision Transformer pretraining. VQ-KD discretizes a continuous semantic space that provides supervision for masked image modeling, rather than relying on image pixels. The semantic visual tokenizer greatly improves BEIT pretraining and significantly boosts the transfer performance on downstream tasks, such as image classification and semantic segmentation. Moreover, a patch aggregation mechanism was introduced to explicitly encourage the model to produce global image representations, narrowing the gap between patch-level pretraining and image-level representation aggregation. In the future, we would like to learn a universal tokenizer that projects words and images into the same vocabulary, so that we can conduct masked prediction for vision-language pretraining.

A VISUALIZATION OF CODEBOOK

It is observed that a discrete code tends to represent explicit semantics (Section 3.4). In Figure 5 (upper), we show image examples corresponding to a given discrete code. One can see that discrete codes ignore image details, such as color, illumination, rotation and scale. In the lower part of Figure 5, we also show some patches that mismatch the semantic concepts. Taking the fish (the first image in the last row) as an instance, VQ-KD misclassifies a spot on the fish body as the eye concept due to local structural similarity.

B COMPARISON WITH LARGE-SCALE SUPERVISED PRETRAINING

We report the performance of using ImageNet-1k for pretraining in Table 1. To show the data scalability of BEIT V2, we conduct intermediate fine-tuning on ImageNet-21k and final fine-tuning on ImageNet-1k, using the 1600-epoch pretraining models in Table 1. From Table 7, BEIT V2 using ViT-L/16 with 384 × 384 input resolution achieves 89.0% top-1 accuracy, which even outperforms ViT-H/14 pretrained on the labeled Google JFT-3B dataset by 0.5%. This significant performance gain indicates the data efficiency and superiority of the proposed BEIT V2.

C OVERALL FRAMEWORK FOR BEIT V2

We show the tokenizer training part and BEIT V2 pretraining part in Figure 2 and Figure 3 , respectively. In addition, we present the whole pretraining process in Figure 6 . 

