IMPROVING SELF-SUPERVISED PRE-TRAINING VIA A FULLY-EXPLORED MASKED LANGUAGE MODEL

Abstract

The Masked Language Model (MLM) framework has been widely adopted for self-supervised language pre-training. In this paper, we argue that randomly sampled masks in MLM lead to undesirably large gradient variance. We theoretically quantify this variance by correlating the gradient covariance with the Hamming distance between two different masks (given a certain text sequence). To reduce the variance due to the sampling of masks, we propose a fully-explored masking strategy, in which a text sequence is divided into a certain number of non-overlapping segments, and the tokens within one segment are masked for each training sample. We prove, from a theoretical perspective, that the gradients derived from this new masking schema have a smaller variance and enable more efficient self-supervised training. We conduct extensive experiments on both continual pre-training and general pre-training from scratch. Empirical results confirm that the new masking strategy consistently outperforms standard random masking. Detailed efficiency analysis and ablation studies further validate the advantages of our fully-explored masking strategy under the MLM framework.

1. INTRODUCTION

Large-scale pre-trained language models have attracted tremendous attention recently due to their impressive empirical performance on a wide variety of NLP tasks. These models typically abstract semantic information from massive unlabeled corpora in a self-supervised manner. The masked language model (MLM) has been widely utilized as the objective for pre-training language models. In the MLM setup, a certain percentage of the input tokens are masked out, and the model learns useful semantic information by predicting those missing tokens. Previous work found that the specific masking strategy employed during pre-training plays a vital role in the effectiveness of the MLM framework (Liu et al., 2019; Joshi et al., 2019; Sun et al., 2019). Specifically, Sun et al. (2019) introduce entity-level and phrase-level masking strategies, which incorporate prior knowledge about a sentence into its masking choice. Moreover, Joshi et al. (2019) propose to mask out random contiguous spans, instead of individual tokens, since spans serve as more challenging targets for the MLM objective. Although effective, these masking strategies share an issue associated with their random sampling procedure. Concretely, the difficulty of predicting each masked token varies and is highly dependent on the choice of masked tokens. For example, predicting stop words such as "the" or "a" tends to be easier than predicting nouns or rare words. As a result, for the same input sentence, randomly sampling input tokens/spans, as a typical masking recipe, results in undesirably large variance when estimating the gradients. It has been widely demonstrated that large gradient variance typically hurts the training efficiency of stochastic gradient optimization algorithms (Zhang & Xiao, 2019; Xiao & Zhang, 2014; Johnson & Zhang, 2013).
Therefore, we advocate that obtaining gradients with smaller variance has the potential to enable more sample-efficient learning and thus accelerate the self-supervised learning stage. In this paper, we start by introducing a theoretical framework to quantify the variance of the estimated training gradients. The basic idea is to decompose the total gradient variance into two terms, where the first term is induced by the data sampling process and the second relates to the sampling procedure of masked tokens. Theoretical analysis of the second variance term demonstrates that it can be minimized by reducing the gradient covariance between two masked sequences. Furthermore, we conduct an empirical investigation of the correlation between the covariance of the gradients computed from two masked sequences and the Hamming distance between those sequences. We observe that the gradients' covariance tends to decrease monotonically w.r.t. the sequences' Hamming distance. Inspired by these observations, we propose a fully-explored masking strategy, which maximizes the Hamming distance between any two sampled masks on a fixed text sequence. First, a text sequence is randomly divided into multiple non-overlapping segments, where each token (e.g., subword, word or span) belongs to exactly one of them. While the model processes this input, several different training samples are constructed by masking out one of these segments (and leaving the others as context). In this manner, the gradient w.r.t. this input sequence can be calculated by averaging the gradients across the multiple training samples produced by the same input sequence. We further verify, under our theoretical framework, that the gradients obtained with such a scheme tend to have smaller variance, and thus can improve the efficiency of the pre-training process.
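As an illustration, the segment-and-mask procedure described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the function name `fully_explored_masks`, the random equal-size partition of positions, and the `[MASK]` placeholder are our own assumptions. Because the segments are disjoint, any two of the resulting masks share no masked position, so their Hamming distance is maximal.

```python
import random

def fully_explored_masks(tokens, num_segments, mask_token="[MASK]"):
    """Build num_segments training samples from one sequence by randomly
    partitioning token positions into non-overlapping segments and
    masking exactly one segment per sample (hypothetical sketch)."""
    positions = list(range(len(tokens)))
    random.shuffle(positions)
    # Distribute the shuffled positions into disjoint segments.
    segments = [positions[i::num_segments] for i in range(num_segments)]
    samples = []
    for seg in segments:
        masked = list(tokens)
        for p in seg:
            masked[p] = mask_token
        # Each sample masks one segment; the prediction targets are the
        # original tokens at the masked positions.
        samples.append((masked, {p: tokens[p] for p in seg}))
    return samples
```

In training, the gradient for the input sequence would then be averaged over the `num_segments` samples returned for that sequence.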
We evaluate the proposed masking strategy on both continual pre-training (Gururangan et al., 2020) and from-scratch pre-training scenarios. Specifically, Computer Science (CS) and News domain corpora (Gururangan et al., 2020) are leveraged to continually pre-train RoBERTa models, which are then evaluated by fine-tuning on downstream tasks of the corresponding domain. We demonstrate that the proposed fully-explored masking strategy leads to pre-trained models with stronger generalization ability. Even with only a subset of the pre-training corpus utilized in Gururangan et al. (2020), our model consistently outperforms the reported baselines across the four natural language understanding tasks considered. We also show the effectiveness of our method when pre-training language models from scratch. Moreover, a comparison between the fully-explored and standard masking strategies in terms of their impact on learning efficiency further validates the advantages of the proposed method. Extensive ablation studies are further conducted to explore the robustness of the proposed masking scheme.

2. RELATED WORK

Self-supervised Language Pre-training Self-supervised learning has been demonstrated to be a powerful paradigm for natural language pre-training in recent years. Significant research efforts have been devoted to improving different aspects of the pre-training recipe, including the training objective (Lewis et al., 2019; Clark et al., 2019; Bao et al., 2020; Liu et al., 2019), architecture design (Yang et al., 2019; He et al., 2020), the incorporation of external knowledge (Sun et al., 2019; Zhang et al., 2019), etc. The idea of self-supervised learning has also been extended to generation tasks with great results (Song et al., 2019; Dong et al., 2019). Although impressive empirical performance has been achieved, relatively little attention has been paid to the efficiency of the pre-training stage. ELECTRA (Clark et al., 2019) introduced a discriminative objective that is defined over all input tokens. Besides, it has been shown that incorporating language structures (Wang et al., 2019) or external knowledge (Sun et al., 2019; Zhang et al., 2019) into pre-training can also help language models better abstract useful information from unlabeled samples. In this work, we approach the training efficiency issue from a different perspective, arguing that the masking strategy, as an essential component of the MLM framework, plays a vital role in efficient pre-training. Notably, our fully-explored masking strategy can be easily combined with different model architectures for MLM training. Moreover, the proposed approach can be flexibly integrated with various tokenization choices, such as subword, word or span (Joshi et al., 2019). A concurrent work (Chen et al., 2020) shares a similar motivation, although it adopts a different solution whose mask generation requires additional computation, and it is outperformed by the proposed fully-explored masking (see Table 2).

Domain-specific Continual Pre-training

The models mentioned above typically abstract semantic information from massive, heterogeneous corpora. Consequently, these models are not tailored to any specific domain, which tends to be suboptimal when there is a domain of interest beforehand. Gururangan et al. (2020) showed that continual pre-training (on top of general-purpose LMs) with in-domain unlabeled data can bring further gains on downstream tasks of that particular domain. One challenge inherent in continual pre-training is that in-domain data are usually much more limited than domain-invariant corpora. As a result, how to efficiently digest information from unlabeled corpora is especially critical when adapting large pre-trained language models to specific domains.

