ITERATIVE TASK-ADAPTIVE PRETRAINING FOR UNSUPERVISED WORD ALIGNMENT

Abstract

How to establish a closer relationship between pretraining and the downstream task is a valuable question. We argue that task-adaptive pretraining should not be performed only before the task. For the word alignment task, we propose an iterative self-supervised task-adaptive pretraining paradigm that ties word alignment and self-supervised pretraining together through code-switching data augmentation. Given the aligned word pairs predicted from multilingual contextualized word embeddings, we use these pairs and the original parallel sentences to synthesize code-switched sentences. Multilingual models are then continually fine-tuned on the augmented code-switched dataset, and the fine-tuned models are used to produce new aligned pairs. This process is executed iteratively. Our paradigm is applicable to almost all unsupervised word alignment methods based on multilingual pre-trained LMs and requires no gold-labeled data, extra parallel data, or any other external resources. Experimental results on six language pairs demonstrate that our paradigm consistently improves the baseline methods. Compared to resource-rich languages, the improvements on relatively low-resource or morphologically different languages are more significant. For example, the AER scores of three different alignment methods based on XLM-R are reduced by about 4 ∼ 5 percentage points on the language pair En-Hi.

1. INTRODUCTION

Although pre-trained language models (PTLMs) (Devlin et al., 2019b; Conneau et al., 2020) trained with massive textual and computational resources have achieved high performance on natural language processing tasks, there can be a distributional mismatch between the pretraining and target-domain corpora. To tackle domain discrepancies, domain-adaptive pretraining on a large corpus in the domain of the downstream task is commonly employed, as in BioBERT (Lee et al., 2020). However, this approach requires large corpora in the target domain and entails a high computational cost. Gururangan et al. (2020) propose task-adaptive pretraining and explore the benefits of continued pretraining on data from the task distribution. Other works (Gu et al., 2020; Karouzos et al., 2021; Nishida et al., 2021) also focus on establishing a closer relationship between pretraining and the downstream task. For example, Gu et al. (2020) add a task-guided pre-training stage with selective masking between general pre-training and fine-tuning, and Karouzos et al. (2021) simultaneously minimize a task-specific loss on the source data and a language modeling loss on the target data during fine-tuning. However, these methods generally follow a fixed paradigm: task-adaptive pretraining, then task training. There is an obvious lack of interactive feedback: can the output of the task be used to improve pretraining? See Figure 1. We find that an iterative self-supervised task-adaptive pretraining paradigm can be designed for the unsupervised word alignment task; in the following, we introduce in detail how this paradigm is designed. Continued pretraining of a LM on the unlabeled data of a given task (task-adaptive pretraining) (Gururangan et al., 2020) has been shown to benefit task performance. However, simply pre-training LMs with MLM or TLM on monolingual or parallel sentences is not closely integrated with the word alignment task.
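To make the alignment-driven augmentation concrete, the code-switching synthesis can be sketched as follows: given a parallel sentence pair and the word alignments predicted by the current model, a fraction of source tokens is replaced by their aligned target-language counterparts. The function name, the sampling ratio, and the toy example below are illustrative assumptions, not the paper's actual implementation.

```python
import random

def code_switch(src_tokens, tgt_tokens, aligned_pairs, ratio=0.3, seed=0):
    """Synthesize a code-switched sentence by replacing a fraction of
    source tokens with their aligned target-language counterparts.

    aligned_pairs: list of (src_idx, tgt_idx) index pairs predicted by
    the word aligner (indices into src_tokens / tgt_tokens).
    Hypothetical sketch; ratio and sampling strategy are assumptions.
    """
    rng = random.Random(seed)
    switched = list(src_tokens)
    # sample a subset of alignment links and switch those source words
    n_switch = max(1, int(len(aligned_pairs) * ratio))
    for s, t in rng.sample(aligned_pairs, min(n_switch, len(aligned_pairs))):
        switched[s] = tgt_tokens[t]
    return switched

# toy En-De parallel pair with identity alignments
src = ["the", "cat", "sleeps"]
tgt = ["die", "Katze", "schläft"]
pairs = [(0, 0), (1, 1), (2, 2)]
print(code_switch(src, tgt, pairs, ratio=0.5))
```

Fine-tuning with MLM on such mixed-language sentences forces the model to predict a word from its translation's context, which is exactly the cross-lingual correspondence that word alignment exploits.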
Based on the assumption that a closer interaction between task pretraining and the task itself can improve performance, we propose an iterative self-supervised continued pretraining paradigm, constantly pushing pre-trained LMs toward the word alignment task.
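The iterative paradigm described above (align, augment with code-switching, continue pretraining, re-align) can be sketched as a simple loop. The function names and the callback-based structure below are hypothetical placeholders for the aligner, the augmenter, and the MLM fine-tuning routine, not the paper's actual code.

```python
def iterative_tap(model, parallel_data, n_rounds, align_fn, augment_fn, finetune_fn):
    """Iterative self-supervised task-adaptive pretraining (sketch).

    align_fn(model, data)      -> predicted word-alignment pairs
    augment_fn(data, pairs)    -> code-switched sentences synthesized from
                                  the parallel data and predicted pairs
    finetune_fn(model, data)   -> model continually fine-tuned with MLM
    All three callables are placeholders for the user's own components.
    """
    for _ in range(n_rounds):
        pairs = align_fn(model, parallel_data)        # 1) predict alignments
        augmented = augment_fn(parallel_data, pairs)  # 2) synthesize code-switched data
        model = finetune_fn(model, augmented)         # 3) continued pretraining
    return model                                      # 4) loop: re-align with new model
```

Because the aligner and the pretraining objective share the same model, each round's improved alignments yield better code-switched data, which in turn yields a model better adapted to alignment.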

