DYNAMIC SCHEDULED SAMPLING WITH IMITATION LOSS FOR NEURAL TEXT GENERATION

Anonymous authors
Paper under double-blind review

Abstract

State-of-the-art neural text generation models are typically trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens. During inference, however, the model needs to make each prediction conditioned on the tokens generated by itself. This train-test discrepancy is referred to as exposure bias. Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate this bias. Most of the proposed approaches design a scheduler based on training steps, which generally requires careful tuning depending on the training setup. In this work, we introduce Dynamic Scheduled Sampling with Imitation Loss (DYSI), which maintains its schedule based solely on training-time accuracy, while enhancing curriculum learning with an imitation loss that attempts to make the behavior of the decoder indistinguishable from that of a teacher-forced decoder. DYSI is universally applicable across training setups with minimal tuning. Extensive experiments and analysis show that DYSI not only achieves notable improvements on standard machine translation benchmarks, but also significantly improves the robustness of other text generation models.

1. INTRODUCTION

Advances in deep learning have led to great achievements in neural text generation tasks including machine translation (Vaswani et al., 2017; Wu et al., 2019), summarization (Zhang et al., 2019a; Lewis et al., 2020), and language modeling (Radford et al., 2019; Brown et al., 2020). The dominant approach to date generates the output sequence with a decoder in an autoregressive manner (Bahdanau et al., 2014; Vaswani et al., 2017). To realize the autoregressive formulation, most text generation models are trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens with Maximum Likelihood Estimation (MLE). In particular, Teacher Forcing (Williams & Zipser, 1989) has been the de facto strategy to help stabilize and speed up training, where the decoder takes the ground-truth token from the previous time step as the conditioning input for generating the next token. At inference time, however, the decoder does not have access to the previous ground-truth tokens when predicting the next token. Thus, the decoder has to make each prediction conditioned on the tokens it has generated so far, resulting in a train-test discrepancy, often referred to as exposure bias (Bengio et al., 2015). This discrepancy can lead to error accumulation over time steps, as the model might encounter unexpected (though not necessarily wrong) tokens that it has never been exposed to during training. The methods proposed to combat exposure bias can be categorized into two groups: non-MLE-based approaches (Goyal et al., 2016; Yu et al., 2017; Lin et al., 2017; Nie et al., 2019) and MLE-based approaches (Bengio et al., 2015; Song et al., 2021; Liu et al., 2021b). Most non-MLE-based approaches take advantage of generative adversarial networks (GANs) (Goodfellow et al., 2014) and/or reinforcement learning methods to avoid teacher forcing.
However, the advantages of these approaches often come at the price of training instability and difficulty, and empirically they still struggle to outperform the MLE baseline (He et al., 2021). MLE-based approaches, on the other hand, typically apply a curriculum learning (Bengio et al., 2009) strategy to gently bridge the gap between training and inference. These methods often consist of a scheduler, e.g., based on training steps, which controls the extent to which the model should be exposed to its own predictions during training. Intuitively, the model should be exposed to more of its own outputs as training proceeds. MLE-based approaches are inherently more efficient and parallelizable, as the models do not need to generate the full sequence in inference mode to compute the training loss. Also, MLE has been the mainstream method for training deep neural models. Our work in this paper thus concerns MLE-based training.

Bengio et al. (2015) propose scheduled sampling to alleviate exposure bias, where the decoder uses the ground-truth previous token as input with probability ϵ, and uses its own prediction with probability (1 − ϵ). Liu et al. (2021b) propose to use a scheduler based on both training and decoding steps. Intuitively, the later decoding steps usually have higher error rates during inference due to error accumulation; therefore, the model output should be sampled as input with a higher chance for the later decoding steps during training. As discussed, MLE-based approaches maintain a schedule to decide how much the model should be exposed to its own predictions, which often needs a proxy to estimate the training progress. A schedule (linear or nonlinear) based on training steps usually requires careful design for the specific problem setup, as different batch sizes may lead to different training speeds and different tasks may have different convergence rates. This limits the applicability of these approaches to new tasks and datasets.
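As a concrete illustration, step-based scheduled sampling can be sketched as follows. This is a minimal sketch, not any cited implementation: the linear decay, the constants `k` and `eps_min`, and the function names are our own illustrative choices.

```python
import random

def epsilon_linear(step, k=10_000, eps_min=0.1):
    """Linearly decay the teacher-forcing probability from 1.0 to eps_min
    over the first k training steps (illustrative schedule)."""
    return max(eps_min, 1.0 - step / k)

def choose_input(gold_prev, model_prev, step, rng=random):
    """With probability ϵ feed the ground-truth previous token (teacher forcing);
    otherwise feed the model's own previous prediction."""
    eps = epsilon_linear(step)
    return gold_prev if rng.random() < eps else model_prev
```

Early in training ϵ = 1.0, so the decoder always sees gold tokens; as training proceeds, it is increasingly exposed to its own predictions.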
In this work, we introduce Dynamic Scheduled Sampling with Imitation Loss (DYSI). First, we propose a scheduler that depends solely on training-time accuracy. By tracking training progress through accuracy, we avoid having to perform a costly heuristic search to find a suitable scheduler for each problem setup. In addition, we use an imitation loss to enforce the condition that the generative behavior should match the teacher-forced behavior as closely as possible, a core idea in professor forcing (Goyal et al., 2016). Our imitation loss uses the decoder in teacher-forcing mode as the expert to regularize/guide the decoder's behavior when it takes self-generated tokens as input. We first conduct experiments on machine translation (MT) to demonstrate how our approach performs in various aspects such as generalization and degeneration. Results show that training with DYSI achieves notable improvements on standard MT benchmarks. We then introduce a novel framework for evaluating the robustness of a language model (LM) when exposed to perturbed data, using auto-completion as a test bed. We find that current pre-trained LMs, trained with standard teacher forcing, are quite sensitive to erroneous contexts that are typical of LM generation, such as repetitions. Our analysis shows that DYSI, by reducing exposure bias, yields a significantly more robust LM across various kinds of perturbations, and overall produces better quality text.
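The two ingredients can be sketched as follows. This is an illustrative sketch only, not DYSI's exact formulation: the linear accuracy-to-ϵ mapping, the floor value, and the choice of KL divergence for the imitation term are our own assumptions for exposition.

```python
import math

def epsilon_from_accuracy(train_acc, floor=0.1):
    """Map training-time token accuracy in [0, 1] to a teacher-forcing
    probability: the better the model predicts, the more it is fed its own
    outputs. No training-step counter is needed (illustrative mapping)."""
    return max(floor, 1.0 - train_acc)

def imitation_loss(p_free, p_teacher, eps=1e-12):
    """One common choice of imitation term: KL(p_teacher || p_free), which
    penalizes the free-running (self-input) decoder's next-token distribution
    for deviating from the teacher-forced expert's distribution."""
    return sum(q * math.log((q + eps) / (p + eps))
               for q, p in zip(p_free, p_teacher) for q, p in [(p, q)])
```

Because the schedule reads only the running accuracy, the same configuration transfers across batch sizes and tasks without retuning a step-based decay.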

2. BACKGROUND

Text generation. Typical neural text generation models use an autoregressive factorization of the joint probability over the target sequence. An autoregressive decoder trained with maximum likelihood estimation (MLE) learns to assign a probability to a target sequence y = (y_1, …, y_T) containing T tokens by factorizing the joint probability using the chain rule:

    L_MLE = − ∑_{t=1}^{T} log p_θ(y_t | y_<t, x),    (1)

where x is a source input for conditional text generation (e.g., machine translation) and ∅ for unconditional generation (e.g., language modeling), and y_<t = (y_1, …, y_{t−1}) denotes the tokens before the current step t. To train autoregressive models, teacher forcing (Williams & Zipser, 1989) is commonly used for faster convergence and training stability. In this method, ground-truth tokens from the previous steps are used as input to predict the current token y_t. However, it also causes the train-test discrepancy, or exposure bias, as the target tokens are not available at inference time.
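For concreteness, the objective in Eq. (1) can be computed as below. This is a minimal pure-Python sketch: it assumes the model's per-step probabilities for the gold tokens under teacher forcing have already been gathered into a list.

```python
import math

def mle_loss(gold_token_probs):
    """Negative log-likelihood of the gold sequence, as in Eq. (1).
    gold_token_probs[t] = p(y_t | y_<t, x): the probability the model assigns
    to the ground-truth token at step t, conditioned on the ground-truth
    prefix (teacher forcing)."""
    return -sum(math.log(p) for p in gold_token_probs)
```

A perfectly confident model (probability 1.0 at every step) incurs zero loss; any uncertainty on a gold token adds −log p to the total.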

Scheduled sampling. To mitigate exposure bias, Bengio et al. (2015) propose scheduled sampling, where the decoder takes the ground-truth previous token as input with probability ϵ and its own prediction with probability (1 − ϵ). The probability ϵ is controlled by a scheduler that decays based on the training steps. Such a curriculum learning strategy allows the model to use ground-truth previous tokens at the initial stage of training and gradually exposes the model to more and more of its own predictions. Zhang et al. (2019b) modify the scheduled sampling of Bengio et al. (2015) by allowing the model to sample from a set of oracle tokens (e.g., synonyms of the target token) as the previous token to simulate the model's output at inference time. Goodman et al. (2020) use a stack of N temporal decoders trained to decode along a secondary time axis, which allows updating model parameters based on N prediction steps, with N being a hyper-parameter. Each decoder receives a predicted previous token from its prior decoder, but the training cost increases linearly with N. Song et al. (2021) incorporate an error correction mechanism with two decoders: a query stream decoder with access to only positional information first predicts intermediate results, which are then corrected by a content stream decoder. Inference requires running through both decoders, which lowers decoding efficiency. More recently, Liu et al. (2021b) propose a scheduler based on both training and decoding steps.
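The decoding-step-aware idea can be sketched as follows. The specific product-form interpolation below is our own illustration, not Liu et al.'s exact formula: the probability of feeding the model its own prediction grows with both training progress and the position t within the output sequence.

```python
def model_input_prob(train_step, t, seq_len, k=10_000):
    """Probability of feeding the model's own prediction at decoding step t
    (0-indexed) in a sequence of length seq_len. It grows with training
    progress (train_step / k) and with relative position (t + 1) / seq_len,
    so later decoding steps see more model outputs (illustrative formula)."""
    train_progress = min(1.0, train_step / k)
    position = (t + 1) / seq_len
    return train_progress * position
```

Early in training, every position is teacher-forced; late in training, the final decoding step is almost always conditioned on a model-generated token, mirroring where inference-time errors accumulate most.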
