TAILORING LANGUAGE GENERATION MODELS UNDER TOTAL VARIATION DISTANCE

Abstract

The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimization method. From a distributional view, MLE in fact minimizes the Kullback-Leibler divergence (KLD) between the distribution of the real data and that of the model. However, this approach forces the model to assign non-zero (sometimes large) probability mass to all training samples regardless of their quality. Moreover, in the attempt to cover the low-probability regions of the data distribution, the model systematically overestimates the probability of corrupted text sequences, which we conjecture is one of the main reasons for text degeneration during autoregressive decoding. To remedy this problem, we leverage the total variation distance (TVD), which is robust to outliers, and develop practical bounds to apply it to language generation. We then introduce the TaiLr objective, which balances the tradeoff in estimating TVD. Intuitively, TaiLr downweights real data samples that have low model probabilities, with tunable penalization intensity. Experimental results show that our method alleviates the overestimation of degenerated sequences without sacrificing diversity and improves generation quality on a wide range of text generation tasks.

1. INTRODUCTION

The dominant approach to training language generation models is to maximize the likelihood of text samples in the training data. With the development of pre-training techniques, the quality of texts generated by current models has improved by a large margin (Radford et al., 2019; Brown et al., 2020). However, text degeneration phenomena, e.g., repetitions (Holtzman et al., 2020; Welleck et al., 2020), incoherence (Guan et al., 2021; Ji & Huang, 2021), and other ill-formed generation results sampled from the noisy long tail (Dou et al., 2022; LeBrun et al., 2022), are still widely observed in large pre-trained models. These results indicate that using MLE as the optimization method has theoretical limitations that are hard to compensate for by increasing the model size. Given the real data distribution p(x) and the model distribution q(x) defined by a learned generation model, we can view MLE as minimizing the KLD between p(x) and q(x). However, minimizing D_KL(p, q) leads to a zero-avoiding solution of q(x) that spreads itself to cover all the modes in the real data (Minka, 2005; Malinin & Gales, 2019). As the model is forced to take into account all the modes regardless of their quality and saliency, this behavior could deteriorate the overall generation quality when (i) the data inherently exhibits too many variations, e.g., in open-ended generation, the model often over-represents unrelated words in the unreliable long tail of its distribution (Holtzman et al., 2020), or (ii) the data contains flawed or noisy references, e.g., hallucination and missing contents in text summarization (Zhao et al., 2020) degrade the generation quality of the model. In language generation, the attempt to cover all the non-zero probability regions in the data distribution leads to a problem directly related to text degeneration, which we term data void overestimation.
Concretely, the model assigns considerably more probability mass than it should to the void of the real data distribution, where degenerated text sequences lie. An intuitive illustration is shown in Figure 1, where KLD pushes the model to place large mass on the zero-probability region of the target distribution in order to cover the minor mass portion on the right. These degenerated texts include random word sequences and partially corrupted texts that have high lexical overlap with the real texts. Therefore, during free-run generation, the model is likely to fall into the void regions and produce "over-generalized" text samples that are unlike the training data (Huszar, 2015). In this work, we start with a robust alternative to KL divergence, i.e., the total variation distance (TVD). TVD is known to be robust to outliers in the data (Beran, 1977; Knoblauch & Vomfell, 2020), as it measures the absolute difference between two probability distributions averaged over each point. In §2.2, we show through gradient analysis that TVD allows the model to assign zero probability to low-quality training samples and prevents overestimation of the data void region. Though appealing, TVD cannot be directly applied to text generation because (i) TVD measures the distance at the sequence level while we desire a token-level criterion for autoregressive generation models, and (ii) we only have samples from the data distribution, whereas calculating TVD demands the real data probability p(x) of each training sample x. We overcome these two issues by (i) developing an upper bound on the sequence-level TVD with its token-level factorization (§3.1), and (ii) introducing a proxy distribution (§3.2) that handles the bias-variance tradeoff when estimating TVD (§3.3). Finally, we derive the Total Variation Guided Language Generation (TaiLr) objective by leveraging access to the non-zero gradient of TVD to guide the model.
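The contrast between KLD's zero-avoiding behavior and TVD's tolerance of dropped outlier modes can be checked numerically on a toy discrete distribution. This is a minimal sketch; the distributions and mode sizes below are invented for illustration and do not come from the paper:

```python
# Toy comparison of KL divergence vs. total variation distance (TVD).
# KLD forces the model to cover every data mode (zero-avoiding), while
# TVD's cost for dropping a small outlier mode is bounded by its mass.
import math

def kld(p, q):
    """D_KL(p, q) = sum_i p_i * log(p_i / q_i); +inf if q misses a mode of p."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi > 0.0:
            if qi == 0.0:
                return float("inf")
            total += pi * math.log(pi / qi)
    return total

def tvd(p, q):
    """D_TV(p, q) = (1/2) * sum_i |p_i - q_i|."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Data: two main modes plus a small "noisy" outlier mode.
p = [0.50, 0.45, 0.05]
# Model A ignores the outlier mode entirely.
q_ignore = [0.55, 0.45, 0.0]
# Model B covers the outlier by shifting mass away from the main modes.
q_cover = [0.45, 0.40, 0.15]

print(kld(p, q_ignore))  # inf: KLD punishes dropping any data mode
print(kld(p, q_cover))   # finite: KLD prefers spreading over the outlier
print(tvd(p, q_ignore))  # ~0.05: TVD cost of dropping the outlier is just its mass
print(tvd(p, q_cover))   # larger: under TVD, covering the outlier is worse here
```

Under KLD only Model B is acceptable, while TVD favors Model A, which concentrates on the high-quality modes and assigns zero probability to the outlier.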
Intuitively, TaiLr weights the log-likelihood of a text sequence at each position according to the model probability and uses a tunable hyperparameter to control the penalization intensity. We first conduct experiments on synthetic data to show that TaiLr achieves better generation quality without sacrificing diversity and reduces the overestimation of degenerated texts compared to MLE. Further experiments on real data demonstrate that the proposed method outperforms existing methods that modify MLE in different aspects on a wide range of language generation tasks, including machine translation, text summarization, and long text generation.
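As a rough sketch of this intuition (not the exact objective derived later in the paper), one can write a weighted token-level NLL in which each position's loss is scaled by a factor that shrinks when the model assigns the gold token low probability. The specific weighting form and the hyperparameter `gamma` below are illustrative assumptions:

```python
# Illustrative sketch (NOT the paper's exact objective): a token-level
# NLL where each position is scaled by a weight in (0, 1] that shrinks
# with the model probability of the gold token. `gamma` is a hypothetical
# knob: gamma -> 0 recovers ordinary NLL; larger gamma downweights
# low-probability targets more aggressively.
import math

def weighted_nll(token_probs, gamma):
    """token_probs: model probabilities of the gold token at each position."""
    loss = 0.0
    for p in token_probs:
        weight = p / (gamma + (1.0 - gamma) * p)  # small p -> small weight
        loss += -weight * math.log(p)
    return loss

probs = [0.9, 0.6, 0.05]  # the last gold token is unlikely under the model
print(weighted_nll(probs, gamma=0.0))  # weight == 1 everywhere: ordinary NLL
print(weighted_nll(probs, gamma=0.5))  # the unlikely token is downweighted
```

With `gamma = 0` the weight is identically 1 and the loss reduces to the usual NLL; increasing `gamma` suppresses the gradient contribution of tokens the model considers improbable, matching the "tunable penalization intensity" described above.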

2. BACKGROUND AND MOTIVATION

We consider natural language generation tasks where a conditional generation model $p_\theta(y|x)$ parametrized by $\theta$ is required to generate the target text sequence $y = (y_1, \cdots, y_T)$ given the context $x$. Let $p_o(y|x)$ denote the real data distribution; MLE training is then equivalent to minimizing the KL divergence between $p_o$ and $p_\theta$:

$$D_{\mathrm{KL}}(p_o, p_\theta) = -\mathbb{E}_{y \sim p_o}\Big[\sum_{t=1}^{T} \log p_\theta(y_t \mid y_{<t}, x)\Big] - H(p_o),$$

where the generation probability is factorized into the product of conditional token probabilities given the prefix $y_{<t}$ and the context $x$: $p_\theta(y|x) = \prod_{t=1}^{T} p_\theta(y_t \mid y_{<t}, x)$. The first term pushes the model to minimize the negative log-likelihood (NLL) of the training data. The second term is a constant with respect to $\theta$ and is therefore commonly ignored in MLE. Despite its simplicity and practical benefits for optimization, MLE is known to suffer from a mismatch with the evaluation metric (Pang & He, 2021) and brittleness to noise in the training data (Kang & Hashimoto, 2020). Motivated by the literature on probability metrics, we draw attention to the total variation distance (TVD) as a naturally robust alternative to KLD. We present the definition of TVD (Van Handel, 2014) between the data distribution $p_o$ and the model distribution $p_\theta$ given the context $x$:

$$D_{\mathrm{TV}}(p_o, p_\theta) = \frac{1}{2} \sum_{y \in \mathcal{Y}} \big| p_o(y|x) - p_\theta(y|x) \big| \tag{2a}$$
$$= 1 - \sum_{y \in \mathcal{Y}} \min\big\{ p_o(y|x), p_\theta(y|x) \big\}, \tag{2b}$$

where $\mathcal{Y}$ is the space of all possible text sequences. Intuitively, TVD measures the average of the absolute difference between $p_o(y|x)$ and $p_\theta(y|x)$ over all possible text sequences $y \in \mathcal{Y}$. Therefore


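As a quick numerical sanity check (not from the paper), the two forms of TVD given in Eq. (2a) can be verified to coincide on arbitrary discrete distributions, using the identity $|p - q| = p + q - 2\min(p, q)$:

```python
# Numeric check that the two equivalent forms of TVD agree:
#   (1/2) * sum_y |p(y) - q(y)|  ==  1 - sum_y min(p(y), q(y)),
# which holds for any two distributions over the same finite support.
import random

random.seed(0)

def normalize(weights):
    s = sum(weights)
    return [w / s for w in weights]

# Two random distributions over a support of size 10.
p = normalize([random.random() for _ in range(10)])
q = normalize([random.random() for _ in range(10)])

form_a = 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
form_b = 1.0 - sum(min(pi, qi) for pi, qi in zip(p, q))

print(abs(form_a - form_b))  # ~0 up to floating-point error
```

The second form makes the boundedness of TVD explicit: since the overlap term lies in [0, 1], the distance never exceeds 1, in contrast to KLD, which is unbounded.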