A DISTRIBUTIONAL APPROACH TO CONTROLLED TEXT GENERATION

Abstract

We propose a Distributional Approach for addressing Controlled Text Generation from pre-trained Language Models (LMs). This approach permits specifying, in a single formal framework, both "pointwise" and "distributional" constraints over the target LM (to our knowledge, the first model with such generality) while minimizing KL divergence from the initial LM distribution. The optimal target distribution is then uniquely determined as an explicit EBM (Energy-Based Model) representation. From that optimal representation we then train a target controlled autoregressive LM through an adaptive distributional variant of Policy Gradient. We conduct a first set of experiments over pointwise constraints showing the advantages of our approach over a set of baselines, in terms of obtaining a controlled LM balancing constraint satisfaction with divergence from the initial LM. We then perform experiments over distributional constraints, a unique feature of our approach, demonstrating its potential as a remedy to the problem of bias in language models. Through an ablation study, we show the effectiveness of our adaptive technique for obtaining faster convergence.

1. INTRODUCTION

Neural language models, such as GPT-2/3 (Radford et al., 2019; Brown et al., 2020a), pretrained on huge amounts of text, have become pre-eminent in NLP, producing texts of unprecedented quality. In this paper, we are concerned with the problem of controlling a generic pretrained LM in order to satisfy certain desiderata. For instance, we may want to avoid toxic content; prevent certain demographic biases; or steer generations towards a certain topic or style. Prior work, taking inspiration from Reinforcement Learning (RL), has aimed at inducing autoregressive models to optimize global objectives using task-specific rewards such as BLEU and ROUGE for Machine Translation and Summarization (Ranzato et al., 2016; Bahdanau et al., 2017), or hand-crafted rewards (Li et al., 2016b; Tambwekar et al., 2019) to improve certain a priori desirable features. However, such an optimization process is not infallible; Liu et al. (2016a) noted that it often leads to "degeneration", producing poor examples that improve the average reward but forgo coherence and fluency. This degeneration is often diagnosed as an effect of deviating too much from the original pretrained LM during optimization. Consequently, prior work has regarded proximity to the pretrained model as a prescription for sample quality. This view is most prominent in open-domain generation, where no gold references are available for fine-tuning, making the pretrained LM itself the yardstick for fluency. Jaques et al. (2017) and Ziegler et al. (2019) propose a conservative fine-tuning approach moderated by a KL penalty between the trained policy and the original LM, discouraging large deviations. A KL penalty was also used by Dathathri et al. (2020), this time in a plug-and-play rather than a fine-tuning context. However, the authors show that balancing policy deviations from the original LM while also satisfying the control conditions is delicate.
To combat degeneration they had to combine the KL penalty with post-norm fusion, reranking, and early-stopping procedures. Most of the existing work on Controlled Generation has taken what we refer to as a "pointwise" view, namely focusing on the quality of each individual output, a view that is encouraged by the standard RL goal of maximizing rewards computed at the individual level. Such techniques are incapable of enforcing "distributional" conditions, where some collective statistical properties are desired over the set of all generations. Distributional control is key to solving the problem of social biases in LMs trained on large, uncurated Web corpora. Those LMs, dubbed "Stochastic Parrots" in (Bender et al., 2021), tend to encode hegemonic biases that are harmful to marginalized populations. There has been a large body of work analysing these distributional biases (Blodgett et al., 2020; Stanovsky et al., 2019; Prates et al., 2020; Sheng et al., 2019a; Brown et al., 2020b). However, applying distributional control to pretrained models is still an understudied problem. Sheng et al. (2020) introduce a method relying on adversarial triggers (Wallace et al., 2019); this method does not de-bias the whole distribution but only obtains non-biased continuations of given prompts. Bordia & Bowman (2019) introduce a regularization term for reducing gender bias when training a language model from scratch (as opposed to de-biasing a pretrained model). In this work, we present our Generation with Distributional Control (GDC) approach, in which we formalize the problem of controlled text generation as a constraint satisfaction problem over the probability distribution p representing the desired target LM.
Namely, we require the expectations ("moments") relative to p of certain output features to have specific values; this permits, for instance, requiring that all outputs speak about sports (a pointwise constraint) and that 50% of them mention female characters (a distributional constraint). Additionally, we require p to have minimal KL divergence D_KL(p, a) from the original pretrained LM a. This has the effect that p inherits favorable linguistic qualities from a. As we will explain, this formulation is a generalization of the Maximum Entropy Principle and leads to a unique solution P(x). P(x) is an unnormalized distribution, aka an Energy-Based Model (EBM) (Hinton, 2002; LeCun et al., 2006; Bakhtin et al., 2020), of which p(x) = P(x)/Z is the normalized version, where Z = Σ_x P(x) is the partition function of P. Computing the EBM representation P is a crucial step, as it fully determines the optimal distribution p we are looking for. However, it is not the end of the story, because the representation thus obtained does not enable us to directly sample from p, an essential property of any LM. To this end, we introduce KL-adaptive DPG (Distributional Policy Gradient), a variant of an algorithm recently proposed in (Parshakova et al., 2019b). We train the policy π_θ to approximate p adaptively, exploiting previously obtained approximations to speed up each subsequent round of training. At the end of this process, we obtain a final π_θ, our target LM, on which we can estimate diverse metrics, including D_KL(p, π_θ), measuring the approximation quality of π_θ relative to the optimal p, and D_KL(π_θ, a), measuring the divergence of π_θ relative to the original LM a. This two-step approach differs from much research in NLP-oriented work with EBMs, which tends to use EBM representations inside the training loops of neural networks, blurring different dimensions of the problem. By contrast, similarly to Parshakova et al. (2019a;b) in a different context, we clearly decouple the relatively simple problem of determining a "pivot" optimal EBM from the more difficult problem of exploiting this EBM at inference time. Such decoupling is valuable because it permits better diagnosis of the important challenges to focus on. Overall, our contributions can be summarized as follows:

1. We introduce a Distributional View for controlled text generation, formalized as a constraint satisfaction problem combined with a divergence minimization objective, providing a single framework both for "distributional" constraints (collective statistical requirements) and for "pointwise" constraints (hard requirements on each individual output) (§2.1). To our knowledge, this is the first framework with such generality for controlled text generation.

2. We show how these constraints lead to an optimal EBM for the target model (§2.2), propose the KL-Adaptive DPG algorithm for approximating the optimal EBM distribution by
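As a sketch of the formulation above (written here with φ_i denoting the feature functions and μ̄_i their target moments; these symbol names are ours, chosen for illustration), the constrained problem and its EBM solution can be written as:

```latex
% Target distribution p: closest to the pretrained LM a (in KL)
% among all distributions matching the desired feature moments.
\min_{p} \; D_{\mathrm{KL}}(p \,\|\, a)
\quad \text{s.t.} \quad \mathbb{E}_{x \sim p}\, \phi_i(x) = \bar{\mu}_i,
\quad i = 1, \dots, k.

% Generalized maximum-entropy solution: an exponential-family
% "tilt" of a, i.e. an explicit EBM P with partition function Z.
P(x) = a(x) \exp\!\Big( \textstyle\sum_i \lambda_i \phi_i(x) \Big),
\qquad p(x) = \frac{P(x)}{Z},
\qquad Z = \sum_x P(x).
```

A pointwise constraint corresponds to a binary feature with target moment 1 (e.g., φ(x) = 1 iff x speaks about sports), while a distributional constraint uses a fractional target (e.g., μ̄ = 0.5 for mentioning female characters).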



Additional Related Work is provided in §E. We use §A, §B, ... to refer to sections in the Appendix. One possible sampling approach here would be to employ MCMC techniques, such as Metropolis-Hastings (Robert & Casella, 2005). These come with theoretical convergence guarantees in the limit, but in practice convergence can be very difficult to assess, and furthermore, obtaining samples can be extremely slow.
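To make the KL-adaptive DPG idea concrete, here is a toy numpy sketch on a small discrete sample space. This is not the paper's implementation: the six-item "vocabulary", the feature φ, and all hyperparameters are invented for illustration. In this toy case the partition function Z is computable exactly; in the actual algorithm only the unnormalized P is needed, since the factor 1/Z merely rescales the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 6 discrete "sequences". a is the pretrained LM; phi is a
# binary feature; P(x) = a(x) * exp(lam * phi(x)) is the unnormalized
# target EBM, and p = P / Z its normalization (used only for evaluation).
a = np.array([0.30, 0.25, 0.20, 0.10, 0.10, 0.05])
phi = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
lam = 1.5
P = a * np.exp(lam * phi)
Z = P.sum()
p = P / Z

def pi(theta):
    """Softmax policy over the toy sample space."""
    z = np.exp(theta - theta.max())
    return z / z.sum()

def kl(p1, p2):
    return float(np.sum(p1 * np.log(p1 / p2)))

theta = np.zeros(6)      # policy logits
q = a.copy()             # proposal starts at the pretrained LM a
best_kl = kl(p, pi(theta))
lr, n = 0.1, 512

for step in range(2000):
    xs = rng.choice(6, size=n, p=q)   # sample from the proposal
    w = P[xs] / q[xs]                 # importance weights P(x)/q(x)
    pt = pi(theta)
    # Gradient of E_{x~p}[log pi_theta(x)] w.r.t. the logits, estimated
    # by importance sampling: sum_i w_i * (onehot(x_i) - pi_theta).
    grad = np.bincount(xs, weights=w, minlength=6) - w.sum() * pt
    theta += lr * grad / (n * Z)
    # KL-adaptive step: move the proposal to pi_theta when it improves.
    cur = kl(p, pi(theta))
    if cur < best_kl:
        best_kl, q = cur, pi(theta)

print(round(kl(p, pi(theta)), 4))     # KL(p || pi_theta), close to 0
```

Updating the proposal q to the current π_θ whenever it gets closer to p reduces importance-weight variance over the course of training; in the full-scale setting, samples come from autoregressive decoding and π_θ is a fine-tuned neural LM rather than a logit vector.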


† Work done during an internship at NAVER Labs Europe.

