SYSTEMATIC RECTIFICATION OF LANGUAGE MODELS VIA DEAD-END ANALYSIS

Abstract

With adversarial, or even ordinary, prompts, existing large language models (LLMs) can be pushed to generate toxic discourse. One way to reduce the risk of LLMs generating undesired discourse is to alter the training of the LLM. This can be very restrictive due to demanding computation requirements. Other methods rely on rule-based or prompt-based token elimination, which are limited as they disregard future tokens and the overall meaning of the complete discourse. Here, we center detoxification on the probability that the finished discourse is ultimately considered toxic. That is, at each point, we advise against token selections proportional to how likely a finished text from this point will be toxic. To this end, we formally extend the dead-end theory from the recent reinforcement learning (RL) literature to also cover uncertain outcomes. Our approach, called rectification, utilizes a separate but significantly smaller model for detoxification, which can be applied to diverse LLMs as long as they share the same vocabulary. Importantly, our method does not require access to the internal representations of the LLM, but only the token probability distribution at each decoding step. This is crucial as many LLMs today are hosted on servers and only accessible through APIs. When applied to various LLMs, including GPT-3, our approach significantly improves the generated discourse compared to the base LLMs and other techniques in terms of both overall language quality and detoxification performance.

1. INTRODUCTION

Large-scale Transformer-based (Vaswani et al., 2017) language models (LMs) have shown tremendous progress and grown in importance across various NLP downstream tasks, often providing state-of-the-art performance over the last few years (Devlin et al., 2019; Yang et al., 2019; Raffel et al., 2020; Peters et al., 2018). Despite their progress in learning linguistic knowledge, these models have been shown to capture and reproduce the toxicity present in their ever-larger pretraining datasets. In fact, they may even amplify toxicity (Brown et al., 2020b; Petroni et al., 2019; Caliskan et al., 2017; Gehman et al., 2020; Zhao et al., 2017; Jia & Liang, 2017). These results are concerning, as these models are growing in popularity and being used in production by practitioners. Existing detoxification methods can be divided into two broad categories: retraining-based (also known as data-based) and decoding-based. Retraining-based methods either retrain the LM on a filtered dataset from which undesired text has been removed (Raffel et al., 2020; Gururangan et al., 2020), or have humans adversarially probe the system to generate unsafe content and then use these adversarial samples for further training (Dinan et al., 2019; Xu et al., 2020). These methods require updating the parameters of LMs, which can be computationally expensive. Retraining-based methods are also unsuitable for extremely large LMs that are usually released as a service. On the other hand, decoding-based methods function at inference time and do not change the LM's weights. Examples include Plug and Play Language Models (PPLM; Dathathri et al., 2020), word-filtering (Gehman et al., 2020), test-time filtering (Welbl et al., 2021), and the Self-Debiasing method of Schick et al. (2021), which can be viewed as prompt-based token elimination.
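As a concrete illustration of decoding-based word-filtering, a minimal sketch is shown below: banned tokens are masked out of the logits before sampling. The toy vocabulary and blocklist are purely illustrative and not taken from any of the cited systems.

```python
import math

def word_filter(logits, vocab, blocklist):
    """Ban blocklisted tokens by setting their logits to -inf before sampling."""
    return [-math.inf if tok in blocklist else logit
            for logit, tok in zip(logits, vocab)]

# Toy example: a 4-token vocabulary with one blocklisted word.
vocab = ["the", "cat", "darn", "sat"]
logits = [1.2, 0.4, 2.0, 0.1]
filtered = word_filter(logits, vocab, blocklist={"darn"})
# "darn" can no longer be sampled: its logit is now -inf
```

Note that such a filter acts only on the current token, in isolation from the rest of the discourse.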
However, these methods neither foresee that the discourse may become toxic even if the current choice of token is not harmful, nor can they correct a seemingly toxic discourse later on. This work proposes a systematic approach, called rectification, to mitigate toxicity for LLMs. We extend the dead-end theory of Fatemi et al. (2019; 2021) from the recent reinforcement learning (RL) literature and frame the detoxification task as an auxiliary RL problem separate from LM training. The core idea is that, during text generation, if a token causes the eventual discourse to be toxic with some level of certainty, then the probability of choosing that token should be reduced with the same level of certainty. Building on our formal results, we construct a simple RL problem whose value function is used to estimate an upper bound on the level of certainty. At inference time, we use the learned upper bound to truncate the target policy (i.e., the LM). There are three essential aspects of rectification that we should highlight: (I) there is no need to modify the LM's parameters, (II) the rectification model can, in general, be significantly smaller (hence easier to train) than the LM, and (III) one rectification model can be used to detoxify various LMs as long as they share the same vocabulary. We evaluate our method on the REALTOXICITYPROMPTS benchmark. We demonstrate that our method can substantially mitigate toxicity using both automatic and human evaluation. Compared with the regular GPT-2 XL, our method yields a relative reduction in toxicity probability of 78% (83.2% → 18.5%, as measured by PERSPECTIVE API), and it outperforms eight detoxification baselines.²
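The truncation step described above can be sketched as follows. This is a minimal, illustrative sketch, assuming a toy value function: `q_d` maps each token to a dead-end value in [-1, 0] (close to -1 meaning a completion through that token is almost surely toxic), standing in for the learned upper bound; the vocabulary and values are hypothetical, not the paper's trained model.

```python
def rectify(p_lm, q_d):
    """Truncate the LM's token distribution using a dead-end value bound.

    p_lm: dict mapping token -> probability under the base LM.
    q_d:  dict mapping token -> dead-end value in [-1, 0].
    Each token's probability is capped at 1 + q_d[token] (the security
    condition pi(token) <= 1 + q_d(token)), then the distribution is
    renormalized. Assumes at least one token keeps positive mass.
    """
    capped = {t: min(p, 1.0 + q_d[t]) for t, p in p_lm.items()}
    z = sum(capped.values())
    return {t: p / z for t, p in capped.items()}

# Toy distribution over three tokens; "slur" stands for a risky token.
p_lm = {"nice": 0.5, "slur": 0.4, "walk": 0.1}
q_d = {"nice": 0.0, "slur": -0.9, "walk": -0.1}  # illustrative, not learned
p_rect = rectify(p_lm, q_d)
# "slur" is capped at 1 - 0.9 = 0.1 and its excess mass is redistributed
```

Because the cap depends on the estimated fate of the *completed* discourse rather than the current token alone, tokens that merely look harmless now can still be suppressed.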

2. RELATED WORK

Studying and detecting toxic text generated by large pre-trained LMs has grown in importance over the past few years (Gehman et al., 2020; Xu et al., 2021; Welbl et al., 2021). However, studying the toxicity of LMs poses several challenges. First, there are different types of toxic content, such as profanity, identity attacks, and threats; depending on the context, they may require different treatment. Second, there is no widely accepted definition of toxicity for LMs, as individual perceptions may vary due to different social backgrounds (Zampieri et al., 2019; Weng, 2021). In this work, we define toxic content as "rude, disrespectful, and unreasonable language", following prior work on LM toxicity (Gehman et al., 2020). LMs trained on large corpora are prone to generating toxic content. For instance, it has recently been shown that LMs can generate racist continuations conditioned on either synthetic or innocuous prompts (Wallace et al., 2019; Gehman et al., 2020). Roller et al. (2021) study toxic LMs within the scope of dialogue systems. Xu et al. (2021) demonstrate that LMs can also amplify social biases. Reducing toxicity is of utmost importance, as it would otherwise be passed on to downstream automated products and applications. Such biases and toxicities may cause harms (e.g., of allocation or of representation) to underrepresented groups (Barocas et al., 2017; Crawford, 2017; Dixon et al., 2018; Xu et al., 2021; Welbl et al., 2021).

To alleviate the issue of toxicity in LMs, multiple detoxification techniques have been proposed. Retraining-based methods (Raffel et al., 2020; Gururangan et al., 2020; Dinan et al., 2019; Xu et al., 2020; Lu et al., 2022) fine-tune the LM on a filtered corpus or an adversarial dataset. These methods become impracticable when the target LM is extremely large. PPLM (Dathathri et al., 2020) controls the generation direction using the gradient of a simple discriminator. Given a differentiable toxicity classifier, PPLM can steer the LM away from generating toxic text. However, PPLM is known to be computationally expensive and slow (Gehman et al., 2020; Yang & Klein, 2021). Self-Debiasing (SD) (Schick et al., 2021) is a prompt-based detoxification method: it scales down the probability of the tokens generated under hand-crafted prompts that explicitly command the LM to generate a toxic continuation. Similarly, Liu et al. (2021) use a toxic LM as an "anti-expert" and a non-toxic LM as an "expert" to boost the probability of non-toxic tokens.

3. FORMAL METHODS

In order to delineate the language detoxification problem in a precise manner, we revisit and build from the basic ideas of dead-end theory (Fatemi et al., 2019; 2021). Here, we first describe pre-

² https://github.com/mcao516/rectification-lm.git
