UNIFIED DETOXIFYING AND DEBIASING IN LANGUAGE GENERATION VIA INFERENCE-TIME ADAPTIVE OPTIMIZATION

Abstract

Warning: this paper contains model outputs exhibiting offensiveness and biases. Recently, pre-trained language models (PLMs) have prospered in various natural language generation (NLG) tasks due to their ability to generate fairly fluent text. Nevertheless, these models are observed to capture and reproduce harmful content from their training corpora, typically toxic language and social biases, raising serious ethical concerns. Prior works on ethical NLG tackle detoxifying and debiasing separately, which is problematic: we find that debiased models still exhibit toxicity, while detoxified ones even exacerbate social biases. To address this challenge, we propose the first unified framework of detoxifying and debiasing, called UDDIA, which jointly formalizes the two problems as rectifying the output space. We theoretically interpret our framework as learning a text distribution mixing weighted attributes. Moreover, UDDIA conducts adaptive optimization of only a few parameters during decoding, based on a parameter-efficient tuning scheme, without any training data. This leads to minimal generation quality loss and improved rectification performance at acceptable computational cost. Experimental results demonstrate that, compared to several strong baselines, UDDIA achieves debiasing and detoxifying simultaneously and better balances efficiency and effectiveness, taking a further step towards practical ethical NLG.

1. INTRODUCTION

Transformer-based (Vaswani et al., 2017) Pre-trained Language Models (PLMs) (Radford et al., 2019; Raffel et al., 2019; Lewis et al., 2020) can produce highly fluent text and have empowered a wide range of downstream Natural Language Generation (NLG) tasks (See et al., 2019; Zhang et al., 2020; Lewis et al., 2020). However, these PLMs are observed to internalize, propagate, and even amplify problematic content present in crawled, uncleaned corpora, typically toxic language (e.g., offensive text) (Gehman et al., 2020) and social biases (e.g., stereotypes or disparate model predictions) towards particular demographic groups (e.g., gender and race) (Sheng et al., 2019), as shown in Figure 1-(a). As large PLMs become the foundation of the rapidly growing NLG services (Bommasani et al., 2021) that directly interact with end users, such pernicious text can propagate misrepresentations (known as representational harms), aggravate inequality of opportunity (Blodgett et al., 2020), and cause psychological or even material harms (Weidinger et al., 2021), bringing a profound negative impact on society. Moreover, these issues are found to persist across increasing model sizes (Rae et al., 2021), emphasizing the urgency of developing practical methods for ethical NLG. These problems have drawn much attention to detoxifying and debiasing techniques, and previous methods mainly fall into two paradigms. The first is domain-specific pretraining (Gururangan et al., 2020), which further trains the model on clean (e.g., non-toxic) corpora (Wang et al., 2022).
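The domain-specific pretraining paradigm above amounts to a second training stage on a filtered, clean-only corpus so that clean continuations gain probability mass. As a minimal sketch of this idea (using a toy count-based bigram model and made-up example sentences, not the actual PLMs or corpora discussed in this paper):

```python
from collections import Counter


class BigramLM:
    """Toy bigram language model illustrating two-stage training:
    (1) generic pretraining on a mixed corpus, then
    (2) continued pretraining on a filtered, clean-only corpus."""

    def __init__(self):
        self.counts = Counter()   # (prev_word, next_word) -> count
        self.context = Counter()  # prev_word -> count

    def train(self, sentences, weight=1.0):
        # Accumulate (optionally weighted) bigram counts from sentences.
        for s in sentences:
            toks = s.split()
            for prev, nxt in zip(toks, toks[1:]):
                self.counts[(prev, nxt)] += weight
                self.context[prev] += weight

    def prob(self, prev, nxt):
        # Add-one smoothing over the set of observed bigram types.
        return (self.counts[(prev, nxt)] + 1) / (self.context[prev] + len(self.counts))


mixed = ["the model is awful", "the model is helpful"]   # unclean pretraining data
clean = ["the model is helpful", "the model is useful"]  # filtered clean corpus

lm = BigramLM()
lm.train(mixed)                  # stage 1: generic pretraining
p_before = lm.prob("is", "helpful")
lm.train(clean, weight=2.0)      # stage 2: continued pretraining on clean text
p_after = lm.prob("is", "helpful")
print(p_after > p_before)        # the clean continuation gains probability
```

The same shift works in reverse for the undesirable continuation ("awful" loses probability mass after stage 2), which is the intended effect of the paradigm; note, however, that the abstract argues such training-time approaches handle toxicity and bias separately, motivating the inference-time alternative proposed here.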

