KNOWLEDGE UNLEARNING FOR MITIGATING PRIVACY RISKS IN LANGUAGE MODELS

Abstract

Pretrained Language Models (LMs) memorize a vast amount of knowledge during initial pretraining, including information that may violate the privacy of personal lives and identities. Previous work addressing privacy issues for language models has mostly focused on data preprocessing and differential privacy methods, both requiring re-training the underlying LM. We propose knowledge unlearning as an alternative method to reduce privacy risks for LMs post hoc. We show that simply applying the unlikelihood training objective to target token sequences is effective at forgetting them with little to no degradation of general language modeling performance for larger LMs; it sometimes even substantially improves the underlying LM with just a few iterations. We also find that sequential unlearning is better than trying to unlearn all the data at once, and that unlearning is highly dependent on which kind of data (domain) is forgotten. Through comparisons with a previous data preprocessing method and a decoding method known to mitigate privacy risks for LMs, we show that unlearning can give a strong empirical privacy guarantee in scenarios where the data vulnerable to extraction attacks are known a priori, while being orders of magnitude more computationally efficient and robust. We release the code and dataset needed to replicate our results at http://www.omitted.link/.

1. INTRODUCTION

Recent work has shown that an adversary can extract training data from Pretrained Language Models (LMs), including Personally Identifiable Information (PII) such as names, phone numbers, and email addresses, as well as other information such as licensed code, private clinical notes, and 128-bit UUIDs (Carlini et al., 2021; Lee et al., 2022; Huang et al., 2022; Lehman et al., 2021). In 2021, the AI chatbot Iruda became the first AI system to be sued for violating the Personal Information Protection Act after unintentionally generating the exact home addresses and bank account numbers of actual individuals (Park, 2021). Heikkilä (2022) also showed that GPT-3 (Brown et al., 2020), one of the most well-known LMs currently in commercial use, offered detailed private information about the Editor-in-Chief of MIT Technology Review, including his family members, work address, and phone number. Considering findings that extracting training data gets easier as LMs scale to larger sizes (Carlini et al., 2022a) and that it is common practice for practitioners to release billion-parameter pretrained LMs for public use (Gao et al., 2020; Black et al., 2021; Zhang et al., 2022), it has become important to provide privacy guarantees for large LMs. Practitioners are required to delete personal information from LMs upon individuals' requests because each individual has the "Right To Be Forgotten" (RTBF) (Mantelero, 2013; Graves et al., 2021) and can limit the direct and indirect commercial use of their personal information (Villaronga et al., 2018). Previous methods addressing privacy risks for language models attempt to remove all private information from the training data (data preprocessing) (Aura et al., 2006; Dernoncourt et al., 2017; Lison et al., 2021; Kandpal et al., 2022) or attempt to design algorithms that ensure differential privacy (DP) (Dwork, 2008; Dwork et al., 2006; Abadi et al., 2016; Anil et al., 2021; Li et al., 2022; Yu et al., 2022).
Both approaches require retraining the underlying LM every time individuals want to exercise their RTBF, which makes them inadequate for large LMs that are extremely costly to retrain. Furthermore, as pointed out by Brown et al. (2022), data preprocessing methods assume private information is easily identifiable, specified, and removed, and DP algorithms can only guarantee protection for information with clear privacy borders; both assumptions fall short in real-world scenarios where the standard of privacy may differ for each individual.

To this end, we propose knowledge unlearning (Figure 1) as an efficient solution that can be applied with just a few parameter updates instead of pretraining the underlying LM again. We perform experiments on GPT-Neo LMs (125M, 1.3B, 2.7B) (Black et al., 2021) and show that simply reversing the direction of gradient descent during language modeling (which can also be seen as maximizing, instead of minimizing, the loss function) is effective at protecting target sequences from extraction attacks with little to no degradation of the initial LM capabilities, measured via 9 common NLP classification benchmarks (Hellaswag (Zellers et al., 2019), Lambada (Paperno et al., 2016), Winogrande (Sakaguchi et al., 2021), COPA (Gordon et al., 2012), ARC-Easy (Clark et al., 2018), ARC-Challenge (Clark et al., 2018), Piqa (Bisk et al., 2020), MathQA (Amini et al., 2019), and PubmedQA (Jin et al., 2019)) and 4 dialogue tasks (Wizard of Wikipedia (Dinan et al., 2019), Empathetic Dialogues (Rashkin et al., 2019), Blended Skill Talk (Smith et al., 2020), and Wizard of Internet (Komeili et al., 2022)). In some cases, knowledge unlearning unexpectedly yields significant improvements on some of the benchmarks.
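The core idea of reversing gradient descent on the target sequence can be illustrated on a toy bigram language model. The following is a minimal sketch in pure NumPy, not the paper's implementation: all names (`sequence_nll`, `unlearn_step`) and the toy model are illustrative assumptions, but the update rule is the one described, i.e., gradient ascent on the language modeling loss of the target tokens.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sequence_nll(W, seq):
    """Negative log-likelihood of a token sequence under a bigram logit table W."""
    probs = softmax(W)
    return -sum(np.log(probs[a, b]) for a, b in zip(seq, seq[1:]))

def unlearn_step(W, seq, lr=0.5):
    """One knowledge-unlearning step: gradient *ascent* on the NLL of `seq`
    (equivalently, negate the usual LM loss and take a descent step)."""
    probs = softmax(W)
    grad = np.zeros_like(W)
    for a, b in zip(seq, seq[1:]):
        # d(NLL)/dW[a, :] = softmax(W[a]) - onehot(b)  (softmax cross-entropy)
        row = probs[a].copy()
        row[b] -= 1.0
        grad[a] += row
    return W + lr * grad  # '+' instead of '-': maximize the loss on the target

rng = np.random.default_rng(0)
V = 8                                  # toy vocabulary size
W = rng.normal(size=(V, V))            # bigram "LM" parameters
target = [1, 3, 5, 2]                  # token sequence to be forgotten

before = sequence_nll(W, target)
for _ in range(10):
    W = unlearn_step(W, target)
after = sequence_nll(W, target)
# after > before: the model now assigns the target sequence lower likelihood
```

In practice the same sign flip is applied to the cross-entropy loss of a neural LM; the sketch only makes the direction of the update explicit.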
We compare our approach with the data deduplication method (Kandpal et al., 2022) and the differential privacy decoding method (Majmudar et al., 2022), both known to mitigate privacy risks, and show that knowledge unlearning provides strong privacy protection while being much more efficient and robust. We also provide a general guideline that can be used to quantify the memorization and extraction likelihood of target token sequences and to suggest when we can empirically consider them "forgotten". Specifically, we introduce a novel metric that measures the extraction likelihood by varying the prefix length of the target token sequence and quantifying how much of the suffix is actually extracted from the LM. Surprisingly, for knowledge unlearning, we find that it is easier to forget a chunk of instances sequentially than to forget them all at once. We provide further analysis showing that the difficulty of knowledge unlearning depends heavily on the target data being forgotten, especially its domain. We also provide empirical examples of performing extraction attacks and of how exactly knowledge unlearning provides privacy protection for the LM.

To summarize, our main contributions are fourfold:

• We compare knowledge unlearning with two approaches from the literature known to mitigate privacy risks: a data preprocessing approach and a Differential Privacy (DP) decoding approach. We show that our approach results in little to no degradation of general capabilities (sometimes resulting in improvement) while providing strong privacy protection in situations where individuals exercise their RTBF, whereas the data preprocessing approach provides weaker privacy protection while being orders of magnitude more computationally demanding, and the DP decoding approach results in severe degradation of language modeling performance.
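The extraction-likelihood metric described above (sweep the prefix length, generate a continuation, and measure overlap with the true suffix) can be sketched as follows. This is a simplified illustration, not the paper's exact formulation: the function names and the `generate(prefix, length)` sampling hook are hypothetical.

```python
def ngram_overlap(pred, ref, n=2):
    """Fraction of the reference's n-grams that also appear in the prediction."""
    ref_ngrams = {tuple(ref[i:i + n]) for i in range(len(ref) - n + 1)}
    if not ref_ngrams:
        return 0.0
    pred_ngrams = {tuple(pred[i:i + n]) for i in range(len(pred) - n + 1)}
    return len(ref_ngrams & pred_ngrams) / len(ref_ngrams)

def extraction_likelihood(generate, tokens, n=2):
    """Average n-gram overlap between the model's continuation and the true
    suffix, over varying prefix lengths of the target sequence.
    `generate(prefix, length)` is an assumed model-sampling hook that returns
    `length` continuation tokens given `prefix`."""
    T = len(tokens)
    scores = []
    for t in range(1, T - n + 1):          # vary the prefix length
        prefix, suffix = tokens[:t], tokens[t:]
        pred = generate(prefix, len(suffix))
        scores.append(ngram_overlap(pred, suffix, n))
    return sum(scores) / len(scores)

# Toy check: a "model" that has fully memorized the sequence extracts it
# perfectly (score 1.0), regardless of how short the attacker's prefix is.
seq = [4, 8, 15, 16, 23, 42]
memorized = lambda prefix, length: seq[len(prefix):len(prefix) + length]
# extraction_likelihood(memorized, seq) == 1.0
```

Under this kind of metric, a target sequence can be considered empirically forgotten once its score drops to the level of sequences the model never saw during training.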



Figure 1: Comparison of previous approaches and knowledge unlearning when an individual practices his/her Right-To-Be-Forgotten (RTBF).

