SELF-GUIDED NOISE-FREE DATA GENERATION FOR EFFICIENT ZERO-SHOT LEARNING

Abstract

There is rising interest in further exploring the zero-shot learning potential of large pre-trained language models (PLMs). A new paradigm called data-generation-based zero-shot learning has achieved impressive success. In this paradigm, data synthesized by the PLM serve as the carrier of knowledge and are used to train a task-specific model with orders of magnitude fewer parameters than the PLM, achieving both higher performance and better efficiency than prompt-based zero-shot learning methods applied directly to PLMs. The main hurdle of this approach is that the data synthesized by the PLM usually contain a significant portion of low-quality samples. Fitting such data greatly hampers the performance of the task-specific model, making it unreliable for deployment. Previous methods remedy this issue mainly by filtering synthetic data using heuristic metrics (e.g., output confidence) or by refining the data with the help of a human expert, which requires excessive manual tuning or incurs substantial cost. In this paper, we propose SUNGEN, a novel noise-robust re-weighting framework that automatically constructs high-quality data for zero-shot classification problems. Our framework learns sample weights indicating data quality without requiring any human annotation. We verify, both theoretically and empirically, the ability of our method to help construct good-quality synthetic datasets. Notably, SUNGEN-LSTM yields a 9.8% relative improvement in average accuracy over the baseline across eight established text classification tasks.

1. INTRODUCTION

Owing to the superior generative capacity of large-scale pre-trained language models (PLMs), there has been an emerging trend of using these powerful models (e.g., GPT) to generate training data for downstream tasks (Anaby-Tavor et al., 2020; Puri et al., 2020; Kumar et al., 2020; Lee et al., 2021, inter alia). Among them, a new line of generation-based zero-shot learning using the unfinetuned PLM pushes the envelope further (Schick & Schütze, 2021; Ye et al., 2022a; Meng et al., 2022), featuring fully annotation-free training for downstream tasks. ZEROGEN (Ye et al., 2022a) further boosts efficiency by using the generated data to train tiny task models (TAMs), which have orders-of-magnitude fewer parameters than the PLM. Specifically, they first design prompts incorporating the task description and label information, then use them to guide data generation from the PLM. The synthesized dataset is subsequently used to train the tiny task-specific models. Compared with classic prompt-based zero-shot learning on the PLM, this new paradigm enjoys two favorable properties: (1) since the task model has orders-of-magnitude fewer parameters than the PLM, it exhibits much lower inference latency; (2) with the large amount of PLM-generated training data, the task model often outperforms its prompt-based zero-shot PLM counterpart. In this paradigm, the amount and quality of the generated data are crucial to the task model's performance. Unfortunately, despite the unlimited training data one can generate in theory, data quality is not always guaranteed.
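The generate-then-train pipeline described above can be sketched as follows. The prompt templates and the `sample_from_plm` stub are illustrative placeholders, not ZEROGEN's actual templates or decoding setup; in practice the stub would be replaced by sampling continuations from a real frozen PLM.

```python
import random

# Hypothetical label-conditioned prompt templates in the spirit of ZEROGEN:
# the task description and the target label are baked into the prompt.
TEMPLATES = {
    "positive": "The movie review in positive sentiment is: ",
    "negative": "The movie review in negative sentiment is: ",
}

def sample_from_plm(prompt, rng):
    """Placeholder for PLM sampling. A real implementation would decode a
    continuation of `prompt` from a frozen pre-trained language model."""
    canned = {
        "positive": ["A heartfelt, beautifully acted film.",
                     "One of the best movies I have seen this year."],
        "negative": ["A dull plot and wooden performances.",
                     "I walked out halfway through."],
    }
    label = "positive" if "positive" in prompt else "negative"
    return rng.choice(canned[label])

def generate_synthetic_dataset(n_per_label, seed=0):
    """Build a pseudo-labeled synthetic dataset by prompting the (mock) PLM;
    the label embedded in the prompt becomes the sample's pseudo-label."""
    rng = random.Random(seed)
    data = []
    for label, template in TEMPLATES.items():
        for _ in range(n_per_label):
            data.append((sample_from_plm(template, rng), label))
    return data

dataset = generate_synthetic_dataset(n_per_label=100)
# The synthetic (text, label) pairs would then train a tiny task model.
```

Because the PLM is never fine-tuned on the task, some generated samples inevitably mismatch their prompted label, which is exactly the noise problem this paper targets.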
Our experimental observations across many downstream tasks verify the existence of this issue: in ZEROGEN, after a few training epochs on the PLM-generated dataset, although the training accuracy steadily improves, the test accuracy of the model starts declining rapidly (e.g., IMDb in Figure 1), a clear indication of the model overfitting to low-quality (noisy) data (Arpit et al., 2017). More specifically, we identify two major types of noisy samples in the synthetic dataset: samples with corrupted labels and task-irrelevant samples (Table 6 in Appendix). Without any task-related fine-tuning, it is challenging for the PLM to follow a user's instruction (a task-specific prompt including label information) and generate accurate samples in the target domain (Ouyang et al., 2022). To alleviate the data quality issue, recent work adopts active human labeling to correct corrupted labels or revise examples (Wang et al., 2021a; Liu et al., 2022). However, such methods introduce considerable cost and may be unrealistic in practice. To avoid human intervention, the classic approach to eliminating the effect of noisy data is to re-weight the samples. The core idea is to design a weighting function w such that correct samples are associated with larger weights and noisy ones with smaller weights. Compared with heuristic designs of w (e.g., based on output confidence or loss value) (Liu & Tao, 2015; Wang et al., 2021b), which require task-specific knowledge and excessive manual tuning, adaptive methods that learn the sample weights in an end-to-end manner demonstrate better performance in practice (Ren et al., 2018; Shu et al., 2019; Zheng et al., 2021). These methods typically formulate the learning of sample weights as a bi-level optimization problem, with a clean validation set in the outer loop guiding the learning of w.
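The heuristic weighting functions mentioned above can be illustrated concretely. The threshold `tau` and the toy probabilities are illustrative; these are generic confidence- and loss-based heuristics, not a method from this paper.

```python
import numpy as np

def confidence_weights(probs, labels, tau=0.7):
    """Heuristic re-weighting: keep only samples whose predicted probability
    of their own (possibly noisy) label exceeds a threshold tau, which must
    be tuned manually per task."""
    conf = probs[np.arange(len(labels)), labels]
    return (conf > tau).astype(float)

def loss_weights(probs, labels):
    """Alternative heuristic: down-weight samples exponentially in their
    cross-entropy loss, so high-loss (likely noisy) samples matter less."""
    conf = probs[np.arange(len(labels)), labels]
    losses = -np.log(np.clip(conf, 1e-12, 1.0))
    return np.exp(-losses)  # equals conf, but written in terms of loss

probs = np.array([[0.9, 0.1],    # confidently class 0
                  [0.2, 0.8],    # confidently class 1
                  [0.55, 0.45]]) # uncertain
labels = np.array([0, 1, 0])
w_hard = confidence_weights(probs, labels)  # hard 0/1 filtering
w_soft = loss_weights(probs, labels)        # soft down-weighting
```

Both heuristics rely on the model's own confidence, which is unreliable early in training and on out-of-domain samples; this is the motivation for learning the weights end-to-end instead.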
Despite the remarkable success of these methods, their dependence on a clean validation set is a major limitation and is especially impractical in the zero-shot setting. Our solution comes from rethinking the choice of the outer objective in the bi-level framework: can we design an objective such that the sample weights can be optimized with access only to the noisy synthetic data? To this end, we resort to a family of noise-robust loss functions (ℓ_robust) (Ghosh et al., 2017; Zhang & Sabuncu, 2018). These functions were adopted by previous work to train neural networks under label noise owing to their theoretically noise-tolerant property (Ghosh et al., 2017; Zhang & Sabuncu, 2018; Wang et al., 2019). However, from the optimization point of view, such loss functions suffer from instability and difficulty when training neural networks (Zhang & Sabuncu, 2018), which limits their effectiveness. Remarkably, our approach leverages the noise-tolerant property of these losses while avoiding their pathology. We propose SUNGEN, a novel bi-level re-weighting framework: in the inner loop, we train the task model using a weighted training loss based on the current sample weights; in the outer loop, the noise-robust loss guides the learning of the sample weights. The two procedures are performed alternately to produce a set of weights indicating the importance of each sample. Notably, our method focuses on enhancing the quality of the generated data; improving the generator (e.g., modifying PLM parameters, prompt engineering) is an orthogonal direction and can be applied jointly with our method. Our main contributions are threefold. First, we propose a novel end-to-end framework to construct a high-quality, noise-free synthetic dataset without the aid of any human annotation (§3). Second, we offer theoretical justification (§4) and empirical verification (§5.2) of SUNGEN's ability to reliably recover a noise-free dataset using synthetic data only.
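The alternating bi-level scheme can be sketched on a toy problem. This is a minimal illustration, not the paper's implementation: the task model is a logistic regression, the noise-robust outer loss is MAE, the meta-gradient is a one-step unrolled approximation, and all hyperparameters and the synthetic toy data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "synthetic dataset": two Gaussian clusters, with 30% of labels
# flipped to mimic corrupted pseudo-labels from a PLM.
n = 200
X = np.vstack([rng.normal(+2.0, 0.7, (n // 2, 2)),
               rng.normal(-2.0, 0.7, (n // 2, 2))])
y_true = np.array([1] * (n // 2) + [0] * (n // 2))
noisy = rng.random(n) < 0.3
y = np.where(noisy, 1 - y_true, y_true)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(2)   # tiny task model (logistic regression weights)
s = np.zeros(n)       # per-sample weight logits, w_i = sigmoid(s_i)
eta, lr_s = 0.5, 5.0  # illustrative inner/outer step sizes

for _ in range(600):
    w = sigmoid(s)
    p = sigmoid(X @ theta)
    # Inner step: one gradient step on the weighted cross-entropy.
    g_inner = (w * (p - y)) @ X / n
    theta_new = theta - eta * g_inner
    # Outer step: noise-robust MAE loss on the SAME noisy data,
    # differentiated through the one-step inner update.
    p_new = sigmoid(X @ theta_new)
    g_outer = (np.sign(p_new - y) * p_new * (1 - p_new)) @ X  # dMAE/dtheta'
    # dtheta'/dw_i = -(eta/n) * (p_i - y_i) * x_i  (exact for one step)
    dL_dw = -(eta / n) * (p - y) * (X @ g_outer)
    s -= lr_s * dL_dw * w * (1 - w)  # chain rule through w = sigmoid(s)
    theta = theta_new                # alternate: keep the updated model

w = sigmoid(s)
w_clean_mean = w[~noisy].mean()  # learned weight on correctly labeled data
w_noisy_mean = w[noisy].mean()   # learned weight on label-flipped data
```

On this toy setup the learned weights concentrate on the correctly labeled samples, mirroring the intended behavior: the robust outer loss favors weightings under which the inner model fits the clean majority rather than the flipped labels.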
Third, we conduct experiments on eight text classification datasets and show our method outperforms the current baseline by large margins ( §5.2).

2. BACKGROUND

2.1 PROMPT-BASED ZERO-SHOT LEARNING

We first introduce prompt-based zero-shot prediction (named PROMPTING). Given a manually-designed prompt T(•) and a query example x_i ∈ X, PROMPTING constructs a sentence T(x_i) (e.g.,



Figure 1: Training and testing accuracy of an LSTM model trained on the synthetic dataset. After training for more epochs, the testing performance of ZEROGEN starts to deteriorate significantly, indicating that the model starts to fit the erroneous data.
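The PROMPTING baseline of §2.1 classifies a query directly with the PLM by scoring label words inside a filled-in template. The sketch below conveys the mechanism only: the template, the verbalizer mapping, and `toy_plm_score` (a keyword lexicon) are stand-ins for a real PLM's likelihood scoring, not any system from this paper.

```python
# Illustrative cloze template and verbalizer (label-word) mapping.
TEMPLATE = "{x} All in all, the movie was {w}."
VERBALIZER = {"positive": "great", "negative": "terrible"}

POSITIVE_CUES = {"wonderful", "excellent", "loved", "best"}
NEGATIVE_CUES = {"boring", "awful", "hated", "worst"}

def toy_plm_score(sentence):
    """Stand-in for a PLM's log-likelihood of the filled-in sentence:
    reward agreement between the verbalizer word and sentiment cues."""
    tokens = set(sentence.lower().replace(".", "").split())
    score = 0.0
    if "great" in tokens:
        score += len(tokens & POSITIVE_CUES) - len(tokens & NEGATIVE_CUES)
    if "terrible" in tokens:
        score += len(tokens & NEGATIVE_CUES) - len(tokens & POSITIVE_CUES)
    return score

def prompting_predict(x):
    """Zero-shot prediction: pick the label whose verbalized template
    scores highest under the (mock) language model."""
    return max(VERBALIZER,
               key=lambda lbl: toy_plm_score(TEMPLATE.format(x=x, w=VERBALIZER[lbl])))
```

Every query must pass through the PLM at inference time, which is why data-generation-based methods that distill into a tiny task model achieve much lower latency.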

