INTERACTIVE WEAK SUPERVISION: LEARNING USEFUL HEURISTICS FOR DATA LABELING

Abstract

Obtaining large annotated datasets is critical for training successful machine learning models, and it is often a bottleneck in practice. Weak supervision offers a promising alternative, producing labeled datasets without ground-truth annotations by generating probabilistic labels from multiple noisy heuristics. This process can scale to large datasets and has demonstrated state-of-the-art performance in domains as diverse as healthcare and e-commerce. One practical issue with learning from user-generated heuristics is that creating them requires creativity, foresight, and domain expertise from those who hand-craft them, a process that can be tedious and subjective. We develop the first framework for interactive weak supervision, in which a method proposes heuristics and learns from user feedback on each proposed heuristic. Our experiments demonstrate that only a small number of feedback iterations are needed to train models that achieve highly competitive test set performance without access to ground-truth training labels. We conduct user studies, which show that users are able to provide effective feedback on heuristics and that test set results track the performance of simulated oracles.

1. INTRODUCTION

The performance of supervised machine learning (ML) hinges on the availability of labeled data in sufficient quantity and quality. However, labeled data for applications of ML can be scarce, and the common process of obtaining labels by having annotators inspect individual samples is often expensive and time-consuming. Additionally, this cost is frequently exacerbated by factors such as privacy concerns, required expert knowledge, and shifting problem definitions. Weak supervision provides a promising alternative, reducing the need for humans to hand-label large datasets to train ML models (Riedel et al., 2010; Hoffmann et al., 2011; Ratner et al., 2016; Dehghani et al., 2018). A recent approach called data programming (Ratner et al., 2016) combines multiple weak supervision sources by using an unsupervised label model to estimate the latent true class label, an idea with close connections to modeling workers in crowd-sourcing (Dawid & Skene, 1979; Karger et al., 2011; Dalvi et al., 2013; Zhang et al., 2014). The approach enables subject matter experts to specify labeling functions (LFs), i.e. functions that encode domain knowledge and noisily annotate subsets of the data (such as user-specified heuristics or external knowledge bases), instead of needing to inspect and label individual samples. These weak supervision approaches have been applied to a wide variety of data types, such as MRI sequences and unstructured text, and in various domains, such as healthcare and e-commerce (Fries et al., 2019; Halpern et al., 2014; Bach et al., 2019; Ré et al., 2020). Not only does the use of multiple sources of weak supervision provide a scalable framework for creating large labeled datasets, it can also be viewed as a vehicle for incorporating high-level, conceptual feedback into the data labeling process. In data programming, each LF is an imperfect but reasonably accurate heuristic, such as a pre-trained classifier or a keyword lookup.
For example, for the popular 20 newsgroups dataset, an LF to identify the class 'sci.space' may look for the token 'launch' in documents, and would be right about 70% of the time. While data programming can be very effective when done right, experts may spend a significant amount of time designing the weak supervision sources (Varma & Ré, 2018) and must often inspect samples at random to generate ideas (Cohen-Wang et al., 2019). In our 20 newsgroups example, we might randomly see a document mentioning 'Salman Rushdie' and realize that the name of a famous atheist could be a good heuristic to identify posts in 'alt.atheism'. While such a heuristic seems obvious after the fact, we have to chance upon the right documents to generate these ideas. In practice, coming up with effective LFs becomes difficult after the first few: substantial foresight (Ramos et al., 2020) is required to create a new function that applies to a non-negligible subset of the given data, is novel, and adds predictive value. We propose a new approach for training supervised ML models with weak supervision through an interactive process, supporting domain experts in the fast discovery of good LFs. The method actively queries users for feedback on candidate LFs, from which a model learns to identify LFs likely to have good accuracy. Upon completion, our approach produces a final set of LFs. We use this set to estimate the latent class label via an unsupervised label model and train a final, weakly supervised end classifier using a noise-aware loss function on the estimated labels, as in Ratner et al. (2016). The approach relies on the observation that many applications allow heuristics of varying quality to be generated at scale (similar to Varma & Ré (2018)), and that experts can exercise good judgment in identifying LFs that have reasonable accuracy. The full pipeline of the proposed approach, termed Interactive Weak Supervision (IWS), is illustrated in Fig. 1.
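To make the notion of a labeling function concrete, the following is a minimal sketch of keyword-based LFs in the spirit of the 20 newsgroups example above. All names here (make_keyword_lf, majority_vote) are hypothetical, and the majority vote is a simplified stand-in for the unsupervised label model of data programming, which additionally learns per-LF accuracies.

```python
# Illustrative sketch: keyword-based labeling functions (LFs) that vote or
# abstain on each document, aggregated by a simple majority vote. Data
# programming replaces this vote with a learned, unsupervised label model.

ABSTAIN, NEG, POS = 0, -1, 1

def make_keyword_lf(keyword, label):
    """Return an LF that votes `label` if `keyword` occurs in the document,
    and abstains otherwise."""
    def lf(doc):
        return label if keyword in doc.lower() else ABSTAIN
    return lf

# Hypothetical LFs mirroring the heuristics discussed in the text.
lf_launch = make_keyword_lf("launch", POS)           # hints at 'sci.space'
lf_rushdie = make_keyword_lf("salman rushdie", NEG)  # hints at 'alt.atheism'

def majority_vote(doc, lfs):
    """Aggregate the non-abstaining LF votes; abstain on ties or no votes."""
    votes = [lf(doc) for lf in lfs]
    pos, neg = votes.count(POS), votes.count(NEG)
    if pos == neg:
        return ABSTAIN
    return POS if pos > neg else NEG

label = majority_vote("NASA confirmed the launch window.",
                      [lf_launch, lf_rushdie])  # → POS
```

Note that each LF abstains outside its keyword's support, so any single LF labels only a subset of the data; coverage comes from combining many such heuristics.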
Our contributions are:
1. We propose, to the best of our knowledge, the first interactive method for weak supervision in which the queries to be annotated are not data points but labeling functions. This approach automates the discovery of useful data labeling heuristics.
2. We conduct experiments with real users on three classification tasks, using both text and image datasets. Our results support our modeling assumptions, demonstrate competitive test set performance of the downstream end classifier, and show that users can provide accurate feedback on automatically generated LFs.
3. In our results, IWS outperforms standard active learning, i.e. it achieves better test set performance with a smaller number of user queries. In text experiments with real users, IWS after 200 LF annotations achieves a mean test set AUC that active learning matches only after at least three times as many annotated data points. In addition, the average user response time for LF queries was shorter than for active learning queries on data points.
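The interactive query loop described in the introduction can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: iws_loop, user_is_useful, and fit_feedback_model are illustrative names, and the acquisition rule (query the candidate LF the feedback model scores highest) stands in for whatever acquisition strategy IWS actually uses.

```python
# Hypothetical sketch of an interactive weak supervision loop: given a pool
# of candidate LFs, repeatedly query a user oracle about one LF, refit a
# model on the accumulated feedback, and collect the LFs judged useful.
import random

def iws_loop(candidate_lfs, user_is_useful, n_queries, fit_feedback_model):
    """user_is_useful(lf) -> bool        : expert feedback on one LF
    fit_feedback_model(labeled) -> score : function scoring unqueried LFs"""
    labeled = []   # (lf, feedback) pairs gathered so far
    useful = []    # LFs the user judged reasonably accurate
    pool = list(candidate_lfs)
    for _ in range(n_queries):
        if not pool:
            break
        if labeled:
            score = fit_feedback_model(labeled)
            lf = max(pool, key=score)    # query the most promising candidate
        else:
            lf = random.choice(pool)     # cold start: query at random
        pool.remove(lf)
        feedback = user_is_useful(lf)
        labeled.append((lf, feedback))
        if feedback:
            useful.append(lf)
    return useful  # passed on to the label model and end classifier
```

The returned set of useful LFs would then be fed to the unsupervised label model to produce probabilistic training labels for the end classifier.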

2. RELATED WORK

Active strategies for weak supervision sources have largely focused on combining data programming with traditional active learning on data points, while our work has similarities to active learning on features (Druck et al., 2009) and active learning of virtual evidence (Lang & Poon, 2021). In Nashaat et al. (2018), a pool of samples on which LFs disagree is created, and active learning strategies are then applied to obtain labels for some of these samples. In Cohen-Wang et al. (2019), samples where LFs abstain or disagree most are selected and presented to users in order to inspire the creation of new LFs. In Hancock et al. (2018), natural language explanations provided during text labeling are used to generate heuristics: the proposed system uses a semantic parser to convert explanations into logical forms, which represent labeling functions.



Code is available at https://github.com/benbo/interactive-weak-supervision



Figure 1: Interactive Weak Supervision (IWS) helps experts discover good labeling functions (LFs).

