MACHINE LEARNING ALGORITHMS FOR DATA LABELING: AN EMPIRICAL EVALUATION

Abstract

The lack of labeled data is a major problem in both research and industrial settings, since obtaining labels is often an expensive and time-consuming activity. In recent years, several machine learning algorithms have been developed to assist and perform automated labeling in partially labeled datasets. While many of these algorithms are available in open-source packages, no research has investigated how these algorithms compare to each other across different types of datasets and with different percentages of available labels. To address this problem, this paper empirically evaluates and compares seven algorithms for automated labeling in terms of accuracy. We investigate how these algorithms perform on six different and well-known datasets covering three types of data: images, texts, and numerical values. We evaluate these algorithms under two experimental conditions, with 10% and 50% of the labels available in the dataset. Each algorithm, in each dataset and for each experimental condition, is evaluated independently ten times with different random seeds. The results are analyzed and the algorithms are compared using a Bayesian Bradley-Terry model. The results indicate that while label spreading with K-nearest neighbors performs best in the aggregated results, the active learning algorithms query-by-committee (QBC) and uncertainty sampling perform better when only 10% of the labels are available. These results can help machine learning practitioners choose suitable machine learning algorithms to label their data.

1. INTRODUCTION

Supervised learning is the most commonly used machine learning paradigm, but it comes with problems, both specific to supervised learning and common to machine learning in general. The first problem is that machine learning requires large amounts of data. Secondly, supervised learning needs labels in the data. In a case study performed with industry, several labeling issues were found (Anonymous, 2020a). A recent systematic literature review investigated what types of machine learning algorithms exist to make labeling easier, specifically the use of semi-supervised learning and active learning for automatic labeling of data (Anonymous, 2020b). From those results, the authors concluded which active and semi-supervised learning algorithms are the most popular and which data types they can be applied to. However, even though work has been done on active and semi-supervised learning, these learning paradigms are still very new for many companies and consequently seldom used. Using a simulation study, we evaluated seven semi-supervised and active learning algorithms on six datasets of different types: numerical, text, and image data. Applying a Bayesian Bradley-Terry model, we ranked the algorithms according to accuracy and effort. The contribution of this paper is a taxonomy of automatic labeling algorithms and an empirical evaluation of the algorithms in the taxonomy across two dimensions: performance, how accurate the algorithm is, and effort, how much manual work is required from the data scientist. The remainder of this paper is organized as follows. In the next section we provide an overview of semi-supervised and active learning algorithms and how they work. In Section 3 we describe our study: how we performed the simulations, which datasets and source code we used, and which metrics we used to evaluate performance, effort, and applicability.
In Section 4 we provide the results from the simulation study, and finally, we interpret the results and conclude the paper in Section 5.

2. BACKGROUND

2.1. ACTIVE LEARNING

Suppose a large unlabeled dataset is to be used for training a classification algorithm. Active learning (AL) selects the points to be labeled according to a measure of informativeness called a query strategy. After the selected instances have been labeled with the help of an oracle, the machine learning algorithm is trained with this newly labeled data. If the learner judges that the accuracy of the algorithm is too low and can be improved, the learner will request new labels or replace some of the old ones. The algorithm is then re-trained and evaluated once again. This procedure continues iteratively until some stopping criterion is reached. As a reference on AL, the reader is referred to sources such as (Settles, 2012). We now present the query strategies used in this paper.

Uncertainty sampling is, according to (Anonymous, 2020b), the most commonly used active learning strategy. The idea of this approach is to query the instances we are least certain about and label these. Uncertainty sampling strategies are widely used and work especially well for probabilistic algorithms such as logistic regression (Lewis & Catlett, 1994). Lewis & Catlett (1994) concluded that uncertainty sampling can outperform random sampling by evaluating and comparing the two on a text classification dataset, and (Joshi et al., 2009) reached the same conclusion on image data by comparing the accuracy scores of two uncertainty-sampling-based methods and random sampling.

Query-by-committee (QBC) means that we train a committee of classifiers and then query the instance on which the committee disagrees most. The newly labeled instance is added to the labeled training data, the algorithm is retrained on the new training set, and the procedure is repeated. What is important here is the way we measure disagreement.
Some ways to measure disagreement are entropy, vote entropy, and KL divergence (Settles, 2012). QBC is relatively straightforward to implement and is applicable to any basic machine learning model. (Seung et al., 1992) and (Freund et al., 1997) were the first to formulate QBC. Seung et al. (1992) use a Monte Carlo simulation to show that QBC can outperform random sampling. Random sampling is when the learner queries instances randomly rather than according to any strategy. If a learner does not choose the query strategy carefully with respect to the data and the machine learning algorithm, active learning might not outperform choosing instances randomly.
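To make the two query strategies concrete, the following is a minimal sketch of least-confidence uncertainty sampling and vote-entropy QBC using scikit-learn. The classifiers, committee composition, and toy pool below are illustrative assumptions, not the exact setup used in this study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

def uncertainty_query(clf, X_pool):
    """Least-confidence uncertainty sampling: pick the pool instance
    whose most probable class has the lowest predicted probability."""
    proba = clf.predict_proba(X_pool)
    return int(np.argmin(proba.max(axis=1)))

def qbc_vote_entropy_query(committee, X_pool):
    """Query-by-committee: pick the instance with the highest
    vote entropy over the committee members' predicted labels."""
    votes = np.stack([m.predict(X_pool) for m in committee], axis=1)
    entropies = []
    for row in votes:
        _, counts = np.unique(row, return_counts=True)
        p = counts / counts.sum()
        entropies.append(-(p * np.log(p)).sum())
    return int(np.argmax(entropies))

# Toy labeled set and unlabeled pool (illustrative only).
X, y = make_classification(n_samples=200, random_state=0)
X_lab, y_lab, X_pool = X[:20], y[:20], X[20:]

clf = LogisticRegression().fit(X_lab, y_lab)
committee = [m.fit(X_lab, y_lab) for m in (LogisticRegression(),
             RandomForestClassifier(random_state=0), GaussianNB())]

i_unc = uncertainty_query(clf, X_pool)            # index sent to the oracle
i_qbc = qbc_vote_entropy_query(committee, X_pool)
```

In a full active learning loop, the oracle's label for the queried instance would be appended to the labeled set and the models retrained, repeating until the stopping criterion is met.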

2.2. SEMI-SUPERVISED LEARNING

Semi-supervised machine learning is a class of machine learning algorithms that utilizes both labeled and unlabeled data. Semi-supervised algorithms are trained on both the unlabeled and the labeled data, and in some cases they even outperform supervised classifiers. For more information on semi-supervised learning we refer the reader to (Zhu, 2005). According to (Anonymous, 2020b), the second most popular semi-supervised learning algorithms are the graph-based algorithms. The idea of these algorithms is to build a graph from the training data, containing both labeled and unlabeled instances. Let each pair (x_i, y_i) and (x_j, y_j) represent a vertex and its corresponding label, and let the edge weight w_ij represent the weight of the edge between vertex i and vertex j. The larger w_ij becomes, the more similar the labels of the two vertices are. The question is then how to compute the weight w_ij. Two examples of graph-based methods are label propagation and label spreading (Zha et al., 2009). Label propagation was first introduced in (Zhu & Ghahramani, 2002) and presented as follows. Given labeled and unlabeled data, define the weight matrix w_ij. The probabilistic transition matrix T is defined as the probability of jumping from vertex j to vertex i:

T_ij := P(j → i) = w_ij / Σ_{k=1}^{l+u} w_kj,

where l and u are the numbers of labeled and unlabeled instances, respectively.
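Both graph-based methods discussed above are available in scikit-learn, where unlabeled instances are marked with -1 and the transition matrix is built internally from the kernel weights. The snippet below is a minimal sketch; the RBF/KNN kernel settings and the Iris toy data are assumptions for illustration, not the configuration used in this study.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

X, y = load_iris(return_X_y=True)

# Keep only 5 labels per class; scikit-learn marks unlabeled points with -1.
y_partial = np.full_like(y, -1)
for c in np.unique(y):
    y_partial[np.where(y == c)[0][:5]] = c
unlabeled = y_partial == -1

# Both algorithms build a graph over all (labeled + unlabeled) instances
# and propagate label information along its weighted edges.
lp = LabelPropagation(kernel="rbf", gamma=20).fit(X, y_partial)
ls = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)

# transduction_ holds the labels inferred for every training instance.
acc_lp = (lp.transduction_[unlabeled] == y[unlabeled]).mean()
acc_ls = (ls.transduction_[unlabeled] == y[unlabeled]).mean()
```

The label spreading variant with a KNN kernel corresponds to the "label spreading with K-nearest neighbors" configuration evaluated in this paper.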

