COVERAGE-CENTRIC CORESET SELECTION FOR HIGH PRUNING RATES

Abstract

One-shot coreset selection aims to select a representative subset of the training data, given a pruning rate, that can later be used to train future models while retaining high accuracy. State-of-the-art coreset selection methods pick the highest-importance examples based on an importance metric and are found to perform well at low pruning rates. However, at high pruning rates, they suffer from a catastrophic accuracy drop, performing worse than even random sampling. This paper explores the reasons behind this accuracy drop both theoretically and empirically. We first propose a novel metric to measure the coverage of a dataset on a specific distribution by extending the classical geometric set cover problem to a distribution cover problem. This metric helps explain why coresets selected by SOTA methods at high pruning rates perform worse than random sampling: they provide worse data coverage. We then propose a novel one-shot coreset selection method, Coverage-centric Coreset Selection (CCS), that jointly considers overall data coverage of a distribution as well as the importance of each example. We evaluate CCS on five datasets and show that, at high pruning rates (e.g., 90%), it achieves significantly better accuracy than previous SOTA methods (e.g., at least 19.56% higher on CIFAR10) as well as random selection (e.g., 7.04% higher on CIFAR10), and comparable accuracy at low pruning rates. We make our code publicly available on GitHub 1 .

1. INTRODUCTION

One-shot coreset selection aims to select a small subset of the training data that can later be used to train future models while retaining high accuracy (Coleman et al., 2019; Toneva et al., 2018). One-shot coreset selection is important because full datasets can be massive in many applications, and training on them can be computationally expensive. A favored way to select coresets is to assign an importance score to each example and select the more important examples to form the coreset (Paul et al., 2021; Sorscher et al., 2022). Unfortunately, current SOTA methods for one-shot coreset selection suffer a catastrophic accuracy drop under high pruning rates (Guo et al., 2022; Paul et al., 2021). For example, on CIFAR-10, a SOTA method (the forgetting score (Toneva et al., 2018)) achieves 95.36% accuracy with a 30% pruning rate, but that accuracy drops to only 34.03% at a 90% pruning rate, which is significantly worse than random coreset selection. This accuracy drop is currently unexplained and limits the extent to which coresets can be practically reduced in size.

In this paper, we provide both theoretical and empirical insights into the reasons for the catastrophic accuracy drop and propose a novel coreset selection algorithm that overcomes this issue. We first extend the classical geometric set cover problem to a density-based distribution cover problem and provide theoretical bounds on model loss as a function of the properties of a coreset providing specific coverage on a distribution. Furthermore, based on this theoretical analysis, we propose a novel metric, AUC_pr, which allows us to quantify how a dataset covers a specific distribution (Section 3.1). With the proposed metric, we show that coresets selected by SOTA methods at high pruning rates have much worse data coverage than random pruning, suggesting a linkage between poor data coverage



1 https://github.com/haizhongzheng/Coverage-centric-coreset-selection

