NEAR-OPTIMAL CORESETS FOR ROBUST CLUSTERING

Abstract

We consider robust clustering problems in R^d, specifically k-clustering problems (e.g., k-MEDIAN and k-MEANS) with m outliers, where the cost for a given center set C ⊂ R^d aggregates the distances from C to all but the furthest m data points, instead of all points as in classical clustering. We focus on the ϵ-coreset for robust clustering, a small proxy of the dataset that preserves the clustering cost within ϵ-relative error for all center sets. Our main result is an ϵ-coreset of size O(m + poly(kϵ^{-1})) that can be constructed in near-linear time. This significantly improves previous results, which either suffer an exponential dependence on (m + k) (Feldman & Schulman, 2012) or have a weaker bi-criteria guarantee (Huang et al., 2018). Furthermore, we show that this linear dependence on m is near-optimal, and the fact that it is isolated from other factors may be crucial for dealing with a large number of outliers. We construct our coresets by adapting to the outlier setting a recent framework (Braverman et al., 2022) designed for capacity-constrained clustering, overcoming a new challenge: the terms that participate in the cost, particularly the excluded m outlier points, depend on the center set C. We validate our coresets on various datasets, and we observe a superior size-accuracy tradeoff compared with popular baselines, including uniform sampling and sensitivity sampling. We also achieve a significant speedup of existing approximation algorithms for robust clustering using our coresets.

1. INTRODUCTION

We give near-optimal ϵ-coresets for k-MEDIAN and k-MEANS (and more generally, (k, z)-CLUSTERING) with outliers in Euclidean spaces. Clustering is a central task in data analysis, and popular center-based clustering methods, such as k-MEDIAN and k-MEANS, have been widely applied. In the vanilla version of these clustering problems, given a center set of k points C, the objective is usually defined as the sum of (squared) distances from each data point to C. This formulation, while intuitive and simple to use, has severe robustness issues when dealing with noisy/adversarial data; for instance, an adversary may add a few noisy outlier points that are far from the data, "fooling" the clustering algorithm into wrongly placing centers towards those points in order to minimize the cost. Indeed, the robustness issue introduced by outliers has become a major challenge in data science and machine learning, and it has attracted extensive algorithmic research on the topic (Charikar et al., 2001; Chen, 2008; Candès et al., 2011; Chawla & Gionis, 2013; Mount et al., 2014; Gupta et al., 2017; Statman et al., 2020; Ding & Wang, 2020). Moreover, similar issues have also been studied from the angle of statistics (Huber & Ronchetti, 2009).

Robust Clustering

We consider robust versions of these clustering problems, particularly a natural and popular variant called clustering with outliers (Charikar et al., 2001). Specifically, given a dataset X ⊂ R^d, the (k, z, m)-ROBUST CLUSTERING problem is to find a center set C ⊂ R^d of k points that minimizes the objective

cost_z^{(m)}(X, C) := min_{L ⊆ X : |L| = m} Σ_{x ∈ X \ L} (dist(x, C))^z.  (1)

Here, L denotes the set of outliers, dist denotes the Euclidean distance, and dist(x, C) := min_{c ∈ C} dist(x, c). Intuitively, the outliers capture the furthest points in a cluster, which are "not well-clustered" and are most likely to be noise.

Computational Challenges

However, the presence of outliers introduces significant computational challenges, and it has inspired a series of works designing efficient algorithms for robust clustering. On one hand, approximation algorithms with strict accuracy guarantees have been obtained (Charikar et al., 2001; Chen, 2008; Gupta et al., 2017; Krishnaswamy et al., 2018; Feng et al., 2019; Friggstad et al., 2019; Zhang et al., 2021), but their running time is a high-degree polynomial, which is impractical. On the other hand, more scalable algorithms were also proposed (Bhaskara et al., 2019; Deshpande et al., 2020); however, their approximation ratio is worse, and a more severe limitation is that their guarantee usually violates the required number of outliers. Moreover, to the best of our knowledge, we are not aware of works that design algorithms in sublinear models, such as streaming and distributed computing.

Coresets

In order to tackle these computational challenges, we consider coresets for robust clustering. Roughly, an ϵ-coreset is a tiny proxy of the massive input dataset, on which the clustering objective is preserved within ϵ-relative error for every potential center set.
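To make the objective concrete, the cost in Eq. (1) can be evaluated directly by discarding the m largest point-to-center distances. The following is a minimal NumPy sketch; the function name `robust_cost` is ours for illustration, not from the paper:

```python
import numpy as np

def robust_cost(X, C, m, z):
    """(k, z, m)-robust clustering cost of Eq. (1): aggregate dist(x, C)^z
    over all points of X except the m furthest ones (the outlier set L)."""
    # dist(x, C) = min_{c in C} ||x - c||, computed for every x via broadcasting
    d = np.min(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
    kept = np.sort(d)[: len(d) - m]  # exclude the m largest distances
    return np.sum(kept ** z)
```

For instance, with a single far-away point and m = 1, that point is excluded and no longer dominates the cost, which is exactly the robustness the formulation is designed to provide.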
Existing algorithms may enjoy a significant speedup when run on top of a coreset, and more importantly, coresets can be used to derive sublinear algorithms, including streaming algorithms (Har-Peled & Mazumdar, 2004), distributed algorithms (Balcan et al., 2013) and dynamic algorithms (Henzinger & Kale, 2020), which are highly useful for dealing with massive datasets. Stemming from Har-Peled & Mazumdar (2004), the study of coresets for the non-robust version of clustering, i.e., (k, z)-CLUSTERING, has been very fruitful (Feldman & Langberg, 2011; Feldman et al., 2020; Sohler & Woodruff, 2018; Huang & Vishnoi, 2020; Braverman et al., 2021; Cohen-Addad et al., 2021b; Braverman et al., 2022), and the state-of-the-art coreset achieves a size of poly(kϵ^{-1}), independent of d and n. However, coresets for robust clustering are much less understood. Existing results either suffer an exponential (k + m)^{k+m} factor in the coreset size (Feldman & Schulman, 2012), or need to violate the required number of outliers (Huang et al., 2018). This gap leads to the following question: can we efficiently construct an ϵ-coreset of size poly(m, k, ϵ^{-1}) for (k, z, m)-ROBUST CLUSTERING (without violating the number of outliers)?

1.1 OUR CONTRIBUTIONS

Our main contribution, stated in Theorem 1.1, is a near-optimal coreset for robust clustering, affirmatively answering the above question. In fact, the size of our coreset is not merely poly(m): the dependence on m is linear and isolated from all other factors, which can be very useful when the number of outliers m is large.

Theorem 1.1 (Informal; see Theorem 3.1). There exists a near-linear time algorithm that, given a dataset X ⊂ R^d, z ≥ 1, ϵ ∈ (0, 0.3) and integers k, m ≥ 1, computes an ϵ-coreset of X for (k, z, m)-ROBUST CLUSTERING of size O(m) + 2^{O(z log z)} · Õ(k^3 ϵ^{-3z-2}), with constant probability.

Our coreset improves over previous results in several aspects.
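To illustrate what the ϵ-coreset guarantee means operationally, the sketch below evaluates a weighted robust cost on a candidate coreset and compares it against the full-data cost; a valid ϵ-coreset must keep this relative error below ϵ for every center set. Both functions, and the greedy removal of m units of weight from the furthest points, are our own simplified illustration of the guarantee, not the paper's construction:

```python
import numpy as np

def weighted_robust_cost(X, w, C, m, z):
    """Weighted analogue of the robust cost: remove m units of weight
    from the points furthest from C, then sum the weighted z-th powers."""
    d = np.min(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
    w = w.astype(float).copy()
    remaining = float(m)
    for i in np.argsort(-d):          # furthest points first
        drop = min(w[i], remaining)
        w[i] -= drop
        remaining -= drop
        if remaining == 0:
            break
    return np.sum(w * d ** z)

def relative_error(X, S, w, C, m, z):
    """|cost on weighted coreset S - cost on full data X| / cost on X."""
    full = weighted_robust_cost(X, np.ones(len(X)), C, m, z)
    approx = weighted_robust_cost(S, w, C, m, z)
    return abs(approx - full) / full
```

An ϵ-coreset (S, w) is one for which `relative_error` stays at most ϵ for all center sets C of size k; the challenge the paper addresses is producing such an (S, w) whose size is small.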
Notably, compared with Feldman & Schulman (2012), our result avoids their exponential (k + m)^{k+m} factor in the coreset size, which is likely to be impractical since typical values of k and/or m may be O(log n). In fact, as observed in our experiments, the value of m can be as large as 1500 in real datasets, so the dependence in Feldman & Schulman (2012) is prohibitively large, which leads to inferior practical performance.
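A quick back-of-the-envelope computation shows how prohibitive the (k + m)^{k+m} factor is at this scale. The parameter values below (k = 10, ϵ = 0.1, z = 1, and m = 1500 matching the largest value observed above) are our illustrative assumptions, and constants and logarithmic factors are dropped from both bounds:

```python
import math

# Illustrative parameters (assumed values, with m = 1500 as observed above)
k, m, eps, z = 10, 1500, 0.1, 1

# log10 of the (k + m)^(k + m) factor from Feldman & Schulman (2012)
old_log10 = (k + m) * math.log10(k + m)

# the O(m) + k^3 / eps^(3z + 2) bound of Theorem 1.1, constants dropped
new_bound = m + k**3 / eps**(3 * z + 2)

print(f"(k+m)^(k+m) has about {old_log10:.0f} decimal digits")
print(f"m + k^3/eps^(3z+2) is about {new_bound:.2e}")
```

The exponential factor alone is a number with thousands of decimal digits, whereas the new bound is merely linear in m plus a fixed polynomial in k and 1/ϵ.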



The parameter z captures various well-studied (robust) clustering problems, including (k, m)-ROBUST MEDIAN (where z = 1) and (k, m)-ROBUST MEANS (where z = 2). On the other hand, if the number of outliers is m = 0, then the robust clustering problem falls back to its non-robust version. The (k, z, m)-ROBUST CLUSTERING problem has been widely studied in the literature (Chen, 2008; Gupta et al., 2017; Krishnaswamy et al., 2018; Friggstad et al., 2019; Statman et al., 2020). Moreover, the idea of removing outliers has also been considered in other machine learning tasks, e.g., robust PCA (Bhaskara & Kumar, 2018) and robust regression (Rousseeuw & Leroy, 1987; Mount et al., 2014).

