NEAR-OPTIMAL CORESETS FOR ROBUST CLUSTERING

Abstract

We consider robust clustering problems in R^d, specifically k-clustering problems (e.g., k-MEDIAN and k-MEANS) with m outliers, where the cost for a given center set C ⊂ R^d aggregates the distances from C to all but the farthest m data points, instead of all points as in classical clustering. We focus on the ϵ-coreset for robust clustering, a small proxy of the dataset that preserves the clustering cost within ϵ-relative error for all center sets. Our main result is an ϵ-coreset of size O(m + poly(kϵ^{-1})) that can be constructed in near-linear time. This significantly improves previous results, which either suffer an exponential dependence on (m + k) (Feldman & Schulman, 2012) or offer only a weaker bi-criteria guarantee (Huang et al., 2018). Furthermore, we show that this linear dependence on m is nearly optimal, and the fact that it is isolated from the other factors may be crucial for handling a large number of outliers. We construct our coresets by adapting a recent framework (Braverman et al., 2022), originally designed for capacity-constrained clustering, to the outlier setting, overcoming a new challenge: the terms participating in the cost, particularly the m excluded outlier points, depend on the center set C. We validate our coresets on various datasets and observe a superior size-accuracy tradeoff compared with popular baselines, including uniform sampling and sensitivity sampling. We also achieve a significant speedup of existing approximation algorithms for robust clustering using our coresets.

1. INTRODUCTION

We give near-optimal ϵ-coresets for k-MEDIAN and k-MEANS (and more generally, (k, z)-CLUSTERING) with outliers in Euclidean spaces. Clustering is a central task in data analysis, and popular center-based clustering methods, such as k-MEDIAN and k-MEANS, have been widely applied. In the vanilla version of these clustering problems, given a center set of k points C, the objective is usually defined as the sum of (squared) distances from each data point to C. This formulation, while intuitive and simple to use, has severe robustness issues when dealing with noisy or adversarial data; for instance, an adversary may add a few noisy outlier points that are far from the true clusters to "fool" the clustering algorithm into wrongly placing centers towards those points in order to reduce the cost. Indeed, such robustness issues introduced by outliers have become a major challenge in data science and machine learning, and they have attracted extensive algorithmic research on the topic (Charikar et al., 2001; Chen, 2008; Candès et al., 2011; Chawla & Gionis, 2013; Mount et al., 2014; Gupta et al., 2017; Statman et al., 2020; Ding & Wang, 2020). Moreover, similar issues have also been studied from the angle of statistics (Huber & Ronchetti, 2009).

Robust Clustering

We consider robust versions of these clustering problems, particularly a natural and popular variant called clustering with outliers (Charikar et al., 2001). Specifically, given a dataset X ⊂ R^d, the (k, z, m)-ROBUST CLUSTERING problem is to find a center set C ⊂ R^d of k
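To make the robust objective concrete, the following is a minimal sketch of the (k, z, m)-robust cost described in the abstract: sum the z-th powers of the distances from each point to its nearest center, after discarding the m points farthest from the center set. The function name `robust_cost` and the NumPy-based implementation are illustrative, not from the paper.

```python
import numpy as np

def robust_cost(X, C, m, z=1):
    """Robust (k, z, m)-clustering cost: sum of dist(x, C)^z over all
    points of X except the m points farthest from the center set C.

    X: (n, d) array of data points; C: (k, d) array of centers.
    (Illustrative sketch; names are not from the paper.)
    """
    # Distance from each point to its nearest center.
    d = np.min(np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2), axis=1)
    # Discard the m largest distances (the outliers), keep the rest.
    kept = np.sort(d)[: len(X) - m]
    return np.sum(kept ** z)
```

Setting z = 1 recovers k-MEDIAN with m outliers and z = 2 recovers k-MEANS with m outliers; with m = 0 it is the vanilla clustering cost. Note that which m points are excluded depends on C, which is exactly the dependence that complicates coreset construction for this objective.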

