ON THE PERILS OF CASCADING ROBUST CLASSIFIERS

Abstract

Ensembling certifiably robust neural networks is a promising approach for improving the certified robust accuracy of neural models. Black-box ensembles, which assume only query access to the constituent models (and their robustness certifiers) during prediction, are particularly attractive due to their modular structure. Cascading ensembles are a popular instance of black-box ensembles that appear to improve certified robust accuracies in practice. However, we show that the robustness certifier used by a cascading ensemble is unsound: when a cascading ensemble is certified as locally robust at an input x (with respect to a perturbation bound ε), there can be inputs x′ in the ε-ball centered at x such that the cascade's prediction at x′ differs from its prediction at x, and thus the ensemble is not locally robust. Our theoretical findings are accompanied by empirical results that further demonstrate this unsoundness. We present cascade attack (CasA), an adversarial attack against cascading ensembles, and show that: (1) there exists an adversarial input for up to 88% of the samples where the ensemble claims to be certifiably robust and accurate; and (2) the accuracy of a cascading ensemble under our attack is as low as 11% when it claims to be certifiably robust and accurate on 97% of the test set. Our work reveals a critical pitfall of cascading certifiably robust models: the seemingly beneficial strategy of cascading can actually hurt the robustness of the resulting ensemble. Our code is available at https://github.com/TristaChi/ensembleKW.

* Equal contribution.
¹ Percentage of inputs where the classifier is accurate and certified as locally robust.
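The cascading prediction rule discussed above can be made concrete with a minimal sketch. The interface here (`cascade_predict`, `models`, `certifiers`) is hypothetical and only illustrates the black-box structure: constituent models are queried in a fixed order, and the cascade returns the first prediction whose certifier succeeds, falling back to the last model otherwise.

```python
def cascade_predict(models, certifiers, x, eps):
    """Sketch of a cascading ensemble's prediction rule (hypothetical
    interface). Each model f is a callable f(x) -> label; each certifier
    is a callable certify(f, x, eps) -> bool that claims f is locally
    robust at x within an eps-ball. Both are treated as black boxes."""
    for f, certify in zip(models, certifiers):
        y = f(x)
        if certify(f, x, eps):  # first certified constituent answers
            return y
    return models[-1](x)  # no constituent certified: default prediction
```

Note that which constituent answers can change as x moves within the ε-ball, which is precisely the source of the unsoundness studied in this paper: a certificate from one constituent does not bind the predictions of the cascade as a whole.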

1. INTRODUCTION

Local robustness has emerged as an important requirement of classifier models. It ensures that models are not susceptible to misclassifications caused by small perturbations to correctly classified inputs. A lack of robustness can be exploited by malicious actors (in the form of adversarial examples (Szegedy et al., 2014)), and can also lead to incorrect behavior in the presence of natural noise (Gilmer et al., 2019). However, ensuring local robustness of neural network classifiers has turned out to be a hard challenge. Although neural networks can achieve state-of-the-art classification accuracies on a variety of important tasks, neural classifiers with comparable certified robust accuracies¹ (CRA, Def. 2.2) remain elusive, even when trained in a robustness-aware manner (Madry et al., 2018; Wong & Kolter, 2018; Cohen et al., 2019; Leino et al., 2021).

In light of the limitations of robustness-aware training, ensembling certifiably robust neural classifiers has been shown to be a promising approach for improving certified robust accuracies (Wong et al., 2018; Yang et al., 2022). An ensemble combines the outputs of multiple base classifiers to make a prediction, and is a well-known mechanism for improving classification accuracy when one only has access to weak learners (Dietterich, 2000; Bauer & Kohavi, 1999). Ensembles designed to improve CRA take one of two forms. White-box ensembles (Yang et al., 2022; Zhang et al., 2019; Liu et al., 2020) assume white-box access to the constituent models. They

