LINKING AVERAGE- AND WORST-CASE PERTURBATION ROBUSTNESS VIA CLASS SELECTIVITY AND DIMENSIONALITY

Abstract

Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity, the variability of a unit's responses across data classes or dimensions, is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair, generalization, we sought to investigate whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity leads to increased vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20). Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e., white-box adversarial) perturbations, suggesting that while decreasing class selectivity is helpful for average-case perturbations, it is harmful for worst-case perturbations. To explain this difference, we studied the dimensionality of the networks' representations: we found that the dimensionality of early-layer representations is inversely proportional to a network's class selectivity, and that adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient was more variable across samples and units in high-selectivity networks than in low-selectivity networks. These results lead us to conclude that units participate more consistently in low-selectivity regimes than in high-selectivity regimes, effectively creating a larger attack surface and hence greater vulnerability to worst-case perturbations.
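Class selectivity as used above can be quantified with the selectivity index of Morcos et al. (2018): the difference between a unit's largest class-conditional mean activation and the mean of its remaining class-conditional activations, normalized by their sum. A minimal NumPy sketch (the function name and epsilon are illustrative choices; activations are assumed non-negative, e.g. post-ReLU):

```python
import numpy as np

def class_selectivity_index(activations, labels, eps=1e-7):
    """Class selectivity index per unit (after Morcos et al., 2018).

    activations: (n_samples, n_units) array of non-negative unit activations.
    labels:      (n_samples,) integer class labels.
    Returns an (n_units,) array of selectivity values in [0, 1].
    """
    classes = np.unique(labels)
    # Mean activation of each unit conditioned on class: (n_classes, n_units)
    class_means = np.stack([activations[labels == c].mean(axis=0)
                            for c in classes])
    u_max = class_means.max(axis=0)  # most-driving class per unit
    # Mean activation over all remaining classes for each unit
    u_rest = (class_means.sum(axis=0) - u_max) / (len(classes) - 1)
    return (u_max - u_rest) / (u_max + u_rest + eps)
```

For a unit that responds only to a single class the index approaches 1; for a unit that responds equally to all classes it approaches 0.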

1. INTRODUCTION

Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network's decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e., variability in a neuron's activations across semantically relevant data features) has been of particular interest to researchers trying to better understand DNNs (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental, to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier's performance on low-quality or naturalistically perturbed inputs, and is thus an "average-case" measure; and adversarial robustness, which measures a classifier's performance on small, additive perturbations that are tailored to the classifier, and is thus a "worst-case" measure.¹
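The average-case measure above is typically reported as accuracy (or error) averaged over a set of corruption types and severity levels, as in benchmarks like Tiny ImageNetC and CIFAR10C. A sketch of such an evaluation loop (the function names and the Gaussian-noise corruption are illustrative placeholders, not the benchmarks' actual corruptions):

```python
import numpy as np

def corruption_accuracy(predict, clean_x, labels, corruptions,
                        severities=(1, 2, 3, 4, 5)):
    """Average-case robustness: mean accuracy per corruption type,
    averaged over severity levels.

    predict:     function mapping a batch of inputs to predicted labels.
    corruptions: dict of name -> function (x, severity) -> corrupted x.
    """
    accs = {}
    for name, corrupt in corruptions.items():
        per_severity = []
        for s in severities:
            preds = predict(corrupt(clean_x, s))
            per_severity.append((preds == labels).mean())
        accs[name] = float(np.mean(per_severity))
    return accs

# Example corruption: additive Gaussian noise whose scale grows with severity.
def gaussian_noise(x, severity, rng=np.random.default_rng(0)):
    return x + rng.normal(scale=0.1 * severity, size=x.shape)
```

Worst-case robustness is evaluated analogously, but with the perturbation chosen per input by an attack rather than drawn from a fixed corruption distribution.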
Research on robustness has predominantly focused on worst-case perturbations, which are affected by weight and activation sparsity (Madry et al., 2018; Balda et al., 2020; Ye et al., 2018; Guo et al., 2018; Dhillon et al., 2018) and representational dimensionality (Langeberg et al., 2019; Sanyal et al., 2020; Nayebi and Ganguli, 2017). Less is known about the mechanisms underlying average-case perturbation robustness and the factors it shares with worst-case robustness. Some techniques for improving worst-case robustness also improve average-case robustness (Hendrycks and Dietterich, 2019; Ford et al., 2019; Yin et al., 2019), so it is possible that sparsity and representational dimensionality also contribute to average-case robustness. Selectivity in individual units can also be thought of as a measure of the sparsity with which semantic information is represented.² Because class selectivity regularization provides a method for controlling selectivity, and has been shown to improve test accuracy on unperturbed data (Leavitt and Morcos, 2020), we sought to investigate whether it could be used to improve perturbation robustness and elucidate the factors underlying it.

In this work we pursue a series of experiments investigating the causal role of selectivity in robustness to worst-case and average-case perturbations in DNNs. To do so, we used a recently developed class selectivity regularizer (Leavitt and Morcos, 2020) to directly modify the amount of class selectivity learned by DNNs, and examined how this affected the DNNs' robustness to worst-case and average-case perturbations. Our findings are as follows:

• Networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are generally less robust to average-case perturbations, as measured in ResNets using the Tiny ImageNetC and CIFAR10C datasets. The corruption robustness imparted by regularizing against class selectivity was consistent across nearly all tested corruptions.

• In contrast to its impact on average-case perturbations, decreasing class selectivity reduces robustness to worst-case perturbations in both tested models, as assessed using gradient-based white-box attacks.

• The variability of the input-unit gradient across samples and units is proportional to a network's overall class selectivity, indicating that high variability in perturbability within and across units may facilitate worst-case perturbation robustness.

• The dimensionality of activation changes caused by perturbation markedly increases in early layers for both perturbation types, but is larger for worst-case perturbations and low-selectivity networks. This implies that representational dimensionality may present a trade-off between worst-case and average-case perturbation robustness.

Our results demonstrate that changing class selectivity, and hence the sparsity of semantic representations, can confer robustness to average-case or worst-case perturbations, but not both simultaneously. They also highlight the roles of input-unit gradient variability and representational dimensionality in mediating this trade-off.
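The class selectivity regularizer referenced above adds a term proportional to the mean selectivity index to the task loss, so that the sign of the regularization coefficient controls whether selectivity is encouraged or discouraged. A sketch of the scalar objective, assuming the selectivity index of Morcos et al. (2018); in practice the term is computed on each layer's post-ReLU activations inside an autodiff framework, and the function name here is illustrative:

```python
import numpy as np

def selectivity_regularized_loss(task_loss, activations, labels,
                                 alpha, eps=1e-7):
    """Task loss plus a class selectivity regularizer
    (after Leavitt and Morcos, 2020).

    Positive alpha rewards selectivity; negative alpha penalizes it,
    yielding the lower-selectivity networks studied in this work.
    """
    classes = np.unique(labels)
    # Class-conditional mean activation per unit: (n_classes, n_units)
    class_means = np.stack([activations[labels == c].mean(axis=0)
                            for c in classes])
    u_max = class_means.max(axis=0)
    u_rest = (class_means.sum(axis=0) - u_max) / (len(classes) - 1)
    mean_si = np.mean((u_max - u_rest) / (u_max + u_rest + eps))
    return task_loss - alpha * mean_si
```

Because the regularizer acts only on activations, it changes how class information is distributed across units without directly constraining the network's predictions.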

2.1. PERTURBATION ROBUSTNESS

The most commonly studied form of robustness in DNNs is robustness to adversarial attacks, in which an input is perturbed in a manner that maximizes the change in the network's output while minimizing the magnitude of the perturbation.

¹ We use the terms "worst-case perturbation" and "average-case perturbation" instead of "adversarial attack" and "corruption", respectively, because this usage is more general and dispenses with the implied categorical distinction of using seemingly unrelated terms. Also note that while Hendrycks and Dietterich (2019) assign specific and distinct meanings to "perturbation" and "corruption", we use the term "perturbation" more generally to refer to any change to an input.

² Class information is semantic. And because class selectivity measures the degree to which class information is represented in individual neurons, it can be considered a form of sparsity. For example, if a network has high test accuracy on a classification task, it is necessarily representing class (semantic) information. But if the mean class selectivity across units is low, then the individual units do not contain much class information, and the class information must be distributed across units; the semantic representation in this case is not sparse but distributed.
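A representative gradient-based white-box attack of the kind discussed above is the fast gradient sign method (Goodfellow et al., 2014), which perturbs each input dimension by a fixed budget epsilon in the direction that increases the loss. A minimal sketch for binary logistic regression (the model and weights are illustrative; the attacks evaluated against full DNNs follow the same principle, with the gradient obtained by backpropagation):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, epsilon):
    """Fast gradient sign method for binary logistic regression.

    Returns x' = x + epsilon * sign(d loss / d x), the worst-case
    perturbation of x under an L-infinity budget of epsilon.
    For logistic regression, d loss / d x = (sigmoid(w.x + b) - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(y = 1)
    grad_x = (p - y) * w                    # cross-entropy gradient w.r.t. x
    return x + epsilon * np.sign(grad_x)
```

Each input dimension moves by exactly epsilon, so the perturbation is small per pixel yet tailored to the classifier, which is what makes it a worst-case rather than average-case measure.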

