LINKING AVERAGE- AND WORST-CASE PERTURBATION ROBUSTNESS VIA CLASS SELECTIVITY AND DIMENSIONALITY

Abstract

Representational sparsity is known to affect robustness to input perturbations in deep neural networks (DNNs), but less is known about how the semantic content of representations affects robustness. Class selectivity, the variability of a unit's responses across data classes or dimensions, is one way of quantifying the sparsity of semantic representations. Given recent evidence that class selectivity may not be necessary for, and in some cases can impair, generalization, we investigated whether it also confers robustness (or vulnerability) to perturbations of input data. We found that class selectivity increases vulnerability to average-case (naturalistic) perturbations in ResNet18, ResNet50, and ResNet20, as measured using Tiny ImageNetC (ResNet18 and ResNet50) and CIFAR10C (ResNet20): networks regularized to have lower levels of class selectivity are more robust to average-case perturbations, while networks with higher class selectivity are more vulnerable. In contrast, we found that class selectivity increases robustness to multiple types of worst-case (i.e. white-box adversarial) perturbations, suggesting that decreasing class selectivity helps robustness to average-case perturbations but harms robustness to worst-case perturbations. To explain this difference, we examined the dimensionality of the networks' representations: the dimensionality of early-layer representations is inversely proportional to a network's class selectivity, and adversarial samples cause a larger increase in early-layer dimensionality than corrupted samples. We also found that the input-unit gradient is more variable across samples and units in high-selectivity networks than in low-selectivity networks. These results lead to the conclusion that units participate more consistently in low-selectivity regimes than in high-selectivity regimes, effectively creating a larger attack surface and hence greater vulnerability to worst-case perturbations.
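
As a concrete illustration of how per-unit class selectivity can be quantified, the sketch below computes a selectivity index in the style of Leavitt and Morcos (2020): the difference between a unit's largest class-conditional mean activation and the mean of its remaining class-conditional means, normalized by their sum. This is an illustrative NumPy sketch rather than this paper's actual implementation; the function name and the epsilon constant are our own choices, and it assumes non-negative (e.g. post-ReLU) activations.

import numpy as np

def class_selectivity_index(activations, labels, eps=1e-7):
    """Per-unit class selectivity index (illustrative sketch).

    Assumes the formulation (mu_max - mu_not_max) / (mu_max + mu_not_max + eps),
    where mu_max is a unit's largest class-conditional mean activation and
    mu_not_max is the mean of its remaining class-conditional means.

    activations: (n_samples, n_units) array of non-negative unit activations.
    labels: (n_samples,) array of integer class labels.
    Returns: (n_units,) array of selectivity indices in [0, 1].
    """
    classes = np.unique(labels)
    # Class-conditional mean activation for every unit: (n_classes, n_units)
    class_means = np.stack([activations[labels == c].mean(axis=0) for c in classes])
    mu_max = class_means.max(axis=0)
    # Mean of the remaining (non-preferred) classes' means, per unit
    mu_not_max = (class_means.sum(axis=0) - mu_max) / (len(classes) - 1)
    return (mu_max - mu_not_max) / (mu_max + mu_not_max + eps)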

1. INTRODUCTION

Methods for understanding deep neural networks (DNNs) often attempt to find individual neurons or small sets of neurons that are representative of a network's decision (Erhan et al., 2009; Zeiler and Fergus, 2014; Karpathy et al., 2016; Amjad et al., 2018; Lillian et al., 2018; Dhamdhere et al., 2019; Olah et al., 2020). Selectivity in individual units (i.e. variability in a neuron's activations across semantically-relevant data features) has been of particular interest to researchers trying to better understand DNNs (Zhou et al., 2015; Olah et al., 2017; Morcos et al., 2018; Zhou et al., 2018; Meyes et al., 2019; Na et al., 2019; Zhou et al., 2019; Rafegas et al., 2019; Bau et al., 2020; Leavitt and Morcos, 2020). However, recent work has shown that selective neurons can be irrelevant, or even detrimental, to network performance, emphasizing the importance of examining distributed representations for understanding DNNs (Morcos et al., 2018; Donnelly and Roegiest, 2019; Dalvi et al., 2019b; Leavitt and Morcos, 2020). In parallel, work on robustness seeks to build models that are robust to perturbed inputs (Szegedy et al., 2013; Carlini and Wagner, 2017a;b; Vasiljevic et al., 2016; Kurakin et al., 2017; Gilmer et al., 2018; Zheng et al., 2016). Hendrycks and Dietterich (2019) distinguish between two types of robustness: corruption robustness, which measures a classifier's performance on low-quality or naturalistically-perturbed inputs (and is thus an "average-case" measure), and adversarial robustness, which measures a classifier's performance on inputs perturbed specifically to maximize its error (and is thus a "worst-case" measure).
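
To make the two robustness regimes concrete, the following PyTorch sketch contrasts how they are typically measured: average-case robustness as plain accuracy on a pre-corrupted test set (e.g. CIFAR10C), and worst-case robustness as accuracy after each input is perturbed in the direction that maximally increases the loss. A single-step FGSM attack is used here purely for brevity; it is not the specific set of attacks evaluated in this work, and the model and data loaders are placeholders.

import torch
import torch.nn.functional as F

def accuracy(model, loader, device="cpu"):
    """Average-case robustness: plain accuracy on (possibly corrupted) inputs."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def fgsm_accuracy(model, loader, epsilon=8 / 255, device="cpu"):
    """Worst-case robustness sketch: accuracy under a single-step FGSM attack.

    Assumes inputs are scaled to [0, 1]; stronger iterative attacks (e.g. PGD)
    follow the same pattern with repeated gradient steps.
    """
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        # Perturb each input in the gradient-sign direction that increases the loss
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# Usage (loaders are placeholders for clean and corrupted test splits):
# corrupt_acc = accuracy(model, cifar10c_loader)   # average-case robustness
# adv_acc = fgsm_accuracy(model, clean_loader)     # worst-case robustness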

