PROPER MEASURE FOR ADVERSARIAL ROBUSTNESS

Abstract

This paper analyzes the problems of adversarial accuracy and adversarial training. We argue that standard adversarial accuracy fails to properly measure the robustness of classifiers: by definition, it trades off against standard accuracy even when generalization is neglected. To address these problems, we introduce a new robustness measure for classifiers, called genuine adversarial accuracy. It measures the adversarial robustness of classifiers without trading off accuracy on clean data against accuracy on adversarially perturbed samples. In addition, it does not favor a model with invariance-based adversarial examples, i.e., samples whose predicted classes remain unchanged even when their perceptual classes change. We prove that a single nearest neighbor (1-NN) classifier is the most robust classifier under genuine adversarial accuracy for given data and a norm-based distance metric, provided the class of each data point is unique. Based on this result, we suggest that the use of poor distance metrics may be one factor behind the tradeoff between test accuracy and l_p norm-based test adversarial robustness.

1. INTRODUCTION

Even though deep learning models have shown promising performance on image classification tasks (Krizhevsky et al., 2012), most deep learning classifiers are vulnerable to adversarial attacks. By applying a carefully crafted but imperceptible perturbation to input images, so-called adversarial examples can be constructed that cause the classifier to misclassify the perturbed inputs (Szegedy et al., 2013). These vulnerabilities have been shown to be exploitable even when printed adversarial images were read through a camera (Kurakin et al., 2016). Adversarial examples crafted for a specific classifier can transfer to other models (Goodfellow et al., 2014). This transferability (Papernot et al., 2017) enables attackers to exploit vulnerabilities even with limited access to the target classifier.

Problem setting. In a nonempty clean input set X ⊂ R^d, let every sample x belong to exactly one of the classes in Y; its class is denoted c_x. A classifier f assigns a class label from Y to each sample x ∈ R^d. Assume f is parameterized by θ, and let L(θ, x, y) be the cross-entropy loss of the classifier given the input x and the label y ∈ Y. Note that this exclusive class assumption is introduced to simplify the analysis; without it, the definition of adversarial examples (Biggio et al., 2013) may not match our intuition, as explained in Section 1.1.
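The problem setting above can be made concrete with a minimal sketch: a toy linear softmax classifier f parameterized by θ = (W, b), together with the cross-entropy loss L(θ, x, y). This is an illustrative instantiation only, not part of the paper; the names `predict` and `cross_entropy_loss` and the toy dimensions are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_classes = 4, 3              # input dimension d and |Y| (toy values)
W = rng.normal(size=(num_classes, d))  # theta = (W, b)
b = np.zeros(num_classes)

def softmax(z):
    z = z - z.max()                # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def predict(x):
    """f: assigns a class label from Y = {0, ..., num_classes - 1}."""
    return int(np.argmax(W @ x + b))

def cross_entropy_loss(x, y):
    """L(theta, x, y) = -log p_theta(y | x)."""
    p = softmax(W @ x + b)
    return -np.log(p[y])

x = rng.normal(size=d)             # a sample in R^d
y = 1                              # its (assumed unique) class c_x
print(predict(x), cross_entropy_loss(x, y))
```

Any differentiable classifier parameterized by θ fits this template; the linear model is chosen only to keep the sketch self-contained.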

1.1. ADVERSARIAL EXAMPLES

Definition 1 (Adversarial Example). Given a clean sample x ∈ X and a maximum perturbation norm (threshold) ε, a perturbed sample x′ is an adversarial example if ‖x′ − x‖ ≤ ε and f(x′) ≠ c_x (Biggio et al., 2013).

When the exclusive class assumption in the problem setting is violated, different oracle classifiers may assign different classes to the same clean samples. (Oracle classifiers are classifiers that are robust against adversarial examples (Biggio et al., 2013) for an appropriately large ε; human classification is usually considered an oracle classifier.) For example, while many people assign class 7 to the top right sample shown in Figure 1, some may assign class 1 or 9 because of the ambiguity of that example. If we label data with the most popularly assigned classes, according

