

Abstract

Robustness of convolutional neural networks (CNNs) has gained importance on account of adversarial examples, i.e., inputs with well-designed perturbations added that are imperceptible to humans but can cause the model to predict incorrectly. Recent research suggests that the noise in adversarial examples breaks the textural structure, which eventually leads to wrong predictions. To mitigate the threat of such adversarial attacks, we propose defective convolutional networks, which make predictions relying less on textural information and more on shape information by properly integrating defective convolutional layers into standard CNNs. The defective convolutional layers contain defective neurons whose activations are fixed to a constant. As defective neurons contain no information and differ greatly from the standard neurons in their spatial neighborhoods, textural features cannot be accurately extracted, so the model has to seek other features for classification, such as shape. We present extensive evidence to justify our proposal and demonstrate that defective CNNs defend against black-box attacks better than standard CNNs. In particular, they achieve state-of-the-art performance against transfer-based attacks without any adversarial training.

1. INTRODUCTION

Deep learning (LeCun et al., 1998; 2015), especially the deep convolutional neural network (CNN) (Krizhevsky et al., 2012), has led to state-of-the-art results spanning many machine learning fields (Girshick, 2015; Chen et al., 2018; Luo et al., 2020). Despite this great success in numerous applications, recent studies show that deep CNNs are vulnerable to well-designed input samples called adversarial examples (Szegedy et al., 2013; Biggio et al., 2013). Taking image classification as an example: for almost every commonly used, well-performing CNN, attackers are able to construct a small perturbation of an input image that is almost imperceptible to humans but makes the model give a wrong prediction. The problem is serious because some well-designed adversarial examples transfer among different kinds of CNN architectures (Papernot et al., 2016b). As a result, a machine learning system can be attacked even if the attacker does not have access to the model parameters, which seriously affects its use in practical applications. There is a rapidly growing body of work on how to obtain a robust CNN, mainly based on adversarial training (Szegedy et al., 2013; Goodfellow et al., 2015; Madry et al., 2017; Buckman et al., 2018; Mao et al., 2019). However, these methods need substantial extra computation to obtain adversarial examples at each training step and tend to overfit the attack method used in training (Buckman et al., 2018). In this paper, we tackle the problem from a perspective different from most existing methods. In particular, we explore the possibility of designing new CNN architectures that can be trained with standard optimization methods on standard benchmark datasets and are robust by themselves, without appealing to other techniques. Recent studies (Geirhos et al., 2017; 2018; Baker et al., 2018; Brendel & Bethge, 2019) show that the predictions of standard CNNs mainly depend on the texture of objects.
However, textural information has a high degree of redundancy and may easily be injected with adversarial noise (Yang et al., 2019; Hosseini et al., 2019). Also, Cao et al. (2020) and Das et al. (2020) find that adversarial attack methods may perturb local patches so that they contain textural features of incorrect classes. All this literature suggests that the wrong predictions of CNNs on adversarial examples mainly come from changes in textural information. The small perturbation of an adversarial example changes the textures and eventually affects the features extracted by the CNN. Therefore, a natural way to avoid adversarial examples is to let the CNN make predictions relying less on textures and more on other information, such as shape, which cannot be severely distorted by small perturbations. In practice, a camera might have mechanical failures that cause the output image to have many defective pixels (such pixels are always black in all images). Nonetheless, humans can still recognize objects in an image with defective pixels, since we are able to classify objects even in the absence of local textural information. Motivated by this, we introduce the concept of defectiveness into convolutional neural networks: we call a neuron a defective neuron if its output value is fixed to zero no matter what input signal is received; similarly, a convolutional layer is a defective convolutional layer if it contains defective neurons. Before training, we replace the standard convolutional layers of a standard CNN with their defective version and train the network in the standard way. As the defective neurons of a defective convolutional layer contain no information and are very different from their spatial neighbors, textural information cannot be accurately propagated from the bottom defective layers to the top layers.
Therefore, we destroy local textural information to a certain extent and prompt the neural network to rely more on other information for classification. We call an architecture deployed with defective convolutional layers a defective convolutional network. We find that applying the defective convolutional layers to the bottom layers of the network and introducing varied patterns of defective-neuron arrangement across channels are both critical. In summary, our main contributions are:
• We propose defective CNNs and present four pieces of empirical evidence that, compared to standard CNNs, the defective ones rely less on textures and more on shapes of the inputs when making predictions.
• Experiments show that defective CNNs have superior defense performance compared to standard CNNs against transfer-based attacks, decision-based attacks, and additive Gaussian noise.
• Using the standard training method, defective CNNs achieve state-of-the-art results against two transfer-based black-box attacks while maintaining high accuracy on clean test data.
• With a proper implementation, defective CNNs can save considerable computation and storage costs and thus may lead to a practical solution in the real world.

2. RELATED WORK

Various methods have been proposed to defend against adversarial examples. One line of research derives a meaningful optimization objective and optimizes the model by adversarial training (Szegedy et al., 2013; Goodfellow et al., 2015; Huang et al., 2015; Madry et al., 2017; Buckman et al., 2018; Mao et al., 2019). The high-level idea of these works is that if we can predict the potential attack on the model during optimization, then we can give the attacked sample the correct signal and use it during training. Another line of research adjusts the input image before passing it through the deep neural network (Liao et al., 2017; Song et al., 2017; Samangouei et al., 2018; Sun et al., 2018; Xie et al., 2019; Yuan & He, 2020). The basic intuition is that if we can clean the adversarial perturbation to a certain extent, then such attacks can be defended. Although these methods achieve some success, a major difficulty is that they require a large extra cost to collect adversarial examples and are hard to apply to large-scale datasets. Several studies (Geirhos et al., 2017; 2018; Baker et al., 2018; Brendel & Bethge, 2019) show that the predictions of CNNs come mainly from the texture of objects rather than the shape. Also, Cao et al. (2020) and Das et al. (2020) found that adversarial examples usually perturb a patch of the original image so that it contains textural features of incorrect classes. For example, an adversarial example of a panda image is misclassified as a monkey because a patch of the panda skin is perturbed adversarially so that it alone looks like the face of a monkey (see Figure 11 in (Cao et al., 2020)). All the works above suggest that CNNs learn textural information more than shape and that adversarial attacks might come from textural-level perturbations. This is also related to robust features (Tsipras et al., 2018; Ilyas et al., 2019; Hosseini et al., 2019; Yang et al., 2019), which have attracted more interest recently.
Pixels that encode textural information contain high redundancy and may easily be perturbed toward the distribution of incorrect classes. Shape information, however, is more compact and thus may serve as a more robust feature for prediction.

3.1. DESIGN OF DEFECTIVE CONVOLUTIONAL LAYERS

In this subsection, we introduce our proposed defective convolutional neural networks and discuss the differences between the proposed method and related techniques. First, we briefly introduce the notation. For one convolutional layer, denote x as the input and z as the output of the neurons in the layer. Note that x may be the input image or the output of the previous convolutional layer. The input x is usually an M × N × K tensor, in which M and N are the height and width of a feature map, and K is the number of feature maps, or equivalently, channels. Denote w and b as the parameters (i.e., the weights and biases) of the convolutional kernel. A standard convolutional layer can then be mathematically defined as below.

Standard convolutional layer:
x′ = w ∗ x + b, (1)
z = f(x′), (2)

where f(·) is a non-linear activation function such as ReLU and ∗ is the convolution operation. The convolutional filter receives signals in a patch and extracts local textural information from the patch. As mentioned in the introduction, recent works suggest that the predictions of standard CNNs strongly depend on such textural information, and noise imposed on the texture may lead to wrong predictions. Therefore, we hope to learn a feature extractor that does not rely solely on textural features but also considers other information. To achieve this goal, we introduce the defective convolutional layer, in which some neurons are purposely designed to be corrupted. Define M_defect to be a binary matrix of size M × N × K. Our defective convolutional layer is defined as follows.

Defective convolutional layer:
x′ = w ∗ x + b, (3)
z′ = f(x′), (4)
z = M_defect ∘ z′, (5)

where ∘ denotes the element-wise product. M_defect is a fixed matrix and is not learnable during training or testing. We can see that M_defect plays the role of "masking out" the values of some neurons in the layer. This disturbs the distribution of local textural information and decouples the correlation among neurons.
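As a concrete illustration, Equations (3)-(5) can be sketched in a few lines of NumPy. This is a minimal single-channel sketch with hypothetical names (`conv2d_valid`, `defective_conv_layer`), not the paper's actual implementation; the naive "valid" convolution stands in for a real conv layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(x, w, b):
    """Naive 'valid' 2D convolution of one feature map with one kernel."""
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def defective_conv_layer(x, w, b, mask):
    """Equations (3)-(5): convolution, ReLU, then a FIXED binary mask."""
    z_prime = np.maximum(conv2d_valid(x, w, b), 0.0)  # z' = f(x'), f = ReLU
    return mask * z_prime                             # z = M_defect o z'

# M_defect is sampled once with keep probability p, then frozen for
# both training and testing.
p = 0.1
x = rng.standard_normal((8, 8))
w = rng.standard_normal((3, 3))
mask = (rng.random((6, 6)) < p).astype(float)  # 6x6 output for a 3x3 kernel

z = defective_conv_layer(x, w, 0.0, mask)
```

The masked entries are defective neurons: they output zero regardless of the input signal.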
With the masked output z as input, the feature extractor of the next convolutional layer cannot accurately capture the local textural features of x. As a consequence, textural information can hardly pass through the defective CNN from bottom to top. To produce accurate predictions, the deep neural network has to find relevant signals other than texture, e.g., the shape. The corrupted neurons have no severe impact on the extraction of shape information, since the neighbors of those neurons in the same filter are still capable of passing shape information to the next layer. In this paper, we find that simply setting M_defect by random initialization is already helpful for learning a robust CNN. Before training, we sample each entry of M_defect from a Bernoulli distribution with keep probability p and then fix M_defect during training and testing. More discussion and ablation studies are provided in Section 4. As can be seen from Equations (3)-(5), the implementation of our defective convolutional layer is similar to the dropout operation (Srivastava et al., 2014). To demonstrate the relationship and differences, we mathematically define dropout as below.

Standard convolutional layer + dropout:
M_dropout ∼ Bernoulli(p), (6)
x′ = w ∗ x + b, (7)
z′ = f(x′), (8)
z = M_dropout ∘ z′. (9)

The shape of M_dropout is the same as that of M_defect, and the value of each entry of M_dropout is sampled for each batch using some sampling strategy at each step during training. Generally, the entries of M_dropout are independently and identically sampled in an online fashion from a Bernoulli distribution with keep probability p. There are several significant differences between dropout and the defective convolutional layer. First, the binary matrix M_dropout is sampled online during training and is removed during testing, while the binary matrix M_defect in defective convolutional layers is predefined and kept fixed in both training and testing.
This predefined mask helps defective CNNs save considerable computation and storage costs. Second, the motivations behind the two methods are quite different and lead to differences in where the methods are applied, the values of the keep probability p, and the shape of the masked unit. Dropout tries to reduce overfitting by preventing co-adaptation on training data. When it comes to CNNs, such methods are applied to the top layers, p is set large (e.g., 0.9), and the masked units are chosen to be a whole channel in Tompson et al. (2015) and a connected block in Ghiasi et al. (2018). In contrast, our method tries to prevent the model from extracting textural information of the inputs for making predictions. We apply the method to the bottom layers, use a small p (e.g., 0.1), and mask single neurons. In our experiments, we also show that the proposed method improves the robustness of CNNs against transfer-based attacks and decision-based attacks, while the dropout methods do not.
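The train/test asymmetry discussed above is the key operational difference. A small sketch (hypothetical names, inverted-dropout scaling assumed, as is common practice) contrasts the two forward passes:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4, 16)  # M x N x K activations

# Defective layer: one Bernoulli(p) mask, drawn before training, fixed forever.
p_defect = 0.1
M_defect = (rng.random(shape) < p_defect).astype(float)

def defective_forward(z_prime):
    # The SAME fixed mask is applied at train and test time (Equation (5)).
    return M_defect * z_prime

# Dropout: a fresh mask per batch during training, no mask at test time
# (inverted scaling by the keep probability keeps expectations equal).
p_drop = 0.9
def dropout_forward(z_prime, training):
    if training:
        M = (rng.random(z_prime.shape) < p_drop).astype(float)
        return M * z_prime / p_drop
    return z_prime

z_prime = np.ones(shape)
z_defective = defective_forward(z_prime)
z_dropout_test = dropout_forward(z_prime, training=False)
```

At test time the dropout model is an ordinary CNN, while the defective model still zeroes out the same fixed set of neurons, which is what allows the convolution at the defective positions to be skipped entirely.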

3.2. DEFECTIVE CNNS RELY LESS ON TEXTURE BUT MORE ON SHAPE FOR PREDICTING

In this subsection, we provide extensive analysis to show that defective CNNs, compared to standard CNNs, rely less on textures and more on shapes of the inputs when making predictions. First, we design a particular image manipulation in which the local texture of the object in an image is preserved while the shape is destroyed. Specifically, we divide an image into k × k patches and randomly relocate those patches to form a new image. An example is shown in Figure 1. A model trained on the normal dataset that focuses more on shape cues should achieve lower performance on such images. We manipulate a set of images and test whether a defective CNN and a standard CNN can make correct predictions. The experimental details are as follows. We first construct a defective CNN by applying defective convolutional layers to the bottom layers of a standard ResNet-18, and train the defective CNN along with a standard ResNet-18 on the ImageNet dataset. Then, we sample images from the validation set that are predicted correctly by both CNNs. We manipulate the sampled images with k ∈ {2, 4, 8}, feed these images to the networks, and check their classification accuracy. The results in Tables 1 and 13 show that when the shape information is destroyed but the local textural information is preserved, defective CNNs perform consistently worse than standard CNNs, verifying our intuition. From another perspective, if a model makes predictions relying more on shape information, manipulation of the shape of objects will play a larger role in generating adversarial examples. To verify this, we train defective and standard CNNs on CIFAR-10 and Tiny-ImageNet, and then attack them on the validation set. Furthermore, perturbations generated against standard CNNs and additive Gaussian noise usually do not affect shape information (Szegedy et al., 2013; Ford et al., 2019).
A model that relies more on shape information for prediction is expected to recognize such adversarial examples better. In Section 4, we show that defective CNNs achieve higher defense performance than standard CNNs against these two types of attack.
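The patch-shuffling manipulation described above (texture preserved, shape destroyed) can be sketched as follows; `patch_shuffle` is a hypothetical helper name, not the paper's code.

```python
import numpy as np

def patch_shuffle(img, k, rng):
    """Split an H x W x C image into a k x k grid of patches and randomly
    permute them, preserving local texture but destroying global shape."""
    H, W, C = img.shape
    assert H % k == 0 and W % k == 0
    ph, pw = H // k, W // k
    patches = [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(k) for j in range(k)]
    order = rng.permutation(len(patches))
    out = np.empty_like(img)
    for idx, src in enumerate(order):
        i, j = divmod(idx, k)
        out[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = patches[src]
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))       # stand-in for a CIFAR-sized image
shuffled = patch_shuffle(img, 4, rng)
```

Note that the pixel values themselves are untouched; only their patch-level positions change, so any purely local texture statistic is preserved.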

4. EXPERIMENTS

In real-world tasks, attackers usually cannot access the parameters of the target model and thus need to transfer adversarial examples generated by their own models. This setting is referred to as transfer-based attacks (Liu et al., 2016; Kurakin et al., 2016). Sometimes, attackers can observe the final model decision and mount the more powerful decision-based attacks (Brendel et al., 2017). Both types of black-box attack are available in most real-world scenarios and should be considered. Recently, Ford et al. (2019) bridged adversarial robustness and corruption robustness (Hendrycks & Dietterich, 2018) and pointed out that a successful adversarial defense method should also effectively defend against additive Gaussian noise. Therefore, to meet the requirements of practical systems, we examine the performance of models against transfer-based attacks, decision-based attacks, and additive Gaussian noise. In the following sections, we evaluate the robustness of defective CNNs with different architectures and compare with state-of-the-art defense methods against transfer-based attacks, and then run ablation studies on possible design choices of the defective CNN. Due to space limitations, we present the experiments on decision-based attacks, additive Gaussian noise, gray-box attacks, white-box attacks, and more results on transfer-based attacks in Appendix A. Note that, in this paper, all successful defense rates except those listed in Tables 2 and 3 are calculated only on samples whose corresponding original images are classified correctly by the tested model. This removes the influence of differing clean test accuracy across models and thus helps evaluate the robustness of the models themselves.
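The conditioning of the successful defense rate on clean correctness described above amounts to the following computation (a small sketch with a hypothetical function name):

```python
import numpy as np

def success_defense_rate(clean_pred, adv_pred, labels):
    """Fraction of adversarial examples still classified correctly, counted
    only over samples whose clean version was already classified correctly."""
    clean_pred, adv_pred, labels = map(np.asarray, (clean_pred, adv_pred, labels))
    correct_clean = clean_pred == labels
    if correct_clean.sum() == 0:
        return float("nan")
    return float((adv_pred[correct_clean] == labels[correct_clean]).mean())

# Toy example: clean accuracy is 4/5; of those 4, 3 survive the attack -> 0.75.
labels     = [0, 1, 2, 3, 4]
clean_pred = [0, 1, 2, 3, 0]   # last sample already wrong on clean data
adv_pred   = [0, 1, 9, 3, 4]
rate = success_defense_rate(clean_pred, adv_pred, labels)  # 0.75
```

This keeps the metric comparable across models with different clean accuracies.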

4.1.1. EXPERIMENTAL SETTINGS

We evaluate the defense performance of defective CNNs against transfer-based attacks and compare with state-of-the-art defense methods on CIFAR-10 and Tiny-ImageNet. For CIFAR-10, we follow the setting used in Madry et al. (2017) and use a standard ResNet-18 to generate adversarial examples with FGSM (Goodfellow et al., 2015) and PGD (Kurakin et al., 2016). The two attack methods both have ℓ∞ perturbation scale ε = 8/255, and PGD runs for 7 gradient descent steps with step size 2/255. For Tiny-ImageNet, we follow the setting used in Mao et al. (2019) and use a standard ResNet-50 to generate adversarial examples with PGD with ε = 8/255, 20 steps, and step size 2/255. We compare with two types of defense methods: variants of adversarial training (Madry et al., 2017; Kannan et al., 2018; Mao et al., 2019) and approaches that try to erase the adversarial noise of the inputs (Wang & Yu, 2019; Addepalli et al., 2020; Yuan & He, 2020). To validate the difference between the proposed method and dropout methods, we also compare with two CNN variants, SpatialDropout (Tompson et al., 2015) and DropBlock (Ghiasi et al., 2018). For both methods, we follow the instructions of Ghiasi et al. (2018) and apply dropout to the {3rd, 4th} blocks with keep probability p = 0.9. The block of DropBlock is set to a 3 × 3 square. For our method, we use the corresponding network structure but apply defective convolutional layers to the bottom layers (see illustrations in Appendix C). We use keep probability p = 0.1 and train the model with the standard optimization method. Training details and curves can be found in Appendix D. Second, we test our proposed method on different architectures on the CIFAR-10 dataset.
We apply defective convolutional layers, in a way similar to the experiment above, to five popular network architectures: ResNet-18 (He et al., 2016), ResNet-50, DenseNet-121 (Huang et al., 2017), SENet-18 (Hu et al., 2017b), and VGG-19 (Simonyan & Zisserman, 2014). For each architecture, we replace the standard convolutional layers with the defective version on the bottom layers (see illustrations in Appendix C). We then test the black-box defense performance against transfer-based attacks on 5000 samples from the validation set. Adversarial examples are generated by PGD, which runs for 20 steps with step size 1, and the ℓ∞ perturbation scale is set to 16/255. Results on MNIST can be found in Appendix A.
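For reference, the PGD attack used in these settings follows the standard signed-gradient iteration. The sketch below takes `grad_fn` as a stand-in for back-propagation through the source model (all names are ours); it shows only the generic ℓ∞ projection loop, not any specific framework's implementation.

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=8 / 255, step=2 / 255, steps=7):
    """L_inf PGD: repeated signed-gradient ascent steps, each projected back
    into the eps-ball around x and clipped to the valid image range [0, 1]."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + step * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

# Toy gradient that pushes every pixel up: PGD should saturate at x + eps.
x = np.full((3, 4, 4), 0.5)
x_adv = pgd_attack(x, grad_fn=lambda z: np.ones_like(z))
```

With step size 2/255 and 7 steps, the cumulative movement (14/255) exceeds ε = 8/255, so the projection is what bounds the final perturbation.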

4.1.2. EXPERIMENTAL RESULTS

Tables 2 and 3 show the results on CIFAR-10 and Tiny-ImageNet, respectively. We can see that the proposed method outperforms all the adversarial training variants, which need extra training cost, as well as most of the input-cleaning methods. Although Yuan & He (2020) is competitive with the proposed method on CIFAR-10, it needs to collect adversarial examples and run inner loops, which largely increases the time cost. Also, we can conclude that SpatialDropout and DropBlock do not improve the robustness of standard CNNs. The results show the strengths of our proposed method in both robustness and generalization, even though our model is only trained on clean data. It is also interesting that the CNNs maintain such clean accuracy even when 90% of the neurons in the bottom layers are dropped.

Table 2: Defense performance on CIFAR-10.

Method | FGSM | PGD | Clean Acc
Standard CNN | 55.92% | 15.96% | 95.03%
Standard CNN + SD (Tompson et al., 2015) | 52.11% | 12.98% | 95.44%
Standard CNN + DB (Ghiasi et al., 2018) | 56.27% | 14.69% | 95.38%
BPFC (Addepalli et al., 2020) | 75.52% | 77.07% | 82.30%
Adv. Training (Madry et al., 2017) | 77.10% | 78.10% | 87.14%
TLA (Mao et al., 2019) | … | … | …

Table 3: Defense performance on Tiny-ImageNet.

Method | PGD | Clean Acc
Standard CNN + SD (Tompson et al., 2015) | 8.43% | 61.82%
Standard CNN + DB (Ghiasi et al., 2018) | 9.15% | 61.37%
Adv. Training (Madry et al., 2017) | 27.73% | 44.77%
ALP (Kannan et al., 2018) | 30.31% | 41.53%
TLA (Mao et al., 2019) | 29.98% | 40.89%
Defective CNN | 32.32% | 55.74%

Second, we list the black-box defense results of applying defective convolutional layers to various architectures in Table 4. The results show that defective convolutional layers consistently improve the robustness of various network architectures against transfer-based attacks. We can also see that robustness increases as the keep probability becomes smaller.

4.2. ABLATION STUDIES

There are several design choices for the defective CNN, including the appropriate positions to apply defective convolutional layers, the choice of keep probability, the benefit of breaking symmetry, and the diversity introduced by randomness. In this subsection, we conduct a series of comparative experiments and use black-box defense performance against transfer-based attacks as the evaluation criterion. In our experiments, we found that the performance is not sensitive to the choice of the source model for the attack and the target model for the defense. Here, we only list the performance using DenseNet-121 as the source model and ResNet-18 as the target model on the CIFAR-10 dataset, and leave more experimental results to Appendix A.10. The results are listed in Table 5; adversarial examples are generated by C&W (Carlini & Wagner, 2016) with confidence κ = 40, and the numbers in the table are successful defense rates. Defective Layers on Bottom Layers vs. Top Layers, Keep Probabilities. We apply defective layers with different keep probabilities to the bottom layers and the top layers of the standard CNNs (see illustrations in Appendix C). Comparing the results of models with the same keep probability but different parts masked, we find that applying defective layers to the bottom layers yields significantly higher successful defense rates, while applying them to the top layers does not. This corroborates the phenomena shown in the literature (Zeiler & Fergus, 2014; Mordvintsev et al., 2015), where bottom layers mainly contribute to detecting edges and shape, while the receptive fields of neurons in top layers are too large to respond to location-sensitive information. Also, we find that the defense accuracy monotonically increases as the test accuracy decreases along with the keep probability (see the trend map in Appendix A.9). The appropriate value for the keep probability mainly depends on the relative importance of generalization and robustness.
Another practical option is to ensemble defective CNNs with different keep probabilities. Defective Neuron vs. Defective Channel. As our method independently selects defective neurons on different channels within a layer, it breaks the symmetry of the original CNN structure. To see whether this asymmetric structure helps, we instead mask whole channels using the same keep probability as the defective layer and train the resulting model. This defective-channel method does not break the symmetry while leading to the same decrease in the number of convolutional operations. Table 5 shows that although our defective CNN suffers a small drop in test accuracy due to the low keep probability, it gains greatly in robustness compared with the defective-channel CNN. Defective Masks Shared Among Channels or Not. The randomness in generating masks in different channels and layers allows each convolutional filter to focus on different input patterns. It also naturally introduces various topological structures for local feature extraction without resorting to expensive learned alternatives (Dai et al., 2017; Chang et al., 2018; Zhu et al., 2019). We show the necessity of generating a separate mask per channel via experiments that compare against a method that randomly generates only one mask per layer and uses it in every channel. Table 5 shows that applying the same mask to each channel decreases the test accuracy. This may result from limited expressivity due to the identical masks at every channel of the defective layer.
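The per-channel vs. shared-mask variants compared above differ only in how the binary mask tensor is generated. A small sketch (variable names ours) of the two constructions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K, p = 8, 8, 4, 0.1

# Per-channel masks (our defective layer): every channel draws an
# independent Bernoulli(p) pattern, breaking symmetry across filters.
mask_per_channel = (rng.random((M, N, K)) < p).astype(float)

# Shared mask: one 2D pattern is drawn and broadcast to all K channels,
# so every channel masks exactly the same spatial positions.
mask_2d = (rng.random((M, N)) < p).astype(float)
mask_shared = np.repeat(mask_2d[:, :, None], K, axis=2)
```

Both variants keep the same expected fraction p of neurons, but only the per-channel construction gives each filter its own defect pattern.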

5. CONCLUSION

In this paper, we introduce and experiment with defective CNNs, a modified version of existing CNNs that makes them capture information other than local textures, especially shape. We present four pieces of empirical evidence to justify this and also show that defective CNNs can achieve high robustness against black-box attacks while maintaining high test accuracy. Another insight from our experiments is that the adversarial perturbations generated against defective CNNs can actually change the semantic information of images and may even "fool" humans. We hope that these findings bring more understanding of adversarial examples and the robustness of neural networks.

DEFECTIVE CONVOLUTIONAL NETWORKS APPENDIX

A.1.1 EXPERIMENTAL SETTINGS

In this subsection, we evaluate the defense performance of networks with defective convolutional layers against decision-based attacks. A decision-based attack operates based only on the predictions of the model. It needs less information from the model and has the potential to perform better against adversarial defenses based on gradient masking. The Boundary attack (Brendel et al., 2017) is one effective decision-based attack. It starts from a point that is already adversarial, obtained by applying a large-scale perturbation to the original image, and keeps decreasing the distance between the original image and the adversarial example by random walks. After the iterations, we obtain the final perturbation, which has a relatively small scale. The more robust the model is, the larger the final perturbation will be.

In our experiments, we use the implementation of the Boundary attack in Foolbox (Rauber et al., 2017). It finds the adversarial initialization by simply adding large-scale uniform noise to the input images. We apply our method to ResNet-18 and test the performance on CIFAR-10 with 500 samples from the validation set. The 5-block structure of ResNet-18 is shown in Appendix Figure 2. The blocks are labeled 0, 1, 2, 3, 4, and the 0th block is the first convolutional layer. We apply the defective layer structure with keep probability p = 0.1 to the bottom blocks (the 0th, 1st, and 2nd blocks). For comparison, we implement label smoothing (Szegedy et al., 2016) with smoothing parameter 0.1 on a standard ResNet-18, and SpatialDropout and DropBlock with the same settings as in Section 4.1.1.
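The random-walk idea behind the Boundary attack can be sketched as below. This is a heavily simplified illustration, not Foolbox's implementation: `is_adversarial` stands in for querying the model's decision, and the toy step sizes and decision rule are ours.

```python
import numpy as np

def boundary_attack_sketch(x, is_adversarial, steps=1000, rng=None):
    """Simplified Boundary-attack-style walk: start from a large-noise
    adversarial point, then repeatedly jitter and contract toward the
    original image x, keeping only moves that remain adversarial."""
    rng = rng or np.random.default_rng(0)
    x_adv = rng.random(x.shape)          # uniform-noise initialization
    while not is_adversarial(x_adv):
        x_adv = rng.random(x.shape)
    for _ in range(steps):
        candidate = x_adv + 0.01 * rng.standard_normal(x.shape)  # jitter
        candidate = candidate + 0.01 * (x - candidate)           # step toward x
        candidate = np.clip(candidate, 0.0, 1.0)
        if is_adversarial(candidate):
            x_adv = candidate
    return x_adv

# Toy decision boundary: "adversarial" means mean intensity stays >= 0.4,
# so the walk contracts toward the all-zero original until it hits 0.4.
x = np.zeros((4, 4))
is_adv = lambda z: z.mean() >= 0.4
x_final = boundary_attack_sketch(x, is_adv)
```

The final perturbation `x_final - x` is what the score in the next subsection measures: a more robust model forces the walk to stop further from the original image.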

A.1.2 EXPERIMENTAL RESULTS

We use the median squared ℓ2-distance of the final perturbation across all samples, as proposed in (Brendel et al., 2017), to evaluate performance. The score S(M) is defined as

S(M) = Median_i ( (1/N) ||P_i^M||_2^2 ),

where P_i^M ∈ R^N is the final perturbation that the Boundary attack finds on model M for the i-th image. Before computing P_i^M, the images are normalized into [0, 1]^N.
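The score is straightforward to compute; a minimal sketch (function name ours):

```python
import numpy as np

def boundary_score(perturbations):
    """S(M) = Median_i (1/N) * ||P_i||_2^2 over the final perturbations found
    by the Boundary attack, with images normalized into [0, 1]^N.
    Larger scores mean larger residual perturbations, i.e., a more robust model."""
    norms = [np.sum(P.astype(float) ** 2) / P.size for P in perturbations]
    return float(np.median(norms))

# Three toy perturbations on 2x2 "images": per-sample scores 0.01, 0.04, 0.09.
P1 = np.full((2, 2), 0.1)
P2 = np.full((2, 2), 0.2)
P3 = np.full((2, 2), 0.3)
score = boundary_score([P1, P2, P3])  # median -> 0.04
```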

S(M )

Standard CNN 7.3e-06 Standard CNN + SD (Tompson et al., 2015) 7.2e-06 Standard CNN + DB (Ghiasi et al., 2018) 6.1e-06 Standard CNN + LS (Szegedy et al., 2016) From the results in Table 6 , we point out that spatial dropout and drop block can not enhance the robustness against the boundary attack. Neither does the label smoothing technique. This is consistent with the discovery in Section 4.1.1, and in Papernot et al. (2016a) where they point out that label smoothing is a kind of gradient masking method. Also, the defective CNN achieves higher performance over the standard CNN.

A.2.1 EXPERIMENTAL SETTINGS

In this subsection, we evaluate the defense performance of networks with defective convolutional layers against additive Gaussian noise. Recently, Ford et al. (2019) bridged adversarial robustness and corruption robustness, pointing out that a successful adversarial defense method should also effectively defend against additive Gaussian noise. Moreover, since Gaussian noise usually does not change the shape of objects, our models should have better defense performance. To see whether our structure is more robust in this setting, we feed input images with additive Gaussian noise to both standard and defective CNNs. To obtain noise of a scale similar to the adversarial perturbations, we generate i.i.d. Gaussian random variables x ∼ N(0, σ²), where σ ∈ {1, 2, 4, 8, 12, 16, 20, 24, 28, 32}, clip them to the range [-2σ, 2σ], and then add them to every pixel of the input image. Note that the magnitude range of Gaussian noise used in our experiments covers all 5 severity levels used in Hendrycks & Dietterich (2018). For CIFAR-10, we add Gaussian noise to 5000 samples drawn randomly from the validation set that are classified correctly by all tested models. We place the defective layers with keep probability p = 0.1 on ResNet-18 in the same way as in Section A.1.
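The noise-generation procedure above can be sketched as follows (function name ours; pixel values assumed to be in [0, 255]):

```python
import numpy as np

def add_clipped_gaussian(img, sigma, rng):
    """Add i.i.d. N(0, sigma^2) noise, clipped to [-2*sigma, 2*sigma],
    to every pixel, then clip the result back to the valid pixel range."""
    noise = rng.normal(0.0, sigma, size=img.shape)
    noise = np.clip(noise, -2 * sigma, 2 * sigma)
    return np.clip(img + noise, 0, 255)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32, 3)).astype(float)
noisy = {sigma: add_clipped_gaussian(img, sigma, rng)
         for sigma in (1, 8, 16, 32)}
```

Clipping the noise bounds the per-pixel deviation at 2σ, which keeps the corruption scale comparable to the bounded adversarial perturbations.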

A.2.2 EXPERIMENTAL RESULTS

The experimental results are shown in Figure 4. The standard CNN is still robust to small-scale Gaussian noise, e.g., σ ≤ 8. After that, the performance of the standard CNN drops sharply as σ increases. In contrast, defective CNNs show far better robustness than the standard version. The defective CNN with keep probability 0.1 maintains high accuracy until σ increases to 16 and degrades much more slowly as σ increases further.

Both FGSM (Goodfellow et al., 2015) and PGD (Kurakin et al., 2016) attacks are run on the entire validation set of the CIFAR-10 dataset. Both methods have ℓ∞ perturbation scale 8/255, and PGD runs for 7 gradient descent steps with step size 2. The generated adversarial examples are used to attack the target networks. For the target network, we use the same structure but apply defective convolutional layers to the 0th and 1st blocks with keep probability p = 0.1, and train the model using the standard optimization method. As mentioned in Section 3.1, our proposed method is essentially different from dropout, and thus we also take dropout methods as baselines. More specifically, we test SpatialDropout and DropBlock. For both methods, we follow the instructions of Ghiasi et al. (2018) and apply dropout to the 3rd block with p = 0.9. The block of DropBlock is set to a 3 × 3 square. The results are listed in the corresponding table; the diagonal shows gray-box performance in the setting where the source and target networks share the same structure but have different initializations.

A.7 WHITE-BOX ATTACK

In this subsection, we show the white-box defense performance of defective CNNs. Table 12 shows the results of ResNet-18 on the CIFAR-10 dataset. The performance on other network architectures is similar. Note that the proposed method does not involve any obfuscated gradients (Athalye et al., 2018). We also study the combination of the proposed method and adversarial training, adversarially training a defective CNN under the same setting described in Madry et al. (2017). We want to emphasize that the adversarial examples generated against defective CNNs appear to have semantic shapes and may even fool humans (see Figure 3 and Appendix B). This indicates that small perturbations can actually change the semantic meaning of images for humans. Such samples should probably not be categorized as adversarial examples or used to evaluate white-box robustness. This is also aligned with Ilyas et al. (2019).

A.8 RANDOMLY SHUFFLED IMAGES AND STYLIZED-IMAGENET

In this subsection, we show more results on randomly shuffled images and Stylized-ImageNet (Geirhos et al., 2018). As shown in Section 3.2, shape information in randomly shuffled images is destroyed while textural information is preserved, and Stylized-ImageNet has the opposite situation. If a CNN makes predictions relying less on textural information and more on shape information, it should perform worse on randomly shuffled images but better on Stylized-ImageNet. We construct defective CNNs by applying defective convolutional layers to the bottom layers of standard CNNs. We train all defective CNNs and their plain counterparts on the ImageNet dataset. For each pair of CNNs, we sample images from the validation set that are predicted correctly by both kinds of CNNs. We shuffle the sampled images with k ∈ {2, 4, 8} and pick the corresponding images from Stylized-ImageNet. We check the accuracy of all models on these images. The results are shown in Table 13. We can see that defective CNNs perform consistently worse than standard CNNs on randomly shuffled images and consistently better on Stylized-ImageNet. This justifies our argument that defective CNNs make predictions relying less on textural information and more on shape information.

A.9 DIFFERENT KEEP PROBABILITIES

In this subsection, we show the trade-off between robustness and generalization performance in defective CNNs with different keep probabilities. We use DenseNet-121 (Huang et al., 2017) as the source model to generate adversarial examples from CIFAR-10 with PGD (Kurakin et al., 2016), which runs for 20 steps with step size 1 and perturbation scale 16. The defective convolutional layers are applied to the bottom layers of ResNet-18 (He et al., 2016). Figure 5 shows that, as the keep probability decreases, the defense accuracy monotonically increases while the test accuracy decreases.
We can see the trade-off between robustness and generalization. Therefore, a practical way to use defective CNNs in the real world is to ensemble defective CNNs with different keep probabilities. Also, in our experiments, we found that ensembling different defective CNNs with the same p brings improvements in both accuracy and robustness, while ensembling standard CNNs does not. Nonetheless, our models also perform much better against weak PGD attacks. For the CW attack, we have also tried different confidence parameters κ. However, we find that for large κ, the algorithm struggles to find adversarial examples for some neural networks, such as VGG, because of its logit scale. For smaller κ, the adversarial examples have weak transferability, which means they can be easily defended even by standard CNNs. Therefore, to balance these two factors, we choose κ = 40 (CW 40) for DenseNet-121, ResNet-50, and SENet-18, and κ = 20 (CW 20) for ResNet-18 to compare our models with standard ones. The number of search steps for choosing the parameter c is set to 30. Note that the noise of FGSM and PGD is measured in the ℓ∞ norm and the noise of CW in the ℓ2 norm. All adversarial examples used for evaluation fool the original network. Tables 14, 15, 16, 17 and 18 list our experimental results. DC means we replace defective neurons with defective channels in the corresponding blocks to achieve the same keep probability. SM means we use the same defective mask on all channels in a layer. ×n means we multiply the number of channels in the defective blocks by n. EN means we ensemble five models with different defective masks of the same keep probability.
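The ensembling strategy suggested above can be sketched as simple probability averaging over the member models' softmax outputs. This is a sketch under our own assumptions, not the paper's exact procedure: `prob_fns` stands in for the per-model predicted-probability functions.

```python
import numpy as np

def ensemble_predict(prob_fns, images):
    """Average the predicted class probabilities of several models and
    take the argmax per example.

    prob_fns : list of callables, each mapping a batch of images to an
               (N, num_classes) array of probabilities (e.g. several
               defective CNNs with different keep probabilities).
    """
    probs = np.mean([f(images) for f in prob_fns], axis=0)
    return probs.argmax(axis=1)
```

Averaging probabilities (rather than hard votes) lets a confident member outvote several uncertain ones, which is one plausible way an ensemble of defective CNNs with different keep probabilities could recover some of the accuracy lost at low p.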

B ADVERSARIAL EXAMPLES GENERATED BY DEFECTIVE CNNS

In this subsection, we show more adversarial examples generated by defective CNNs. Figure 6 shows some adversarial examples generated on the CIFAR-10 dataset along with the corresponding original images. These examples are generated from CIFAR-10 against a defective ResNet-18 with keep probability 0.2 on the 0th, 1st, 2nd blocks, a defective ResNet-18 with keep probability 0.1 on the 1st, 2nd blocks, and a standard ResNet-18. We use the attack method MIFGSM (Dong et al., 2017) with perturbation scales α = 16 and α = 32. We also show some adversarial examples generated from Tiny-ImageNet along with the corresponding original images in Figure 7. These examples are generated from Tiny-ImageNet against a defective ResNet-18 with keep probability 0.1 on the 1st, 2nd blocks and a standard ResNet-18. The attack method is MIFGSM with scales 64 and 32, step size 1, and step numbers 40 and 80, respectively. The adversarial examples generated by defective CNNs exhibit more semantic shapes of their fooled classes, such as the mouth of the frog in Figure 6. This also corroborates the point made in Tsipras et al. (2018) that more robust models are more aligned with human perception. To further verify that the adversarial examples generated by defective CNNs align better with human perception than those generated by standard CNNs, we conduct a user study. We show users a pair of adversarial examples generated by defective CNNs and standard CNNs, respectively, with the corresponding labels attached. The user is asked which one of the pair is better aligned with the predicted label. More specifically, we generate two sets of adversarial examples on CIFAR-10 and Tiny-ImageNet by defective CNNs and standard CNNs, respectively. For each user, we randomly sample 50 pairs from the two sets and ask him/her to select. A total of 13 people are involved in our study.
The results show that every user selected more images generated by defective CNNs than images generated by standard CNNs; on average, a user selected 14 more defective-CNN images than standard-CNN ones. This supports our arguments.

Based on residual networks, Zagoruyko & Komodakis (2016) proposed a wide version of residual networks with many more channels. In our experiments, we adopt the network with a width factor of 4 and apply defective layers to the 0th and 1st blocks. Figure 13 shows the whole structure of WideResNet-32.

E ATTACK APPROACHES

In this subsection, we describe the attack approaches used in our experiments. We first give an overview of how to attack a neural network in mathematical notation. Let x be the input to the neural network and f_θ be the function which represents the neural network with parameters θ. The output label of the network on the input can be computed as c = argmax_i f_θ(x)_i. To perform an adversarial attack, we add a small perturbation δ_x to the original image and get an adversarial image x_adv = x + δ_x. The new input x_adv should look visually similar to the original x. Here we use the commonly used ℓ∞-norm metric to measure similarity, i.e., we require that ||δ_x||∞ ≤ ε. The attack is considered successful if the predicted label of the perturbed image, c_adv = argmax_i f_θ(x_adv)_i, differs from c. Generally speaking, there are two types of attack methods: Targeted Attack, which aims to change the output label of an image to a specific (and different) one, and Untargeted Attack, which only aims to change the output label without restricting which label the network should output. In this paper, we mainly use the following four gradient-based attack approaches. J denotes the loss function of the neural network and y denotes the ground-truth label of x. • Fast Gradient Sign Method (FGSM). FGSM (Goodfellow et al., 2015) is a one-step untargeted method which generates the adversarial example x_adv by adding the sign of the gradients, multiplied by a step size ε, to the original benign image x. Note that FGSM controls the ℓ∞-norm between the adversarial example and the original one by the parameter ε: x_adv = x + ε · sign(∇_x J(x, y)). • Basic Iterative Method (PGD). PGD (Kurakin et al., 2016) is a multi-step attack method which applies FGSM multiple times. To keep the adversarial example "close" to the original image, the image is projected back onto the ℓ∞-ball centered at the original image after every step.
The radius of the ℓ∞-ball is called the perturbation scale and is denoted by α:

x^0_adv = x,  x^{k+1}_adv = Clip_{x,α}( x^k_adv + ε · sign(∇_x J(x^k_adv, y)) ).

• Momentum Iterative Fast Gradient Sign Method (MIFGSM). MIFGSM (Dong et al., 2017) is a recently proposed multi-step attack method. It is similar to PGD, but it computes the update direction using a momentum term instead of the raw gradients. The radius of the ℓ∞-ball is again called the perturbation scale and is denoted by α:

x^0_adv = x,  g^0 = 0,
g^{k+1} = μ · g^k + ∇_x J(x^k_adv, y) / ||∇_x J(x^k_adv, y)||_1,
x^{k+1}_adv = Clip_{x,α}( x^k_adv + ε · sign(g^{k+1}) ).

• CW Attack. Carlini & Wagner (2016) show that constructing an adversarial example can be formulated as solving the following optimization problem:

x_adv = argmin_{x'} c · g(x') + ||x' − x||_2^2,

where c · g(x') is the loss function that evaluates the quality of x' as an adversarial example and the term ||x' − x||_2^2 controls the scale of the perturbation. More specifically, in the untargeted attack setting, the loss function g(x) can be defined as below, where the parameter κ is called the confidence:

g(x) = max{ max_{i≠y} (f(x)_i) − f(x)_y, −κ }.
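The updates above can be sketched in a few lines of code. This is a toy illustration under our own assumptions, not the paper's implementation: the linear softmax model (where the cross-entropy input gradient has the closed form W^T(p − onehot(y))) is ours, and `grad_fn` is a placeholder for ∇_x J of a real network.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm_linear(x, y, W, b, eps):
    """One FGSM step against a toy linear softmax classifier f(x) = Wx + b.
    For cross-entropy loss the input gradient is W^T (p - onehot(y))."""
    p = softmax(W @ x + b)
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

def pgd_attack(x, y, grad_fn, step_size, alpha, steps):
    """Iterated FGSM: take sign-gradient steps of size step_size and clip
    back into the l_inf ball of radius alpha around x after every step
    (the Clip_{x,alpha} projection above)."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + step_size * np.sign(grad_fn(x_adv, y))
        x_adv = np.clip(x_adv, x - alpha, x + alpha)
    return x_adv

def mifgsm_attack(x, y, grad_fn, step_size, alpha, mu, steps):
    """MIFGSM: like PGD, but the step direction is an l1-normalized
    gradient accumulated with momentum mu."""
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv, y)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + step_size * np.sign(g)
        x_adv = np.clip(x_adv, x - alpha, x + alpha)
    return x_adv
```

Each iterate stays within the ℓ∞-ball of radius α by construction, which is why the perturbation scale, not the step size, governs the final distortion.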



Footnotes: In this paper, "bottom layer" means a layer close to the input and "top layer" means a layer close to the output prediction. Batch normalization is commonly applied to x before computing z; we simply omit this here. Tiny-ImageNet: https://tiny-imagenet.herokuapp.com/



Figure 1: An example image that is randomly shuffled after being divided into 2 × 2, 4 × 4 and 8 × 8 patches respectively.
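The patch-shuffling manipulation illustrated in Figure 1 can be sketched as follows; this is a minimal sketch of the idea (destroy global shape, preserve local texture), with the row-major patch layout as an assumption.

```python
import numpy as np

def shuffle_patches(image, k, seed=None):
    """Split an HxWxC image into a k x k grid of patches and permute
    the patches uniformly at random, destroying shape information
    while preserving local textural statistics."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    assert h % k == 0 and w % k == 0, "image sides must be divisible by k"
    ph, pw = h // k, w // k
    # Collect the k*k patches in row-major order.
    patches = [image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
               for i in range(k) for j in range(k)]
    order = rng.permutation(len(patches))
    # Reassemble rows of shuffled patches, then stack the rows.
    rows = [np.concatenate([patches[order[i * k + j]] for j in range(k)], axis=1)
            for i in range(k)]
    return np.concatenate(rows, axis=0)
```

Calling `shuffle_patches(img, k)` for k ∈ {2, 4, 8} produces the three manipulations shown in Figure 1.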

Figure 2: The leftmost is an image in the ImageNet, the right three are the corresponding images in the Stylized-ImageNet.

Figure 3: First row: the adversarial examples and the labels predicted by defective CNNs. Second row: the original images and the ground-truth labels. Third row: the adversarial examples and the labels predicted by standard CNNs. The attack method is MIFGSM (Dong et al., 2017) and the ℓ∞ perturbation scales are in {16/255, 32/255}. More details can be found in Appendix B.

Figure 3 shows some examples. We can see that adversarial examples against the defective CNNs change the shape of the objects and may even fool humans as well. Compared with the adversarial examples in Figure 9 of Qin et al. (2020), our adversarial examples exhibit more salient characteristics of the adversarial classes. Also, we conduct a user study in Appendix B to show that the adversarial examples generated by defective CNNs, compared to the standard ones, are more perceptually like the adversarial classes. This phenomenon not only supports our intuition, but is also consistent with the findings in Tsipras et al. (2018); Qin et al. (2020) that the representations learned by robust models tend to align better with human perception.

Figure 4: Defense performance against additive Gaussian noise. p-Bottom means applying defective convolutional layers with keep probability p to the bottom layers of a standard ResNet-18.

Figure 5: Relationship between success defense rates against adversarial examples generated by PGD and test accuracy with respect to different keep probabilities. Each red star represents a specific keep probability with its value written near the star.

Figure 6: CIFAR-10 dataset. First row: the adversarial examples generated by defective CNNs and the predicted labels. Second row: original images. Third row: the adversarial examples generated by the standard CNN and the predicted labels.

Figure 7: Tiny-ImageNet dataset. First row: the adversarial examples generated by defective CNNs and the predicted labels. Second row: original images. Third row: the adversarial examples generated by the standard CNN and the predicted labels.

Figure 11: The architecture of SENet-18

Figure 12: The architecture of VGG-19

Figure 13: The architecture of WideResNet-32

Left three columns are the accuracy of classifying randomly shuffled images. The rightmost column is the accuracy of training on ImageNet and testing on Stylized-ImageNet. The phenomena are similar for different architectures and can be found in Appendix A.8.

Table 13 shows that defective CNNs achieve consistently higher transfer accuracy than standard CNNs, verifying our argument.

Defense performance on CIFAR-10.



Under review as a conference paper at ICLR 2021

Architecture FGSM 16 PGD 16 PGD 32 CW 40 Test Accuracy

Ablation experiments of defective CNNs. p-Bottom and p-Top mean applying defective layers with keep probability p to bottom layers and top layers, respectively. p-Bottom DC means making whole channels defective with keep probability p. p-Bottom SM means using the same defective mask in every channel with keep probability p. FGSM 16, PGD 16 and PGD 32 denote the attack method FGSM with perturbation scale ℓ∞ = 16/255 and PGD with perturbation scales ℓ∞ = 16/255 and 32/255, respectively. CW 40 denotes the CW attack method with confidence κ = 40.







Black-box defense performance against transfer-based attacks from ensemble models on the CIFAR-10 dataset. Numbers in the middle are success defense rates. Networks in the first row indicate the source models, each an ensemble of the other four models excluding the network itself.

A.5 TRANSFER-BASED ATTACK ON MNIST

In this subsection, we evaluate the defense performance of networks with defective convolutional layers against transfer-based attacks on the MNIST dataset. We apply defective convolutional layers to five popular network architectures, ResNet-18, ResNet-50, DenseNet-121, SENet-18 and VGG-19, and test the black-box defense performance against transfer-based attacks on MNIST. For each architecture, we replace the standard convolutional layers with defective versions in the bottom layers. Illustrations of defective layers applied to these network architectures can be found in Appendix C. We test the black-box defense performance against transfer-based attacks on 5000 samples from the validation set. Adversarial examples are generated by PGD, which runs for 40 steps with step size 0.01 × 255 and perturbation scale 0.3 × 255. The results can be found in Table 9. These results show that defective convolutional layers consistently improve the black-box defense performance of various network architectures against transfer-based attacks on the MNIST dataset.

We generate adversarial examples on one trained defective CNN and test them on a network with the same keep probability but a different sampling of defective neurons. In both of these two settings, the adversary knows some information about the structure of the network but does not know its specific parameters.

Defense performances against two kinds of gray-box attacks for defective CNNs. Num-Bottom entries in the left column represent networks with the same structure as the corresponding source networks but with different initialization. 0.5-Bottom DIF and 0.3-Bottom DIF in the left column represent networks with the same keep probabilities as the corresponding source networks but with different samplings of defective neurons. From the results listed in Table 10, we find that defective CNNs have similar performance on adversarial examples generated by our two kinds of gray-box attacks. This phenomenon indicates that defective CNNs with the same keep probability capture similar information that is insensitive to the selection of defective neurons. Also, compared with the gray-box performance of standard CNNs (see Table 11), defective CNNs show stronger defense ability.

Defense performances against gray-box attacks for standard CNNs. Numbers mean the success defense rates. Networks in the first row are the source models for generating adversarial examples by PGD, which runs for 20 steps with step size 1 and perturbation scale ∞ = 16/255.

The adversarially trained defective CNN reaches a 51.6% success defense rate against the default PGD attack (ℓ∞ = 8/255, 7 steps) used in training, which outperforms the standard CNN (50.0%).

Defense performances against white-box attacks. Numbers in the middle are success defense rates. FGSM 1, FGSM 2, FGSM 4 refer to FGSM with perturbation scales 1, 2, 4, respectively. PGD 2, PGD 4, PGD 8 refer to PGD with perturbation scales 2, 4, 8 and step numbers 4, 6, 10, respectively. The step size of all PGD methods is set to 1.

The left three columns are the accuracy of classifying randomly shuffled test images. The rightmost column is the accuracy of training on ImageNet and testing on Stylized-ImageNet. 0.1-Bottom means applying defective convolutional layers with keep probability 0.1 to the bottom layers of the network whose name lies just above.

A.10 EXPERIMENTAL DETAILS FOR SECTION 4.3

In this subsection, we show more experimental results on defective CNNs using different adversarial examples, different attack methods and different mask settings on ResNet-18. The networks used to generate adversarial examples include DenseNet-121, ResNet-18, ResNet-50 and SENet-18. More specifically, we choose 5000 samples to generate adversarial examples via FGSM and PGD, and 1000 samples for the CW attack. All samples are drawn from the validation set of the CIFAR-10 dataset and can be correctly classified by the model used to generate the adversarial examples. For FGSM, we try step sizes ε ∈ {8, 16, 32}, namely FGSM 8, FGSM 16, FGSM 32, to generate adversarial examples. For PGD, we have tried more extensive settings. Let (ε, T, α) be the PGD setting with step size ε, number of steps T and perturbation scale α. From the experimental results, we observe the following phenomena. First, the larger the perturbation scale is, the stronger the adversarial examples are. Second, for a fixed perturbation scale, the smaller the step size is, the more successful the attack is, as it searches for adversarial examples more carefully around the original image. Based on these observations, we only show strong PGD attack results in the appendix, namely the settings (1, 20, 16) (PGD 16), (2, 10, 16) (PGD 2,16) and (1, 40, 32) (PGD 32).

Architecture FGSM 8 FGSM 16 FGSM 32 PGD 16 PGD 2,16 PGD 32 CW 40 Acc

Extended experimental results of Section 4.3. Adversarial examples are generated against DenseNet-121. Numbers in the middle are success defense rates. The model trained on CIFAR-10 achieves 95.62% accuracy on the test set. p-Bottom, p-Top, p-Bottom DC, p-Bottom SM, p-Bottom ×n and p-Bottom EN mean applying defective layers with keep probability p to bottom layers, applying defective layers with keep probability p to top layers, making whole channels defective with keep probability p, using the same defective mask in every channel with keep probability p, increasing the channel number to n times at bottom layers, and ensembling five models with different defective masks of the same keep probability p, respectively.

FGSM 16 FGSM 32 PGD 16 PGD 2,16 PGD 32 CW 20 Acc

Extended experimental results of Section 4.3. Numbers in the middle are success defense rates. Adversarial examples are generated against ResNet-18. The model trained on CIFAR-10 achieves 95.27% accuracy on the test set. p-Bottom, p-Top, p-Bottom DC, p-Bottom SM, p-Bottom ×n and p-Bottom EN mean applying defective layers with keep probability p to bottom layers, applying defective layers with keep probability p to top layers, making whole channels defective with keep probability p, using the same defective mask in every channel with keep probability p, increasing the channel number to n times at bottom layers, and ensembling five models with different defective masks of the same keep probability p, respectively.

FGSM 8 FGSM 16 FGSM 32 PGD 16 PGD 2,16 PGD 32 CW 40 Acc

Extended experimental results of Section 4.3. Adversarial examples are generated against ResNet-50. Numbers in the middle are success defense rates. The model trained on CIFAR-10 achieves 95.69% accuracy on the test set. p-Bottom, p-Top, p-Bottom DC, p-Bottom SM, p-Bottom ×n and p-Bottom EN mean applying defective layers with keep probability p to bottom layers, applying defective layers with keep probability p to top layers, making whole channels defective with keep probability p, using the same defective mask in every channel with keep probability p, increasing the channel number to n times at bottom layers, and ensembling five models with different defective masks of the same keep probability p, respectively.

FGSM 8 FGSM 16 FGSM 32 PGD 16 PGD 2,16 PGD 32 CW 40 Acc

Extended experimental results of Section 4.3. Numbers in the middle are success defense rates. Adversarial examples are generated against SENet-18. The model trained on CIFAR-10 achieves 95.15% accuracy on the test set. p-Bottom, p-Top, p-Bottom DC, p-Bottom SM, p-Bottom ×n and p-Bottom EN mean applying defective layers with keep probability p to bottom layers, applying defective layers with keep probability p to top layers, making whole channels defective with keep probability p, using the same defective mask in every channel with keep probability p, increasing the channel number to n times at bottom layers, and ensembling five models with different defective masks of the same keep probability p, respectively.

C ARCHITECTURE ILLUSTRATIONS

In this subsection, we briefly introduce the network architectures used in our experiments. Generally, we apply defective convolutional layers to the bottom layers of the networks, and we have tried six different architectures, namely ResNet-18, ResNet-50, DenseNet-121, SENet-18, VGG-19 and WideResNet-32. We next illustrate these architectures and show how we apply defective convolutional layers to them. In our experiments, applying defective convolutional layers to a block means randomly selecting defective neurons in every layer of the block.

C.1 RESNET-18

ResNet-18 (He et al., 2016) contains 5 blocks: the 0th block is one single 3 × 3 convolutional layer, and each of the rest contains four 3 × 3 convolutional layers. Figure 8 shows the whole structure of ResNet-18. In our experiments, we apply defective convolutional layers to the 0th, 1st, 2nd blocks, which are the bottom layers.

C.2 RESNET-50

Similar to ResNet-18, ResNet-50 (He et al., 2016) contains 5 blocks, and each block contains several 1 × 1 and 3 × 3 convolutional layers (i.e., Bottlenecks). In our experiments, we apply defective convolutional layers to the 3 × 3 convolutional layers in the first three "bottom" blocks. The defective layers in the 1st block are marked by the red arrows in Figure 9.
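Based on the paper's description of defective layers, a minimal sketch of applying a fixed defective mask to a convolutional activation map might look like the following. The choice of zero as the constant activation and the per-(channel, h, w) mask granularity are our assumptions.

```python
import numpy as np

def make_defective_mask(shape, keep_prob, seed=None):
    """Sample a fixed binary mask over an activation map of the given
    shape (channels, height, width). Zeros mark defective neurons whose
    activation is held at a constant (here zero) forever."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) < keep_prob).astype(np.float64)

def defective_layer(conv_out, mask):
    """Apply the frozen mask to a conv layer's activation map. Unlike
    dropout, the mask is sampled once at initialization and reused at
    both training and test time, with no 1/keep_prob rescaling."""
    return conv_out * mask
```

The gray-box experiments in Appendix A.6 correspond to sampling two masks with the same `keep_prob` but different seeds, training a network under each, and transferring adversarial examples between them.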

