HOW DEEP CONVOLUTIONAL NEURAL NETWORKS LOSE SPATIAL INFORMATION WITH TRAINING

Abstract

A central question of machine learning is how deep nets manage to learn tasks in high dimensions. An appealing hypothesis is that they achieve this feat by building a representation of the data in which information irrelevant to the task is lost. For image datasets, this view is supported by the observation that, after (and not before) training, the neural representation becomes less and less sensitive to diffeomorphisms acting on images as the signal propagates through the net. This loss of sensitivity correlates with performance and, surprisingly, with a gain of sensitivity to white noise acquired during training. These facts are unexplained and, as we demonstrate, still hold when white noise is added to the images of the training set. Here, we (i) show empirically for various architectures that stability to image diffeomorphisms is achieved by both spatial and channel pooling, (ii) introduce a model scale-detection task which reproduces our empirical observations on spatial pooling, and (iii) compute analytically how the sensitivity to diffeomorphisms and noise scales with depth due to spatial pooling. The scalings are found to depend on the presence of strides in the net architecture. We find that the increased sensitivity to noise is due to the perturbing noise piling up during pooling, after being rectified by ReLU units.

1. INTRODUCTION

Deep learning algorithms can be successfully trained to solve a large variety of tasks (Amodei et al., 2016; Huval et al., 2015; Mnih et al., 2013; Shi et al., 2016; Silver et al., 2017), often revolving around classifying data in high-dimensional spaces. If there were little structure in the data, the learning procedure would be cursed by the dimension of these spaces: achieving good performance would require an astronomical number of training data (Luxburg & Bousquet, 2004). Consequently, real datasets must have a specific internal structure that can be learned with fewer examples. It has then been hypothesized that the effectiveness of deep learning lies in its ability to build 'good' representations of this internal structure, which are insensitive to aspects of the data not related to the task (Ansuini et al., 2019; Shwartz-Ziv & Tishby, 2017; Recanatesi et al., 2019), thus effectively reducing the dimensionality of the problem. In the context of image classification, Bruna & Mallat (2013); Mallat (2016) proposed that neural networks lose irrelevant information by learning representations that are insensitive to small deformations of the input, also called diffeomorphisms. This idea was tested in modern deep networks by Petrini et al. (2021), who introduced the measures

$$D_f = \frac{\mathbb{E}_{x,\tau}\,\|f(\tau(x)) - f(x)\|^2}{\mathbb{E}_{x_1,x_2}\,\|f(x_1) - f(x_2)\|^2}, \qquad G_f = \frac{\mathbb{E}_{x,\eta}\,\|f(x+\eta) - f(x)\|^2}{\mathbb{E}_{x_1,x_2}\,\|f(x_1) - f(x_2)\|^2}, \qquad R_f = \frac{D_f}{G_f},$$

to probe the sensitivity of a function f (either the output or an internal representation of a trained network) to random diffeomorphisms τ of x (see example in Fig. 1, left), to large white-noise perturbations η of magnitude ∥τ(x) − x∥, and in relative terms, respectively. Here the input images x, x₁ and x₂ are sampled uniformly from the test set. In particular, the test error of trained networks is correlated with D_f when f is the network output.
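These sensitivities are simple Monte Carlo averages. The sketch below (an illustration with random arrays standing in for test images, not the authors' code) estimates a G_f-style ratio for an arbitrary function f, given matched sets of clean and perturbed inputs:

```python
import numpy as np
from itertools import combinations

def sensitivity(f, xs, xs_pert):
    """Ratio of the mean squared change of f under a perturbation to the
    mean squared distance between f at two distinct test points, as in
    the definitions of D_f and G_f above."""
    num = np.mean([np.sum((f(xp) - f(x)) ** 2) for x, xp in zip(xs, xs_pert)])
    den = np.mean([np.sum((f(a) - f(b)) ** 2) for a, b in combinations(xs, 2)])
    return num / den

rng = np.random.default_rng(0)
xs = [rng.standard_normal((8, 8)) for _ in range(10)]              # stand-ins for test images
xs_noise = [x + 0.1 * rng.standard_normal(x.shape) for x in xs]    # white-noise perturbation

# With f the identity, the ratio reduces to mean ||eta||^2 / mean ||x1 - x2||^2;
# in practice f would be a trained network or one of its internal layers.
G = sensitivity(lambda x: x, xs, xs_noise)
```

For D_f one replaces the noisy inputs with diffeomorphism-deformed ones of matched norm, and the relative sensitivity is then R_f = D_f / G_f.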
Less intuitively, the test error is anti-correlated with the sensitivity to white noise G_f. Overall, it is the relative sensitivity R_f that correlates best with the error (Fig. 1, middle). This correlation is learned over training, as it is not present at initialization, and is built up layer by layer (Petrini et al., 2021). These phenomena are not simply due to benchmark data being noiseless, as they persist when input images are corrupted by some small noise (Fig. 1, right).

Operations that grant insensitivity to diffeomorphisms in a deep network have been identified previously (e.g. Goodfellow et al. (2016), section 9.3, sketched in Fig. 2). The first, spatial pooling, integrates local patches within the image, thus losing the exact location of its features. The second, channel pooling, requires the interaction of different channels, which allows the network to become invariant to any local transformation by properly learning filters that are transformed versions of one another. However, it is not clear whether these operations are actually learned by deep networks and how they conspire in building good representations. Here we tackle this question by unveiling empirically the emergence of spatial and channel pooling, and by disentangling their roles. Below is a detailed list of our contributions.

1.1 OUR CONTRIBUTIONS

• We disentangle the role of spatial and channel pooling within deep networks trained on CIFAR10 (Section 2). More specifically, our experiments reveal the significant contribution of spatial pooling in decreasing the sensitivity to diffeomorphisms.



Figure 1: Left: example of a random diffeomorphism τ applied to an image. Center: test error vs relative sensitivity to diffeomorphisms of the predictor for a set of networks trained on CIFAR10, adapted from Petrini et al. (2021). Right: correlation coefficient between test error ϵ and D_f, G_f and R_f when training different architectures on noisy CIFAR10, $\rho(\epsilon, X) = \mathrm{Cov}(\log\epsilon, \log X)/\sqrt{\mathrm{Var}(\log\epsilon)\,\mathrm{Var}(\log X)}$. Increasing noise magnitudes are shown on the x-axis, and $\eta^* = \mathbb{E}_{\tau,x}\,\|\tau(x) - x\|^2$ is the one used for the computation of G_f. Samples of a noisy CIFAR10 datum are shown on top. Notice that D_f and particularly R_f are positively correlated with ϵ, whilst G_f is negatively correlated with ϵ. The corresponding scatter plots are in Fig. 10 (appendix).
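As a minimal illustration of the spatial-pooling mechanism discussed above (a toy numpy sketch with random arrays standing in for feature maps, not the experiments of Section 2), one can check that a single average-pooling layer shrinks the relative sensitivity to a one-pixel translation, the crudest instance of a small deformation:

```python
import numpy as np
from itertools import combinations

def avg_pool(x, k=2):
    """Non-overlapping k x k average pooling (stride k) of a 2D array."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def shift_sensitivity(g, xs):
    """D_f-style ratio using a one-pixel horizontal shift as the deformation:
    mean squared change of g under the shift, normalised by the mean squared
    distance between g at two distinct inputs."""
    num = np.mean([np.sum((g(np.roll(x, 1, axis=1)) - g(x)) ** 2) for x in xs])
    den = np.mean([np.sum((g(a) - g(b)) ** 2) for a, b in combinations(xs, 2)])
    return num / den

rng = np.random.default_rng(1)
xs = [rng.standard_normal((16, 16)) for _ in range(20)]

raw = shift_sensitivity(lambda x: x, xs)   # no pooling
pooled = shift_sensitivity(avg_pool, xs)   # after one 2x2 average-pooling layer
# pooled < raw: pooling discards the exact location of features, so the
# representation moves less (relative to typical distances) when the input
# is slightly translated.
```

The normalisation by pairwise distances matters here: pooling shrinks all distances, but it shrinks the shift-induced displacement more, which is precisely the sense in which it builds stability to deformations.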

