HOW DEEP CONVOLUTIONAL NEURAL NETWORKS LOSE SPATIAL INFORMATION WITH TRAINING

Abstract

A central question of machine learning is how deep nets manage to learn tasks in high dimensions. An appealing hypothesis is that they achieve this feat by building a representation of the data in which information irrelevant to the task is lost. For image datasets, this view is supported by the observation that after (and not before) training, the neural representation becomes less and less sensitive to diffeomorphisms acting on images as the signal propagates through the net. This loss of sensitivity correlates with performance and, surprisingly, with a gain of sensitivity to white noise acquired during training. These facts are unexplained, and, as we demonstrate, still hold when white noise is added to the images of the training set. Here, we (i) show empirically for various architectures that stability to image diffeomorphisms is achieved by both spatial and channel pooling, (ii) introduce a model scale-detection task which reproduces our empirical observations on spatial pooling, and (iii) compute analytically how the sensitivity to diffeomorphisms and noise scales with depth due to spatial pooling. The scalings are found to depend on the presence of strides in the net architecture. We find that the increased sensitivity to noise is due to the perturbing noise piling up during pooling, after being rectified by ReLU units.

1. INTRODUCTION

Deep learning algorithms can be successfully trained to solve a large variety of tasks (Amodei et al., 2016; Huval et al., 2015; Mnih et al., 2013; Shi et al., 2016; Silver et al., 2017), often revolving around classifying data in high-dimensional spaces. If there were little structure in the data, the learning procedure would be cursed by the dimension of these spaces: achieving good performance would require an astronomical number of training data (Luxburg & Bousquet, 2004). Consequently, real datasets must have a specific internal structure that can be learned with fewer examples. It has thus been hypothesized that the effectiveness of deep learning lies in its ability to build 'good' representations of this internal structure, which are insensitive to aspects of the data not related to the task (Ansuini et al., 2019; Shwartz-Ziv & Tishby, 2017; Recanatesi et al., 2019), thus effectively reducing the dimensionality of the problem. In the context of image classification, Bruna & Mallat (2013); Mallat (2016) proposed that neural networks lose irrelevant information by learning representations that are insensitive to small deformations of the input, also called diffeomorphisms. This idea was tested in modern deep networks by Petrini et al. (2021), who introduced the following measures
$$
D_f = \frac{\mathbb{E}_{x,\tau}\,\|f(\tau(x)) - f(x)\|^2}{\mathbb{E}_{x_1,x_2}\,\|f(x_1) - f(x_2)\|^2}, \qquad
G_f = \frac{\mathbb{E}_{x,\eta}\,\|f(x+\eta) - f(x)\|^2}{\mathbb{E}_{x_1,x_2}\,\|f(x_1) - f(x_2)\|^2}, \qquad
R_f = \frac{D_f}{G_f},
$$
to probe the sensitivity of a function $f$ (either the output or an internal representation of a trained network) to random diffeomorphisms $\tau$ of $x$ (see example in Fig. 1, left), to large white-noise perturbations $\eta$ of magnitude $\|\tau(x) - x\|$, and in relative terms, respectively. Here the input images $x$, $x_1$ and $x_2$ are sampled uniformly from the test set. In particular, the test error of trained networks is correlated with $D_f$ when $f$ is the network output.
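These three measures can be estimated by simple Monte-Carlo averages. The sketch below is a minimal numpy illustration, not the authors' implementation: `perturb` is a stand-in for a random diffeomorphism $\tau$ (e.g., a small pixel shift), and `noise_scale` is a free parameter, whereas the paper matches the noise magnitude to $\|\tau(x) - x\|$.

```python
import numpy as np

def sensitivities(f, xs, perturb, noise_scale, rng, n_pairs=1000):
    """Monte-Carlo estimates of the diffeomorphism sensitivity D_f,
    the noise sensitivity G_f, and the relative sensitivity R_f = D_f / G_f.

    f           : maps one image array to a representation (vector/array)
    xs          : sample images, shape (n, ...), drawn from the test set
    perturb     : stand-in for a random diffeomorphism, called as perturb(x, rng)
    noise_scale : magnitude of the white-noise perturbation eta (simplification)
    """
    n = len(xs)
    # Numerators: mean squared change of the representation under each perturbation.
    d_num = np.mean([np.sum((f(perturb(x, rng)) - f(x)) ** 2) for x in xs])
    g_num = np.mean([np.sum((f(x + noise_scale * rng.standard_normal(x.shape))
                             - f(x)) ** 2) for x in xs])
    # Denominator: mean squared distance between representations of random image pairs.
    i, j = rng.integers(0, n, n_pairs), rng.integers(0, n, n_pairs)
    denom = np.mean([np.sum((f(xs[a]) - f(xs[b])) ** 2) for a, b in zip(i, j)])
    D, G = d_num / denom, g_num / denom
    return D, G, D / G
```

With `f` the output of a trained network and a proper sampler of max-entropy diffeomorphisms, this reduces to the definitions above; the normalization by random-pair distances makes the measures comparable across layers of different width.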
Less intuitively, the test error is anti-correlated with the sensitivity to white noise $G_f$. Overall, it is the relative sensitivity $R_f$ that correlates best with the error (Fig. 1, middle). This correlation is learned over training (it is not present at initialization) and is built up layer by layer (Petrini et al., 2021). These phenomena are not simply due

