DEEP NETWORKS FROM THE PRINCIPLE OF RATE REDUCTION

Anonymous authors
Paper under double-blind review

Abstract

This work attempts to interpret modern deep (convolutional) networks from the principles of rate reduction and (shift) invariant classification. We show that the basic iterative gradient ascent scheme for maximizing the rate reduction of learned features naturally leads to a deep network, one iteration per layer. The architectures, operators (linear or nonlinear), and parameters of the network are all explicitly constructed layer-by-layer in a forward propagation fashion. All components of this "white box" network have precise optimization, statistical, and geometric interpretation. Our preliminary experiments indicate that such a network can already learn a good discriminative deep representation without any back propagation training. Moreover, all linear operators of the so-derived network naturally become multi-channel convolutions when we enforce classification to be rigorously shift-invariant. The derivation also indicates that such a convolutional network is significantly more efficient to learn and construct in the spectral domain.

1. INTRODUCTION AND MOTIVATION

In recent years, various deep (convolutional) network architectures such as AlexNet (Krizhevsky et al., 2012), VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), DenseNet (Huang et al., 2017), Recurrent CNN, LSTM (Hochreiter & Schmidhuber, 1997), Capsule Networks (Hinton et al., 2011), etc., have demonstrated very good performance on classification tasks for real-world data such as speech and images. Nevertheless, almost all such networks have been developed through years of empirical trial and error, in both their architectures/operators and the ways they are effectively trained. Some recent practice even takes this to the extreme, searching for effective network structures and training strategies through extensive random search, such as Neural Architecture Search (Zoph & Le, 2017; Baker et al., 2017), AutoML (Hutter et al., 2019), and Learning to Learn (Andrychowicz et al., 2016).

Despite tremendous empirical advances, there is still a lack of rigorous theoretical justification of the need for "deep" network architectures and a lack of fundamental understanding of the associated operators (e.g., multi-channel convolution and nonlinear activation) in each layer. As a result, deep networks are often designed and trained heuristically and then used as a "black box." There has been a severe lack of guiding principles for each stage of this process: For a given task, how wide or deep should the network be? What are the roles of, and relationships among, the multiple (convolution) channels? Which parts of the network need to be learned and trained, and which can be determined in advance? How can we evaluate the optimality of the resulting network? As a consequence, beyond empirical evaluation, it is usually impossible to offer rigorous guarantees on properties of a trained network, such as invariance to transformations (Azulay & Weiss, 2018; Engstrom et al., 2017) or robustness against overfitting to noisy or even arbitrary labels (Zhang et al., 2017).

In this paper, we do not attempt to address all of these questions, but we do offer a plausible interpretation of deep (convolutional) neural networks by deriving a class of deep networks from first principles. We contend that all key features and structures of modern deep (convolutional) neural networks can be naturally derived from optimizing a principled objective, namely the rate reduction recently proposed by Yu et al. (2020), which seeks a compact discriminative (invariant) representation of the data. More specifically, the basic iterative gradient ascent scheme for optimizing this objective naturally takes the form of a deep neural network, one layer per iteration. This principled approach brings a couple of nice surprises. First, the architectures, operators, and parameters of the network can be constructed explicitly layer-by-layer in a forward propagation fashion, and all inherit precise optimization, statistical, and geometric interpretations. As a result, the so-constructed "white box" deep network already gives a good discriminative representation (and achieves good classification performance) without any back propagation training. Second, in the case of seeking a representation rigorously invariant to shift or translation, the network naturally becomes a multi-channel convolutional network. Moreover, the derivation indicates that such a convolutional network is computationally more efficient to learn and construct in the spectral domain.
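To fix ideas before the formal development, we briefly recall the rate reduction objective of Yu et al. (2020) and the ascent scheme referred to above. Notation follows that work and is restated here only for convenience: $\boldsymbol{Z} = [\boldsymbol{z}_1, \dots, \boldsymbol{z}_m] \in \mathbb{R}^{d \times m}$ collects the learned features, $\boldsymbol{\Pi} = \{\boldsymbol{\Pi}^j\}_{j=1}^{k}$ are diagonal membership matrices assigning the $m$ samples to $k$ classes, and $\epsilon > 0$ is a prescribed precision:
\begin{align}
\Delta R(\boldsymbol{Z}, \boldsymbol{\Pi}, \epsilon)
\;=\; \underbrace{\frac{1}{2}\log\det\!\Big(\boldsymbol{I} + \frac{d}{m\epsilon^{2}}\,\boldsymbol{Z}\boldsymbol{Z}^{\top}\Big)}_{R(\boldsymbol{Z},\,\epsilon):\ \text{expand all features}}
\;-\; \underbrace{\sum_{j=1}^{k}\frac{\operatorname{tr}(\boldsymbol{\Pi}^{j})}{2m}\log\det\!\Big(\boldsymbol{I} + \frac{d}{\operatorname{tr}(\boldsymbol{\Pi}^{j})\,\epsilon^{2}}\,\boldsymbol{Z}\boldsymbol{\Pi}^{j}\boldsymbol{Z}^{\top}\Big)}_{R_{c}(\boldsymbol{Z},\,\epsilon \,|\, \boldsymbol{\Pi}):\ \text{compress each class}}.
\end{align}
The basic projected gradient ascent scheme for this objective takes one step per layer,
\begin{align}
\boldsymbol{Z}_{\ell+1} \;\propto\; \boldsymbol{Z}_{\ell} + \eta\,\frac{\partial \Delta R}{\partial \boldsymbol{Z}}\Big|_{\boldsymbol{Z}_{\ell}}, \qquad \ell = 1, \dots, L,
\end{align}
with each iterate normalized back to the unit sphere. Unrolling these $L$ iterations is precisely the deep network whose architectures, operators, and parameters we construct explicitly, in a forward fashion, in the remainder of the paper.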

