FANTASTIC FOUR: DIFFERENTIABLE BOUNDS ON SINGULAR VALUES OF CONVOLUTION LAYERS

Abstract

In deep neural networks, the spectral norm of the Jacobian of a layer bounds the factor by which the norm of a signal changes during forward/backward propagation. Spectral norm regularizations have been shown to improve generalization, robustness and optimization of deep learning methods. Existing methods to compute the spectral norm of convolution layers either rely on heuristics that are efficient in computation but lack guarantees, or are theoretically sound but computationally expensive. In this work, we obtain the best of both worlds by deriving four provable upper bounds on the spectral norm of a standard 2D multi-channel convolution layer. These bounds are differentiable and can be computed efficiently during training with negligible overhead. One of these bounds is in fact the popular heuristic method of Miyato et al. (2018) (multiplied by a constant factor depending on filter sizes). Depending on the convolution filter, any one of these four bounds can be the tightest. Thus, we propose to use the minimum of these four bounds as a tight, differentiable and efficient upper bound on the spectral norm of convolution layers. We show that our spectral bound is an effective regularizer and can be used to bound either the Lipschitz constant or curvature values (eigenvalues of the Hessian) of neural networks. Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.

1. INTRODUCTION

Bounding the singular values of the layers of a neural network is a way to control the complexity of the model and has been used in a variety of problems including robustness, generalization, optimization and generative modeling. In particular, the spectral norm (the maximum singular value) of a layer bounds the factor by which the norm of a signal increases or decreases during both forward and backward propagation through that layer. If all singular values are close to one, then the gradients neither explode nor vanish (Hochreiter, 1991; Hochreiter et al., 2001; Klambauer et al., 2017; Xiao et al., 2018). Spectral norm regularizations/bounds have been used to improve generalization (Bartlett et al., 2017; Long & Sedghi, 2020), to train deep generative models (Arjovsky et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2018; Miyato et al., 2018; Hoogeboom et al., 2020) and to robustify models against adversarial attacks (Singla & Feizi, 2020; Szegedy et al., 2014; Peck et al., 2017; Zhang et al., 2018; Anil et al., 2018; Hein & Andriushchenko, 2017; Cisse et al., 2017). These applications have motivated multiple works that regularize neural networks by penalizing the spectral norm of the network layers (Drucker & Le Cun, 1992; Yoshida & Miyato, 2017; Miyato et al., 2018; 2017; Sedghi et al., 2019; Singla & Feizi, 2020).

For a fully connected layer with weight matrix W and bias b, the Lipschitz constant is given by the spectral norm of the weight matrix, i.e., ‖W‖₂, which can be computed efficiently using the power iteration method (Golub & Van Loan, 1996). In particular, if the matrix W is of size p × q, the computational complexity of power iteration (assuming convergence in a constant number of steps) is O(pq).

Convolution layers (Lecun et al., 1998) are one of the key components of modern neural networks, particularly in computer vision (Krizhevsky et al., 2012). Consider a convolution filter L of size

