FANTASTIC FOUR: DIFFERENTIABLE BOUNDS ON SINGULAR VALUES OF CONVOLUTION LAYERS

Abstract

In deep neural networks, the spectral norm of the Jacobian of a layer bounds the factor by which the norm of a signal changes during forward and backward propagation. Spectral norm regularization has been shown to improve the generalization, robustness and optimization of deep learning methods. Existing methods to compute the spectral norm of convolution layers either rely on heuristics that are efficient in computation but lack guarantees, or are theoretically sound but computationally expensive. In this work, we obtain the best of both worlds by deriving four provable upper bounds on the spectral norm of a standard 2D multi-channel convolution layer. These bounds are differentiable and can be computed efficiently during training with negligible overhead. One of these bounds is in fact the popular heuristic method of Miyato et al. (2018) (multiplied by a constant factor depending on the filter size). Depending on the convolution filter, any one of the four bounds can be the tightest. Thus, we propose to use the minimum of these four bounds as a tight, differentiable and efficient upper bound on the spectral norm of convolution layers. We show that our spectral bound is an effective regularizer and can be used to bound either the Lipschitz constant or the curvature values (eigenvalues of the Hessian) of neural networks. Through experiments on MNIST and CIFAR-10, we demonstrate the effectiveness of our spectral bound in improving generalization and provable robustness of deep networks.

1. INTRODUCTION

Bounding the singular values of different layers of a neural network is a way to control the complexity of the model and has been used in different problems including robustness, generalization, optimization and generative modeling. In particular, the spectral norm (the maximum singular value) of a layer bounds the factor by which the norm of a signal increases or decreases during both forward and backward propagation within that layer. If all singular values are close to one, then the gradients neither explode nor vanish (Hochreiter, 1991; Hochreiter et al., 2001; Klambauer et al., 2017; Xiao et al., 2018). Spectral norm regularizations/bounds have been used to improve generalization (Bartlett et al., 2017; Long & Sedghi, 2020), to train deep generative models (Arjovsky et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2018; Miyato et al., 2018; Hoogeboom et al., 2020) and to robustify models against adversarial attacks (Singla & Feizi, 2020; Szegedy et al., 2014; Peck et al., 2017; Zhang et al., 2018; Anil et al., 2018; Hein & Andriushchenko, 2017; Cisse et al., 2017). These applications have motivated multiple works that regularize neural networks by penalizing the spectral norm of the network layers (Drucker & Le Cun, 1992; Yoshida & Miyato, 2017; Miyato et al., 2018; 2017; Sedghi et al., 2019; Singla & Feizi, 2020). For a fully connected layer with weight matrix W and bias b, the Lipschitz constant is given by the spectral norm of the weight matrix, i.e., ||W||_2, which can be computed efficiently using the power iteration method (Golub & Van Loan, 1996). In particular, if the matrix W is of size p × q, the computational complexity of power iteration (assuming convergence in a constant number of steps) is O(pq). Now consider a convolution layer with a filter of size c_out × c_in × h × w, where c_out, c_in, h and w denote the number of output channels, the number of input channels, and the height and width of the filter respectively, and a square input sample of size c_in × n × n, where n is its height and width.
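The power iteration procedure for a dense weight matrix can be sketched as follows (a minimal NumPy illustration; the function name is ours):

```python
import numpy as np

def power_iteration(W, n_iters=100, eps=1e-12):
    """Estimate the spectral norm ||W||_2 of a p x q matrix W.
    Each iteration costs O(pq), versus O(pq * min(p, q)) for a full SVD."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        u = W @ v
        u /= np.linalg.norm(u) + eps
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
    # Rayleigh-quotient-style estimate of the largest singular value.
    return u @ (W @ v)
```

In practice convergence is geometric in the ratio of the two largest singular values, so a handful of iterations typically suffices.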
A naive representation of the Jacobian of this layer results in a matrix of size n^2 c_out × n^2 c_in. For a typical convolution layer with a filter of size 64 × 3 × 7 × 7 and an ImageNet-sized input of 3 × 224 × 224 (Krizhevsky et al., 2012), the corresponding Jacobian matrix has a very large size: 802816 × 150528. This makes explicit computation of the Jacobian infeasible. Ryu et al. (2019) provide a way to compute the spectral norm of convolution layers using convolution and transposed convolution operations within power iteration, thereby avoiding this explicit computation. This leads to an improved running time, especially when the number of input/output channels is small (Table 1). However, in addition to the running time, the approach of Ryu et al. (2019) (and other existing approaches described later) poses a further difficulty regarding the computation of the gradient of the spectral norm (often used as a regularizer during training). The gradient of the largest singular value with respect to the Jacobian can be naively computed by taking the outer product of the corresponding singular vectors. However, due to the special structure of the convolution operation, the Jacobian is a sparse matrix with repeated elements (see Appendix Section D for details). The naive computation of the gradient assigns non-zero values to elements that should remain zero throughout training, and different values to elements that should always be identical. These issues make it difficult to compute the gradient of the spectral norm with respect to the convolution filter weights using the technique of Ryu et al. (2019). Recently, Sedghi et al. (2019) provided a principled approach for exactly computing the singular values of convolution layers. They construct n^2 matrices, each of size c_out × c_in, by taking the Fourier transform of the convolution filter (details in Appendix Section B).
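The matrix-free power iteration of Ryu et al. (2019) can be sketched as follows for a single-channel, stride-1 layer (a NumPy illustration under our own function names; the full method applies the same idea with multi-channel convolution and transposed-convolution primitives):

```python
import numpy as np

def conv_fwd(x, k):
    """'Valid' cross-correlation of an n x n input with an h x h kernel."""
    n, h = x.shape[0], k.shape[0]
    m = n - h + 1
    y = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            y[i, j] = np.sum(x[i:i + h, j:j + h] * k)
    return y

def conv_adj(y, k):
    """Adjoint of conv_fwd (a transposed convolution with the same kernel)."""
    m, h = y.shape[0], k.shape[0]
    x = np.zeros((m + h - 1, m + h - 1))
    for i in range(m):
        for j in range(m):
            x[i:i + h, j:j + h] += y[i, j] * k
    return x

def conv_spectral_norm(k, n, n_iters=500, eps=1e-12):
    """Power iteration on J^T J using only conv / transposed-conv products,
    never forming the Jacobian J explicitly."""
    rng = np.random.default_rng(0)
    x = rng.standard_normal((n, n))
    x /= np.linalg.norm(x)
    for _ in range(n_iters):
        x = conv_adj(conv_fwd(x, k), k)    # x <- J^T J x
        x /= np.linalg.norm(x) + eps
    return np.linalg.norm(conv_fwd(x, k))  # ||J x|| ~ sigma_max
```

Each iteration touches only the input-sized buffer, so memory stays O(n^2) rather than O(n^4).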
The set of singular values of the Jacobian equals the union of the singular values of these n^2 matrices. However, this method can have high computational complexity since it requires the SVD of n^2 matrices. Although it can be adapted to compute the spectral norms of the n^2 matrices using power iteration (in parallel with a GPU implementation) instead of full SVD, the intrinsic computational complexity (discussed in Table 2) can make this approach difficult to use for very deep networks and large input sizes, especially when computational resources are limited. Moreover, computing the gradient of the spectral norm with this method is not straightforward since each of these n^2 matrices contains complex numbers. Thus, Sedghi et al. (2019) suggest clipping the singular values whenever they exceed a certain threshold to bound the spectral norm of the layer. To reduce the training overhead, they clip the singular values only once every 100 iterations. The resulting method reduces the training overhead but is still costly for large input sizes and very deep networks. We report the running time of this method in Table 1 and its training time for one epoch (using a 1-GPU implementation) in Table 4c. Because of the aforementioned issues, efficient methods to control the spectral norm of convolution layers have resorted to heuristics (Yoshida & Miyato, 2017; Miyato et al., 2018; Gouk et al., 2018). Typically, these methods reshape the convolution filter of dimensions c_out × c_in × h × w into a matrix of dimensions c_out × hwc_in, and use the spectral norm of this matrix as an estimate of the spectral norm of the convolution layer. To regularize during training, they use the outer product of the corresponding singular vectors as the gradient of the largest singular value with respect to the reshaped matrix.
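The Fourier-based construction of Sedghi et al. (2019) can be sketched as follows; note that, as in their analysis, it is exact for convolution with circular (wrap-around) padding, and the function name is ours:

```python
import numpy as np

def conv_singular_values(filt, n):
    """All singular values of a circular multi-channel convolution with a
    filter of shape (c_out, c_in, h, w) applied to a c_in x n x n input,
    following the construction of Sedghi et al. (2019)."""
    c_out, c_in, h, w = filt.shape
    # Zero-pad each 2D filter slice to n x n and take its 2D FFT.
    padded = np.zeros((c_out, c_in, n, n))
    padded[:, :, :h, :w] = filt
    transforms = np.fft.fft2(padded)          # FFT over the last two axes
    # For each of the n^2 frequencies, a c_out x c_in coefficient matrix;
    # the layer's singular values are the union over these matrices.
    mats = transforms.transpose(2, 3, 0, 1)   # shape (n, n, c_out, c_in)
    return np.linalg.svd(mats, compute_uv=False).ravel()
```

The batched SVD over n^2 small matrices is the step that becomes costly for large n, which motivates replacing the full SVD with parallel power iterations as discussed above.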
Since the weights do not change significantly during each training step, these methods use only one iteration of the power method per step to update the singular values and vectors (starting from the singular vectors computed in the previous step). This results in negligible overhead during training. However, due to the lack of theoretical justification (which we resolve in this work), they are not guaranteed to work for all shapes and weights of the convolution filter. Previous studies have observed underestimation of the spectral norm by these heuristics (Jiang et al., 2019). On one hand, there are computationally efficient but heuristic ways of computing and bounding the spectral norm of convolution layers (Miyato et al., 2017; 2018). On the other hand, the exact computation of the spectral norm of convolution layers proposed by Sedghi et al. (2019) and Ryu et al. (2019) can be expensive for commonly used architectures, especially with large inputs such as ImageNet samples. Moreover, the difficulty of computing the gradient of the spectral norm with respect to the Jacobian under these methods makes their use as a regularizer during training challenging.
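The reshape-based heuristic with one power-iteration step per training step can be sketched as follows (a minimal NumPy illustration in the style of Miyato et al. (2018); the function name is ours, and the returned value estimates the reshaped matrix's spectral norm, not a certified bound on the layer's):

```python
import numpy as np

def heuristic_spectral_estimate(filt, u=None, n_iters=1, eps=1e-12):
    """Reshape the (c_out, c_in, h, w) filter to a c_out x (c_in*h*w)
    matrix W and run power iteration on W. Passing back the returned u at
    the next training step lets a single iteration track the singular
    vectors as the weights drift."""
    W = filt.reshape(filt.shape[0], -1)
    if u is None:
        u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ (W @ v)
    # For regularization, d(sigma)/dW is approximated by the outer
    # product u v^T of the estimated singular vectors.
    return sigma, u
```

With the vectors warm-started from the previous step, one iteration per step keeps the overhead negligible, which is precisely why this heuristic is popular despite its lack of guarantees.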

