THE EFFICACY OF L1 REGULARIZATION IN NEURAL NETWORKS

Abstract

A crucial problem in neural networks is to select the most appropriate number of hidden neurons and obtain tight statistical risk bounds. In this work, we present a new perspective on the bias-variance tradeoff in neural networks. As an alternative to selecting the number of neurons, we theoretically show that L1 regularization can control the generalization error and sparsify the input dimension. In particular, with an appropriate L1 regularization on the output layer, the network can produce a statistical risk that is near minimax optimal. Moreover, an appropriate L1 regularization on the input layer leads to a risk bound that does not involve the input data dimension. Our analysis is based on a new amalgamation of dimension-based and norm-based complexity analysis to bound the generalization error. A consequent observation from our results is that an excessively large number of neurons does not necessarily inflate the generalization error under a suitable regularization.

1. INTRODUCTION

Neural networks have been successfully applied to modeling nonlinear regression functions in various application domains. A critical way to evaluate a predictive learning model is to measure its statistical risk bound. For example, the L1 or L2 risks of typical parametric models such as linear regression are of order (d/n)^{1/2} for small d (Seber & Lee, 2012), where d and n denote the input dimension and the number of observations, respectively. Obtaining the risk bound for a nonparametric regression model such as a neural network is highly nontrivial. It involves an approximation error (or bias) term as well as a generalization error (or variance) term. The standard analysis of generalization error bounds may not be sufficient to describe the overall predictive performance of a model class unless the data are assumed to be generated from it. For the model class of two-layer feedforward networks and a rather general data-generating process, Barron (1993; 1994) proved an approximation error bound of O(r^{-1/2}), where r denotes the number of neurons. The author further developed a statistical risk bound of O((d/n)^{1/4}), which is, to the best of our knowledge, the tightest statistical risk bound for the class of two-layer neural networks (for d < n). This risk bound is based on an optimal bias-variance tradeoff involving a deliberate choice of r. Note that this rate of convergence is much slower than the classical parametric rate. We will tackle the same problem from a different perspective and obtain a much tighter risk bound.

A practical challenge closely related to statistical risks is to select the most appropriate neural network architecture for a particular data domain (Ding et al., 2018). For two-layer neural networks, this is equivalent to selecting the number of hidden neurons r. While a small r tends to underfit, researchers have observed that the network does not overfit even for moderately large r.
Nevertheless, recent research has also shown that an overly large r (e.g., when r > n) does cause overfitting with high probability (Zhang et al., 2016). It can be shown under some non-degeneracy conditions that a two-layer neural network with more than n hidden neurons can perfectly fit n arbitrary data points, even in the presence of noise, which inevitably leads to overfitting. A theoretical choice of r suggested by the asymptotic analysis in (Barron, 1994) is of order (n/d)^{1/2}, and a practical choice of r is often obtained from cross-validation with an appropriate splitting ratio (Ding et al., 2018).

An alternative perspective that we advocate is to learn from a single neural network with sufficiently many neurons and an appropriate L1 regularization on the neuron coefficients, instead of performing a selection from multiple candidate neural models. A potential benefit of this approach is easier hardware implementation and computation, since we do not need to implement multiple models separately. Perhaps more importantly, this perspective on training enables much tighter risk bounds, as we will demonstrate.

In this work, we focus on the model class of two-layer feedforward neural networks. Our main contributions are summarized below. First, we prove that L1 regularization on the coefficients of the output layer can produce a risk bound of O((d/n)^{1/2}) (up to a logarithmic factor) under the L1 training loss, which approaches the minimax optimal rate. Such a rate has not previously been established under the L2 training loss. The result indicates a potential benefit of using L1 regularization for training a neural network, instead of selecting the number of neurons. Additionally, a key ingredient of our result is a unique amalgamation of dimension-based and norm-based risk analysis, which may be interesting in its own right.
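As a concrete illustration of this training perspective, the following minimal sketch fits a two-layer ReLU network with the L1 training loss plus an L1 penalty on the output-layer coefficients, using plain subgradient descent. This is a toy example under stated assumptions, not the paper's algorithm: the input-layer weights are frozen at random values (a random-features simplification) to keep the code short, and all hyperparameters (r = 64, lam, lr) are illustrative.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): a two-layer ReLU network with
# many neurons, trained under the L1 loss with an L1 penalty on the
# output-layer coefficients a. The input layer W is frozen at random
# values for brevity; all hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)
n, d, r = 200, 5, 64                      # samples, input dim, hidden neurons
X = rng.normal(size=(n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)   # y_i = f*(x_i) + noise

W = rng.normal(size=(d, r)) / np.sqrt(d)  # frozen input-layer weights
H = np.maximum(X @ W, 0.0)                # hidden-layer activations, shape (n, r)

a = np.zeros(r)                           # output-layer coefficients
lam, lr = 0.01, 0.05                      # L1 penalty weight, step size

for _ in range(500):
    resid = H @ a - y
    # subgradient of (1/n) * sum_i |resid_i| + lam * ||a||_1
    g = H.T @ np.sign(resid) / n + lam * np.sign(a)
    a -= lr * g

train_l1 = np.mean(np.abs(H @ a - y))     # training L1 loss after fitting
```

Even with r = 64 neurons for only n = 200 samples, the penalty lam * ||a||_1 keeps the output-layer coefficients small, which is the mechanism, in spirit, behind a generalization error that does not deteriorate as r grows.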
The technique leads to an interesting observation: an excessively large r can reduce the approximation error while not increasing the generalization error under L1 regularization. This implies that an explicit regularization can eliminate overfitting even when the specified number of neurons is enormous. Moreover, we prove that L1 regularization on the input layer can induce sparsity by producing a risk bound that does not involve d, where d may be much larger than the true number of significant variables.

Related work on neural network analysis. Despite the practical success of neural networks, a systematic understanding of their theoretical limits remains an ongoing challenge and has motivated research from various perspectives. Cybenko (1989) showed that any continuous function can be approximated arbitrarily well by a two-layer perceptron with sigmoid activation functions. Barron (1993; 1994) established approximation error bounds for fitting arbitrary smooth functions with two-layer neural networks, together with the corresponding statistical risk bounds. A dimension-free Rademacher complexity for deep ReLU neural networks was recently developed (Golowich et al., 2017; Barron & Klusowski, 2019). Based on a contraction lemma, a series of norm-based complexities and their corresponding generalization errors were developed (Neyshabur et al., 2015, and the references therein). Another perspective is to assume that the data are generated by a neural network and to convert its parameter estimation into a tensor decomposition problem through the score function of the known or estimated input distribution (Anandkumar et al., 2014; Janzamin et al., 2015; Ge et al., 2017; Mondelli & Montanari, 2018). Also, tight error bounds have recently been established by assuming that the data are generated by neural networks of parsimonious structures.
In this direction, Schmidt-Hieber (2017) proved that specific deep neural networks with few nonzero network parameters can achieve minimax rates of convergence. Bauer & Kohler (2019) developed an error bound that is free of the input dimension, by assuming a generalized hierarchical interaction model.

Related work on L1 regularization. The use of L1 regularization has been widely studied in linear regression problems (Hastie et al., 2009, Chapter 3), and its use for training neural networks has recently been advocated in deep learning practice. A prominent use of L1 regularization is to empirically sparsify weight coefficients and thus compress a network that requires intensive memory usage (Cheng et al., 2017). The extension of L1 regularization to group-L1 regularization (Yuan & Lin, 2006) has also been used extensively in learning various neural networks (Han et al., 2015; Zhao et al., 2015; Wen et al., 2016; Scardapane et al., 2017). Despite this practice, the efficacy of L1 regularization in neural networks deserves more theoretical study. In the context of two-layer neural networks, we will show that L1 regularization in the output and input layers plays two different roles: the former reduces the generalization error caused by excessive neurons, while the latter sparsifies input signals in the presence of substantial redundancy. Unlike previous theoretical work, we consider the L1 loss, which ranks among the most popular loss functions in, e.g., learning from ordinal data (Pedregosa et al., 2017) or imaging data (Zhao et al., 2016), and for which the statistical risk has not been studied previously. In practice, the use of the L1 loss for training has been implemented in prevalent computational frameworks such as Tensorflow (Google, 2016), Pytorch (Ketkar, 2017), and Keras (Gulli & Pal, 2017).
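The sparsifying role of an input-layer L1 penalty can likewise be sketched with a toy example. To keep the sketch short, it applies the standard proximal (soft-thresholding) step for an L1 penalty to a linear model in which only 2 of 20 inputs are relevant; in a network, the same proximal step would act on the input-layer weight matrix instead. All constants here are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def soft_threshold(w, t):
    # proximal operator of t * ||w||_1: entries with |w_j| <= t become exactly 0
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

rng = np.random.default_rng(1)
n, d = 100, 20
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[0], w_true[1] = 3.0, -2.0          # only 2 of the 20 inputs matter
y = X @ w_true + 0.1 * rng.normal(size=n)

L = np.linalg.norm(X, 2) ** 2 / n         # Lipschitz constant of the gradient
lam = 0.1                                 # L1 penalty weight (illustrative)
w = np.zeros(d)
for _ in range(300):                      # ISTA: gradient step, then shrinkage
    grad = X.T @ (X @ w - y) / n
    w = soft_threshold(w - grad / L, lam / L)

n_zero = int(np.sum(w == 0))              # irrelevant inputs driven to exact 0
```

The penalty drives the coefficients of most of the 18 irrelevant inputs to exactly zero, which is the sense in which an input-layer L1 penalty can yield a risk bound free of the ambient dimension d.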

2.1. MODEL ASSUMPTION AND EVALUATION

Suppose we have n labeled observations {(x_i, y_i)}_{i=1,...,n}, where the y_i's are continuously-valued responses or labels. We assume that the underlying data-generating model is y_i = f*(x_i) + ε_i for some unknown function f*(·), where the x_i's ∈ X ⊂ R^d are independent and identically distributed,

