PATH REGULARIZATION: A CONVEXITY AND SPARSITY INDUCING REGULARIZATION FOR PARALLEL RELU NETWORKS

Abstract

Understanding the fundamental principles behind the success of deep neural networks is one of the most important open questions in the current literature. To this end, we study the training problem of deep neural networks and introduce an analytic approach to unveil hidden convexity in the optimization landscape. We consider a deep parallel ReLU network architecture, which also includes standard deep networks and ResNets as its special cases. We then show that pathwise regularized training problems can be represented as an exact convex optimization problem. We further prove that the equivalent convex problem is regularized via a group sparsity inducing norm. Thus, a path regularized parallel ReLU network can be viewed as a parsimonious convex model in high dimensions. More importantly, since the original training problem may not be trainable in polynomial time, we propose an approximate algorithm with fully polynomial-time complexity in all data dimensions. We then prove strong global optimality guarantees for this algorithm. We also provide experiments corroborating our theory.

1. INTRODUCTION

Deep Neural Networks (DNNs) have achieved substantial improvements in several fields of machine learning. However, since DNNs have a highly nonlinear and non-convex structure, the fundamental principles behind their remarkable performance are still an open problem. Therefore, advances in this field largely depend on heuristic approaches. One of the most prominent techniques to boost the generalization performance of DNNs is regularizing the layer weights so that the network fits a function that performs well on unseen test data. Even though weight decay, i.e., penalizing the squared ℓ2-norm of the layer weights, is commonly employed as a regularization technique in practice, it has recently been shown that the ℓ2-path regularizer (Neyshabur et al., 2015b), i.e., the sum over all paths in the network of the squared product of all weights along the path, achieves further empirical gains (Neyshabur et al., 2015a). Therefore, in this paper, we investigate the underlying mechanisms behind path regularized DNNs through the lens of convex optimization.

[Figure 2 appears here: a parallel ReLU network with sub-networks 1, ..., K mapping the input x to the output f_θ(X) = Σ_{k=1}^K ((X W_{1k})_+ W_{2k})_+ w_{3k}, together with the path regularization R(θ) summed over path indices j_1, j_2, ...; see the caption of Figure 2 below.]

Comparison of training complexities for ReLU networks:

Work                    | Loss        | Two-layer networks              | L-layer networks                                              | Remarks
Goel et al. (2021)      | ℓ2-loss     | 2^{O(m_1^{5/2})} poly(n, d)     | -                                                             | (NP-hard)
Froese et al. (2021)    | ℓp-loss     | O(2^{m_1} n^{d m_1} poly(n, d, m_1)) | -                                                        | (Brute-force)
Pilanci & Ergen (2020)  | Convex loss | O(n^r poly(d, r))               | -                                                             | (Convex, exact)
Ours                    | Convex loss | O(n^r poly(d, r))               | O(n^{r ∏_{j=1}^{L-2} m_j} poly(d, r, ∏_{j=1}^{L-2} m_j))      | (Convex, exact)
Ours                    | Convex loss | O(n^κ poly(d, κ))               | O(n^{κ ∏_{j=1}^{L-2} m_j} poly(d, κ, ∏_{j=1}^{L-2} m_j))      | (Convex, ε-opt)

2. PARALLEL NEURAL NETWORKS

Although DNNs are highly complex architectures due to the composition of multiple nonlinear functions, their parameters are often trained via simple first-order gradient-based algorithms, e.g., Gradient Descent (GD) and its variants.
However, since such algorithms rely only on the local gradient of the objective function, they may fail to globally optimize the objective in certain cases (Shalev-Shwartz et al., 2017; Goodfellow et al., 2016). Similarly, Ge et al. (2017); Safran & Shamir (2018) showed that these pathological cases also apply to stochastic algorithms such as Stochastic GD (SGD). They further showed that some of these issues can be avoided by increasing the number of trainable parameters, i.e., operating in an overparameterized regime. However, Anandkumar & Ge (2016) reported the existence of more complicated cases where SGD/GD usually fails. Therefore, training DNNs to global optimality remains a challenging optimization problem (DasGupta et al., 1995; Blum & Rivest, 1989; Bartlett & Ben-David, 1999). To circumvent difficulties in training, recent studies have focused on models that benefit from overparameterization (Brutzkus et al., 2017; Du & Lee, 2018; Arora et al., 2018b; Neyshabur et al., 2018).

Notation and preliminaries: Throughout the paper, we denote matrices and vectors as uppercase and lowercase bold letters, respectively. For vectors and matrices, we use subscripts to denote a certain column/element. As an example, w_{lk, j_{l-1} j_l} denotes the (j_{l-1}, j_l)-th entry of the matrix W_{lk}. We use I_k and 0 (or 1) to denote the identity matrix of size k × k and a vector/matrix of zeros (or ones), respectively.
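To make the ℓ2-path regularizer from the introduction concrete, the following NumPy sketch computes, for a single three-layer sub-network, the sum over all input-to-output paths of the product of squared weights along each path. The function name and shapes are illustrative, not taken from the paper's code; the key observation is that elementwise squaring followed by chained matrix products enumerates all paths at once.

```python
import numpy as np

def l2_path_reg(W1, W2, w3):
    """l2-path regularizer of a three-layer sub-network.

    Computes sum_{i, j1, j2} W1[i, j1]^2 * W2[j1, j2]^2 * w3[j2]^2,
    i.e., the sum over all paths i -> j1 -> j2 -> output of the product
    of squared weights. Squaring elementwise and chaining matrix
    products evaluates this in O(d*m1*m2) time instead of looping
    over every individual path.

    Shapes: W1 is (d, m1), W2 is (m1, m2), w3 is (m2,).
    """
    return float(np.ones(W1.shape[0]) @ (W1 ** 2) @ (W2 ** 2) @ (w3 ** 2))
```

A brute-force triple loop over all (i, j1, j2) paths gives the same value, which is a useful sanity check for small widths.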



Figure 1: Decision boundaries of 2-layer and 3-layer ReLU networks that are globally optimized with weight decay (WD) and path regularization (PR). Here, our convex training approach in (c) successfully learns the underlying spiral pattern for each class while the previously studied convex models in (a) and (b) fail (see Appendix A.1 for details).


Figure 2: (Left): Parallel ReLU network in (1) with K sub-networks and three layers (L = 3). (Right): Path regularization for a three-layer network.
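The forward pass of the parallel architecture in Figure 2 can be sketched in a few lines of NumPy, assuming the three-layer form f_θ(X) = Σ_{k=1}^K ((X W_{1k})_+ W_{2k})_+ w_{3k} with (·)_+ = max(·, 0); the function and parameter names below are illustrative, not from the paper's code.

```python
import numpy as np

def relu(z):
    # (z)_+ = max(z, 0), applied elementwise
    return np.maximum(z, 0.0)

def parallel_relu_net(X, params):
    """Three-layer parallel ReLU network with K sub-networks.

    Each sub-network k maps X -> ((X W1)_+ W2)_+ w3, and the K
    sub-network outputs are summed to form the network output.
    params is a list of (W1, W2, w3) triples, one per sub-network.
    """
    out = np.zeros(X.shape[0])
    for W1, W2, w3 in params:
        out += relu(relu(X @ W1) @ W2) @ w3
    return out
```

With K = 1 and identity weights this reduces to summing the positive entries of each row of X, which makes the architecture easy to sanity-check; standard deep networks correspond to the special case K = 1.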



