Deep Equals Shallow for ReLU Networks in Kernel Regimes

Abstract

Deep networks are often considered more expressive than shallow ones in terms of approximation: certain functions can provably be approximated more efficiently by deep networks than by shallow ones. However, no tractable algorithms are known for learning such deep models. Separately, a recent line of work has shown that deep networks trained with gradient descent may behave like (tractable) kernel methods in a certain over-parameterized regime, where the kernel is determined by the architecture and initialization; this paper focuses on approximation for such kernels. We show that for ReLU activations, the kernels derived from deep fully-connected networks have essentially the same approximation properties as their "shallow" two-layer counterpart, namely the same eigenvalue decay for the corresponding integral operator. This highlights the limitations of the kernel framework for understanding the benefits of such deep architectures. Our main theoretical result relies on characterizing such eigenvalue decays through differentiability properties of the kernel function, an approach that also readily applies to the study of other kernels defined on the sphere.
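The deep-versus-shallow comparison above can be probed numerically. The following sketch (not from the paper; the recursion is the standard arc-cosine construction for fully-connected ReLU networks with He-style initialization on unit-norm inputs) builds the NTK for a given depth and compares the leading Gram-matrix eigenvalues at depth 2 and depth 5 on random points from the sphere.

```python
import numpy as np

def kappa0(t):
    # Arc-cosine kernel of degree 0 (expected ReLU derivative correlation).
    t = np.clip(t, -1.0, 1.0)
    return (np.pi - np.arccos(t)) / np.pi

def kappa1(t):
    # Arc-cosine kernel of degree 1 (expected ReLU activation correlation).
    t = np.clip(t, -1.0, 1.0)
    return (t * (np.pi - np.arccos(t)) + np.sqrt(1.0 - t * t)) / np.pi

def ntk_relu(u, depth):
    """NTK of a depth-`depth` fully-connected ReLU network, evaluated at
    cosine u between two unit-norm inputs, via the standard recursion
    Theta^(l) = Theta^(l-1) * kappa0(Sigma^(l-1)) + kappa1(Sigma^(l-1))."""
    sigma = np.asarray(u, dtype=float)  # covariance of first-layer pre-activations
    ntk = sigma                         # NTK of the one-layer (linear) network
    for _ in range(depth - 1):
        new_sigma = kappa1(sigma)
        ntk = ntk * kappa0(sigma) + new_sigma
        sigma = new_sigma
    return ntk

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere
U = X @ X.T                                     # pairwise cosines

for depth in (2, 5):
    eig = np.sort(np.linalg.eigvalsh(ntk_relu(U, depth)))[::-1]
    # Compare the normalized leading eigenvalues across depths.
    print(depth, eig[:5] / eig[0])
```

The printed spectra give a finite-sample glimpse of the paper's claim: once normalized, the eigenvalue profiles of the depth-2 and depth-5 NTK Gram matrices decay at essentially the same rate.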

1. Introduction

The question of which functions can be well approximated by neural networks is crucial for understanding when these models succeed, and has always been at the heart of the theoretical study of neural networks (e.g., Hornik et al., 1989; Pinkus, 1999). While early works mostly focused on shallow networks with only two layers, more recent works have shown benefits of deep networks for approximating certain classes of functions (Eldan & Shamir, 2016; Mhaskar & Poggio, 2016; Telgarsky, 2016; Daniely, 2017; Yarotsky, 2017; Schmidt-Hieber et al., 2020). Unfortunately, many of these approaches rely on constructions that are not currently known to be learnable with efficient algorithms. A separate line of work has considered over-parameterized networks with random neurons (Neal, 1996), which also enjoy universal approximation properties while additionally providing efficient algorithms based on kernel methods or approximations thereof, such as random features (Rahimi & Recht, 2007; Bach, 2017b). Gradient-based optimization of certain over-parameterized networks has recently been shown to be equivalent to a kernel method with an architecture-specific kernel called the neural tangent kernel (NTK), and thus also falls in this category (e.g., Jacot et al., 2018; Li & Liang, 2018; Allen-Zhu et al., 2019b; Du et al., 2019a;b; Zou et al., 2019). This regime has been coined lazy (Chizat et al., 2019): it does not capture the common phenomenon where weights move significantly away from random initialization, and thus may not provide a satisfying model for learning adaptive representations. This stands in contrast to other settings such as the mean field or active regime, which captures complex training dynamics where weights may move in a non-trivial manner and adapt to the data (e.g., Chizat & Bach, 2018; Mei et al., 2018). Nevertheless,

