ON THE NEURAL TANGENT KERNEL OF EQUILIBRIUM MODELS

Abstract

This work studies the neural tangent kernel (NTK) of the deep equilibrium (DEQ) model, a practical "infinite-depth" architecture that directly computes the infinite-depth limit of a weight-tied network via root-finding. Although the NTK of a fully-connected neural network can be stochastic when its width and depth tend to infinity simultaneously, we show that, in contrast, a DEQ model still enjoys a deterministic NTK under mild conditions even as its width and depth go to infinity at the same time. Moreover, this deterministic NTK can be computed efficiently via root-finding.

1. INTRODUCTION

Implicit models form a new class of machine learning models in which, instead of stacking explicit "layers", the model outputs z such that g(x, z) = 0, where g can be a fixed-point equation (Bai et al., 2019), a differential equation (Chen et al., 2018b), or an optimization problem (Gould et al., 2019). This work focuses on deep equilibrium (DEQ) models, a class of models that effectively represent an "infinite-depth" weight-tied network with input injection. Specifically, let f_θ be a network parameterized by θ and let x be an input injection; a DEQ finds z* such that f_θ(z*, x) = z*, and uses z* as the input for downstream tasks. One interesting question to ask is: what do DEQs become if their widths also go to infinity? It is well known that under certain random initializations, neural networks of various architectures converge to Gaussian processes as their widths go to infinity (Neal, 1996; Lee et al., 2017; Yang, 2019; Matthews et al., 2018; Novak et al., 2018; Garriga-Alonso et al., 2018). Recent advances in deep learning theory have also shown that in the infinite-width limit, with proper initialization (the NTK initialization), training the network f_θ with gradient descent is equivalent to solving kernel regression with respect to the neural tangent kernel (NTK) (Arora et al., 2019; Jacot et al., 2018; Yang, 2019; Huang et al., 2020). These kernel regimes provide important insights into how neural networks work. However, the infinite-depth (denote depth by d) regime introduces several caveats. Since the NTK is defined in the infinite-width (denote width by n) limit, a question naturally arises: how should we let n, d → ∞? Hanin & Nica (2019) proved that as long as d/n ∈ (0, ∞), the NTK of a vanilla fully-connected neural network (FCNN) becomes stochastic. On the other hand, if we first take n → ∞ and then d → ∞,¹ Jacot et al. (2019) showed that the NTK of an FCNN converges either to a constant (freeze) or to the Kronecker delta (chaos).
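To make the DEQ forward pass above concrete, the following is a minimal sketch for a single hypothetical layer f_θ(z, x) = tanh(Wz + Ux + b). Plain fixed-point iteration is used here only because W is scaled to be a contraction (an assumption of this sketch); in practice, Bai et al. (2019) use quasi-Newton root-finding such as Broyden's method.

```python
import numpy as np

def deq_forward(W, U, b, x, tol=1e-9, max_iter=1000):
    """Find z* with z* = tanh(W @ z* + U @ x + b) by fixed-point iteration.
    Converges when the map is a contraction, i.e. ||W||_2 < 1 (tanh is 1-Lipschitz)."""
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.tanh(W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(0)
n = 64
# A random n x n Gaussian matrix has spectral norm ~ 2*sqrt(n); dividing by
# 3*sqrt(n) keeps ||W||_2 < 1 so the iteration above is a contraction.
W = rng.standard_normal((n, n)) / (3 * np.sqrt(n))
U = rng.standard_normal((n, n)) / np.sqrt(n)
b = np.zeros(n)
x = rng.standard_normal(n)

z_star = deq_forward(W, U, b, x)
residual = np.linalg.norm(np.tanh(W @ z_star + U @ x + b) - z_star)
```

The returned z* would then be fed to the downstream head exactly as an ordinary hidden representation.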
In this work, we prove that with proper initialization, the NTK of the DEQ enjoys a limit-exchanging property, lim_{d→∞} lim_{n→∞} Θ_n^{(d)}(x, y) = lim_{n→∞} lim_{d→∞} Θ_n^{(d)}(x, y), with high probability, where Θ_n^{(d)} denotes the empirical NTK of a neural network with d layers and n neurons per layer. Intuitively, we call the left-hand side the "DEQ-of-NTK" and the right-hand side the "NTK-of-DEQ". The NTK-of-DEQ converges to meaningful deterministic fixed points that depend on the input in a non-trivial way, thus avoiding the freeze vs. chaos scenario. Furthermore, analogous to DEQ models, we can compute these kernels by solving fixed-point equations, rather than by iteratively applying layer-wise updates as for the traditional NTK. We evaluate our approach and demonstrate that it matches the performance of existing regularized NTK methods.

¹ The computed quantity is lim_{d→∞} lim_{n→∞} Θ_n^{(d)}(x, y).
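As a hedged illustration of computing a kernel by root-finding rather than by layer-wise updates, the sketch below solves for the infinite-depth kernel of a weight-tied ReLU network with input injection, z^{l+1} = φ(W z^l + U x). The closed-form ReLU dual kernels (arc-cosine kernels) are standard; the variance parameters sw2, su2 and the overall recursion are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def relu_dual_kernels(sxx, sxy, syy):
    """Closed-form Gaussian expectations for ReLU: returns E[phi(u)phi(v)]
    and E[phi'(u)phi'(v)] for (u, v) ~ N(0, [[sxx, sxy], [sxy, syy]])."""
    denom = np.sqrt(sxx * syy)
    c = np.clip(sxy / denom, -1.0, 1.0)
    theta = np.arccos(c)
    k1 = denom * (np.sin(theta) + (np.pi - theta) * c) / (2 * np.pi)
    k0 = (np.pi - theta) / (2 * np.pi)
    return k1, k0

def ntk_of_deq(x, y, sw2=1.0, su2=1.0, tol=1e-12, max_iter=1000):
    """Solve the covariance fixed point Sigma* = sw2*T(Sigma*) + su2*<x,y>
    by iteration, then the linear NTK fixed point
    Theta* = Sigma* + sw2*k0*Theta*, i.e. Theta* = Sigma*/(1 - sw2*k0)."""
    gxx, gxy, gyy = float(x @ x), float(x @ y), float(y @ y)
    sxx, sxy, syy = su2 * gxx, su2 * gxy, su2 * gyy  # start from the input kernel
    for _ in range(max_iter):
        kxx, _ = relu_dual_kernels(sxx, sxx, sxx)
        kxy, _ = relu_dual_kernels(sxx, sxy, syy)
        kyy, _ = relu_dual_kernels(syy, syy, syy)
        nxx = sw2 * kxx + su2 * gxx
        nxy = sw2 * kxy + su2 * gxy
        nyy = sw2 * kyy + su2 * gyy
        done = max(abs(nxx - sxx), abs(nxy - sxy), abs(nyy - syy)) < tol
        sxx, sxy, syy = nxx, nxy, nyy
        if done:
            break
    _, k0 = relu_dual_kernels(sxx, sxy, syy)
    # The derivative kernel satisfies sw2*k0 <= 1/2 for ReLU with sw2 = 1,
    # so the linear fixed-point equation for Theta has a unique solution.
    return sxy / (1 - sw2 * k0)

x = np.array([1.0, 2.0])
theta_xx = ntk_of_deq(x, x)  # for x = y the fixed point is solvable by hand
```

For x = y with sw2 = su2 = 1, the diagonal fixed point is Σ* = 2⟨x, x⟩ and k0 = 1/2, so Θ* = 4⟨x, x⟩, which the iteration above recovers; the kernel is obtained by root-finding rather than by unrolling d layers.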

