ON GRADIENT DESCENT CONVERGENCE BEYOND THE EDGE OF STABILITY

Anonymous authors
Paper under double-blind review

Abstract

Gradient Descent (GD) is a powerful workhorse of modern machine learning thanks to its scalability and efficiency in high-dimensional spaces. Its ability to find local minimisers is only guaranteed for losses with Lipschitz gradients, where it can be seen as a 'bona fide' discretisation of an underlying gradient flow. Yet, many ML setups involving overparametrised models do not fall into this problem class, which has motivated research beyond the so-called "Edge of Stability" (EoS), where the step-size crosses the admissibility threshold inversely proportional to the Lipschitz constant above. Perhaps surprisingly, GD has been empirically observed to still converge despite local instability and oscillatory behavior. The incipient theoretical analysis of this phenomenon has mainly focused on the overparametrised regime, where the effect of choosing a large learning rate may be associated with a 'Sharpness-Minimisation' implicit regularisation within the manifold of minimisers, under appropriate asymptotic limits. In contrast, in this work we directly examine the conditions for such unstable convergence, focusing on simple, yet representative, learning problems. Specifically, we characterize a local condition involving third-order derivatives that stabilizes the oscillations of GD above the EoS, and leverage this property in a teacher-student setting, under the population loss. Finally, focusing on Matrix Factorization, we establish a nonasymptotic 'Local Implicit Bias' of GD above the EoS, whereby quasi-symmetric initializations converge to symmetric solutions, where sharpness is minimal amongst all minimisers.

1. INTRODUCTION

Given a differentiable objective function f(θ), where θ ∈ R^d is a high-dimensional parameter vector, the most basic and widely used optimization method is gradient descent (GD), defined as

θ^{(t+1)} = θ^{(t)} − η ∇_θ f(θ^{(t)}),

where η is the learning rate. For all its widespread application across many different ML setups, a basic question remains: what are the convergence guarantees of GD (even to a local minimiser) under typical objective functions, and how do they depend on its (only) hyperparameter η? In the modern context of large-scale ML applications, an additional key question is not only whether GD converges to minimisers, but to which ones, since overparametrisation defines a whole manifold of global minimisers, all potentially enjoying drastically different generalisation performance. The sensible regime in which to start the analysis is η → 0, where GD inherits the local convergence properties of the Gradient Flow ODE via standard arguments from numerical integration. However, in the early phase of training, a large learning rate has been observed to result in better generalization (LeCun et al., 2012; Bjorck et al., 2018; Jiang et al., 2019; Jastrzebski et al., 2021), where the extent of "large" is measured by comparing the learning rate η against the curvature of the loss landscape, quantified by λ(θ) := λ_max(∇²_θ f(θ)), the largest eigenvalue of the Hessian with respect to the learnable parameters. Although one requires sup_θ λ(θ) < 2/η to guarantee the convergence of GD to (local) minimisers¹ (Bottou et al., 2018), the work of Cohen et al. (2020) noticed a remarkable phenomenon in the context
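The classical stability threshold above can be illustrated with a minimal numerical sketch (our own, not from the paper): on a one-dimensional quadratic f(θ) = ½λθ², the Hessian is the constant λ, so the GD iteration contracts exactly when η < 2/λ and diverges when η > 2/λ. The function and constants below are illustrative choices, not notation from the paper.

```python
import numpy as np

def gd(theta0, grad, lr, steps):
    """Plain gradient descent: theta <- theta - lr * grad(theta)."""
    theta = theta0
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Quadratic f(theta) = 0.5 * lam * theta**2, whose Hessian is the
# constant lam; the classical stability condition is lr < 2 / lam.
lam = 4.0                      # sharpness lambda_max = 4, threshold 2/lam = 0.5
grad = lambda t: lam * t       # gradient of the quadratic

stable = gd(1.0, grad, lr=0.4, steps=100)    # 0.4 < 0.5: iterates contract
unstable = gd(1.0, grad, lr=0.6, steps=100)  # 0.6 > 0.5: iterates blow up

# Each step multiplies theta by (1 - lr * lam): |-0.6| < 1 vs |-1.4| > 1.
print(abs(stable) < 1e-6, abs(unstable) > 1e6)  # → True True
```

On a quadratic the boundary is exact; the EoS literature discussed in this paper concerns precisely the non-quadratic setting, where higher-order terms can stabilise GD even when the local sharpness exceeds 2/η.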



¹ One can replace the uniform curvature bound by sup_{θ : f(θ) ≤ f(θ^{(0)})} λ(θ).

