SHARPER RATES AND FLEXIBLE FRAMEWORK FOR NONCONVEX SGD WITH CLIENT AND DATA SAMPLING

Anonymous authors
Paper under double-blind review

Abstract

We revisit the classical problem of finding an approximately stationary point of the average of n smooth and possibly nonconvex functions. The optimal complexity of stochastic first-order methods in terms of the number of gradient evaluations of individual functions is O(n + n^{1/2} ε^{-1}), attained by the optimal SGD methods SPIDER (Fang et al., 2018) and PAGE (Li et al., 2021), for example, where ε is the error tolerance. However, i) the big-O notation hides crucial dependencies on the smoothness constants associated with the functions, and ii) the rates and theory of these methods assume simplistic sampling mechanisms that do not offer any flexibility. In this work we remedy the situation. First, we generalize the PAGE algorithm so that it can provably work with virtually any (unbiased) sampling mechanism. This is particularly useful in federated learning, as it allows us to construct and better understand the impact of various combinations of client and data sampling strategies. Second, our analysis is sharper as we make explicit use of certain novel inequalities that capture the intricate interplay between the smoothness constants and the sampling procedure. Indeed, our analysis is better even for the simple sampling procedure analyzed in the PAGE paper. Moreover, this already improved bound can be further sharpened by a different sampling scheme which we propose. In summary, we provide the most general and most accurate analysis of optimal SGD in the smooth nonconvex regime. Finally, our theoretical findings are supported by carefully designed experiments.

1. INTRODUCTION

In this paper, we consider the minimization of the average of n smooth functions (1) in the nonconvex setting, in the regime when the number of functions n is very large. In this regime, calculation of the exact gradient can be infeasible and the classical gradient descent method (GD) (Nesterov, 2018) cannot be applied. The structure of the problem is generic, and such problems arise in many applications, including machine learning (Bishop & Nasrabadi, 2006) and computer vision (Goodfellow et al., 2016). Problems of this form are the basis of empirical risk minimization (ERM), which is the prevalent paradigm for training supervised machine learning models.

1.1. FINITE-SUM OPTIMIZATION IN THE SMOOTH NONCONVEX REGIME

We consider the finite-sum optimization problem

min_{x ∈ R^d} f(x) := (1/n) Σ_{i=1}^n f_i(x),   (1)

where f_i : R^d → R is a smooth (and possibly nonconvex) function for all i ∈ [n] := {1, . . . , n}. We are interested in randomized algorithms that find an ε-stationary point of (1) by returning a random point x̂ such that E[‖∇f(x̂)‖²] ≤ ε. The main efficiency metric of gradient-based algorithms for finding such a point is the (expected) number of evaluations of the gradients ∇f_i; we will refer to it as the complexity of an algorithm.
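To make the setup concrete, here is a small self-contained sketch (with synthetic, hypothetical components f_i, not taken from the paper) of the finite-sum objective and the ε-stationarity criterion checked at a fixed point:

```python
import numpy as np

# Hypothetical smooth nonconvex components (illustration only):
# f_i(x) = ||A_i x - b_i||^2 + sin(c_i^T x), so f(x) = (1/n) sum_i f_i(x).
rng = np.random.default_rng(0)
n, d = 50, 10
A = rng.standard_normal((n, d, d)) / np.sqrt(d)
b = rng.standard_normal((n, d))
c = rng.standard_normal((n, d))

def grad_fi(i, x):
    """Gradient of the i-th component f_i at x."""
    return 2.0 * A[i].T @ (A[i] @ x - b[i]) + np.cos(c[i] @ x) * c[i]

def grad_f(x):
    """Full gradient of f: the average of the n component gradients."""
    return sum(grad_fi(i, x) for i in range(n)) / n

def is_eps_stationary(x, eps):
    """Check the stationarity criterion ||grad f(x)||^2 <= eps at a fixed x."""
    g = grad_f(x)
    return float(g @ g) <= eps

print(is_eps_stationary(np.zeros(d), eps=1e-6))
```

Evaluating `grad_f` once costs n individual gradient evaluations, which is exactly the quantity the complexity metric above counts.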

1.2. RELATED WORK

The area of algorithmic research devoted to designing methods for solving the ERM problem (1) in the smooth nonconvex regime is one of the most highly developed and most competitive in optimization.

The path to optimality. Let us provide a lightning-speed overview of recent progress. The complexity of GD for solving (1) is O(nε^{-1}), but this was subsequently improved by more elaborate stochastic methods, including SAGA, SVRG and SCSG (Defazio et al., 2014; Johnson & Zhang, 2013; Lei et al., 2017; Horváth & Richtárik, 2019), which enjoy the better complexity O(n + n^{2/3} ε^{-1}). Further progress was obtained by methods such as SNVRG and Geom-SARAH (Zhou et al., 2018; Horváth et al., 2020), improving the complexity to Õ(n + n^{1/2} ε^{-1}). Finally, the methods SPIDER, SpiderBoost, SARAH and PAGE (Fang et al., 2018; Wang et al., 2019; Nguyen et al., 2017; Li et al., 2021), among others, shaved off certain logarithmic factors and obtained the optimal complexity O(n + n^{1/2} ε^{-1}), matching lower bounds (Li et al., 2021).

Optimal, but hiding a secret. While it may look like this is the end of the road, the starting point of our work is the observation that the big-O notation in the above results hides important and typically very large data-dependent constants. For instance, it is rarely noted that the more precise complexity of GD is O(L₋ n ε^{-1}), while the complexity of the optimal methods, for instance PAGE, is O(n + L₊ n^{1/2} ε^{-1}) (Li et al., 2021). We thus believe that an in-depth study of sampling mechanisms for optimal methods will be of interest to the federated learning community. There exists prior work on analyzing non-optimal SGD variants with flexible sampling mechanisms. For example, using the "arbitrary sampling" paradigm, originally proposed by Richtárik & Takáč (2016) in the study of randomized coordinate descent methods, Horváth & Richtárik (2019) and Qian et al. (2021) analyzed the SVRG, SAGA, and SARAH methods, and showed that it is possible to improve the dependence of these methods on the smoothness constants via carefully crafted sampling strategies. Further, Zhao & Zhang (2014) investigated stratified sampling, but only provided an analysis for vanilla SGD, and only in the convex case.
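To see how large the gap between these complexity regimes can be, the following back-of-the-envelope computation (with hypothetical values of n and ε, ignoring constants and smoothness factors) compares the leading terms of the three classes of rates mentioned above:

```python
import math

# Hypothetical problem size and tolerance (illustration only).
n, eps = 10**6, 10**-4

gd        = n / eps                 # GD:             O(n * eps^-1)
svrg_like = n + n**(2/3) / eps      # SAGA/SVRG/SCSG: O(n + n^(2/3) * eps^-1)
optimal   = n + math.sqrt(n) / eps  # SPIDER/PAGE:    O(n + n^(1/2) * eps^-1)

print(f"GD:            {gd:.2e} gradient evaluations")
print(f"n^(2/3) rate:  {svrg_like:.2e}")
print(f"optimal rate:  {optimal:.2e}")
```

For these values the optimal rate is roughly three orders of magnitude cheaper than GD, which is why the (usually ignored) multiplicative smoothness constants L₋ and L₊ in front of the ε^{-1} terms can matter so much in practice.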

1.3. SUMMARY OF CONTRIBUTIONS

• Specifically, in the original paper (Li et al., 2021), the optimal (w.r.t. n and ε) optimization method PAGE was analyzed with a simple uniform mini-batch sampling with replacement. We analyze PAGE with virtually any (unbiased) sampling mechanism using a novel Assumption 4. Moreover, we show that some samplings can improve the convergence rate O(n + L₊ n^{1/2} ε^{-1}) of PAGE (see Table 2).
• We improve the analysis of PAGE using a new quantity, the weighted Hessian Variance L± (or L±,w), which is well-defined if the functions f_i are L_i-smooth. We show that, when the functions f_i are "similar" in the sense of the weighted Hessian Variance, PAGE enjoys faster convergence rates (see Table 2). Also, unlike Szlendak et al. (2021), we introduce weights w_i that can play a crucial role in some samplings. Moreover, the experiments in Section 5 agree with our theoretical results.
• Our framework is flexible and can be generalized to compositions of samplings. These samplings naturally emerge in federated learning (Konečný et al., 2016; McMahan et al., 2017), and we show that our framework can be helpful in the analysis of problems from federated learning.
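To illustrate the first contribution, here is a minimal sketch (a hypothetical implementation, not the paper's code) of a PAGE-style step in which the mini-batch correction accepts any pluggable sampling rule returning indices and weights, so long as the resulting weighted sum is an unbiased estimator of the averaged gradient difference; uniform sampling with replacement, as analyzed in the original PAGE paper, is one instance:

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_with_replacement(n, batch):
    """The simple sampling analyzed in the original PAGE paper:
    `batch` indices drawn uniformly with replacement, equal weights."""
    idx = rng.integers(0, n, size=batch)
    return idx, np.full(batch, 1.0 / batch)

def page_step(x, g, grad_fi, n, p, gamma, sample):
    """One PAGE iteration with a pluggable unbiased sampling mechanism.

    With probability p the gradient estimator is refreshed with a full
    gradient; otherwise it is updated recursively on a sampled mini-batch.
    """
    x_new = x - gamma * g
    if rng.random() < p:
        g_new = sum(grad_fi(i, x_new) for i in range(n)) / n
    else:
        idx, w = sample()
        g_new = g + sum(wi * (grad_fi(i, x_new) - grad_fi(i, x))
                        for i, wi in zip(idx, w))
    return x_new, g_new
```

Any other sampling (importance sampling, client/data sampling compositions, etc.) can be dropped in by replacing `sample`, provided it keeps the estimator unbiased in the above sense.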

2. ASSUMPTIONS

We need the following standard assumptions from nonconvex optimization.

Assumption 1. There exists f* ∈ R such that f(x) ≥ f* for all x ∈ R^d.

Assumption 2. There exists L₋ ≥ 0 such that ‖∇f(x) − ∇f(y)‖ ≤ L₋ ‖x − y‖ for all x, y ∈ R^d.
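As a quick sanity check on Assumption 2, the following sketch (illustrative only; the quadratic test function and all names are ours, not the paper's) numerically lower-bounds the smoothness constant L₋ by sampling random pairs of points, and compares it with the exact value for a quadratic f(x) = ½ xᵀHx, whose L₋ is the spectral norm of H even when H is indefinite (i.e., f is nonconvex):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
M = rng.standard_normal((d, d))
H = (M + M.T) / 2          # symmetric, possibly indefinite -> nonconvex f
grad_f = lambda x: H @ x   # gradient of f(x) = 0.5 * x^T H x

def estimate_L(grad, d, trials=2000):
    """Lower-bound L- by the largest observed ratio
    ||grad(x) - grad(y)|| / ||x - y|| over random pairs (x, y)."""
    best = 0.0
    for _ in range(trials):
        x, y = rng.standard_normal(d), rng.standard_normal(d)
        best = max(best, np.linalg.norm(grad(x) - grad(y))
                         / np.linalg.norm(x - y))
    return best

L_est = estimate_L(grad_f, d)
L_true = np.max(np.abs(np.linalg.eigvalsh(H)))  # spectral norm of H
print(L_est, L_true)  # the estimate approaches L_true from below
```

Note that such random probing only ever certifies a lower bound on L₋; the theory of course requires the inequality to hold for all pairs x, y.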

