CRITICAL BATCH SIZE MINIMIZES STOCHASTIC FIRST-ORDER ORACLE COMPLEXITY OF DEEP LEARNING OPTIMIZER USING HYPERPARAMETERS CLOSE TO ONE

Anonymous authors
Paper under double-blind review

Abstract

Practical results have shown that deep learning optimizers using small constant learning rates, hyperparameters close to one, and large batch sizes can find model parameters of deep neural networks that minimize loss functions. We first provide theoretical evidence that the momentum method (Momentum) and adaptive moment estimation (Adam) perform well, in the sense that the upper bound of the theoretical performance measure is small, when a small constant learning rate, hyperparameters close to one, and a large batch size are used. Next, we show that there exists a batch size, called the critical batch size, that minimizes the stochastic first-order oracle (SFO) complexity, i.e., the stochastic gradient computation cost, and that the SFO complexity increases once the batch size exceeds the critical batch size. Finally, we provide numerical results supporting our theoretical findings: Adam using a small constant learning rate, hyperparameters close to one, and the critical batch size minimizing the SFO complexity converges faster than Momentum and stochastic gradient descent (SGD).

1. INTRODUCTION

1.1 BACKGROUND

Useful deep learning optimizers have been proposed to find the model parameters of deep neural networks that minimize loss functions, called the expected risk and the empirical risk. These optimizers include stochastic gradient descent (SGD) (Robbins & Monro, 1951; Zinkevich, 2003; Nemirovski et al., 2009; Ghadimi & Lan, 2012; 2013), momentum methods (Polyak, 1964; Nesterov, 1983), and adaptive methods. The various adaptive methods include Adaptive Gradient (AdaGrad) (Duchi et al., 2011), Root Mean Square Propagation (RMSProp) (Tieleman & Hinton, 2012), Adaptive Moment Estimation (Adam) (Kingma & Ba, 2015), Adaptive Mean Square Gradient (AMSGrad) (Reddi et al., 2018), Yogi (Zaheer et al., 2018), Adam with decoupled weight decay (AdamW) (Loshchilov & Hutter, 2019), and AdaBelief (named for adapting stepsizes by the belief in observed gradients) (Zhuang et al., 2020). Theoretical analyses of adaptive methods for nonconvex optimization were presented in (Zaheer et al., 2018; Zou et al., 2019; Chen et al., 2019; Zhou et al., 2020; Zhuang et al., 2020; Chen et al., 2021) (see (Jain et al., 2018; Fehrman et al., 2020; Chen et al., 2020; Scaman & Malherbe, 2020; Loizou et al., 2021) for convergence analyses of SGD). A particularly interesting feature of adaptive methods is the use of hyperparameters, denoted by $\beta_1$ and $\beta_2$, that can be set to influence the performance measure $P(K) := \frac{1}{K}\sum_{k=1}^{K} \mathbb{E}[\|\nabla f(\theta_k)\|^2]$, where $\nabla f$ is the gradient of a loss function $f \colon \mathbb{R}^d \to \mathbb{R}$, $(\theta_k)_{k=1}^{K}$ is the sequence generated by an optimizer, and $K$ is the number of steps. The previous results, summarized in Table 1, indicate that using $\beta_1$ and/or $\beta_2$ close to 0 makes the upper bound of $P(K)$ small (see also Appendix A.1). Meanwhile, practical results for adaptive methods were presented in (Kingma & Ba, 2015; Reddi et al., 2018; Zaheer et al., 2018; Zou et al., 2019; Chen et al., 2019; Zhuang et al., 2020; Chen et al., 2021).
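The performance measure above is simply the average squared gradient norm over the first $K$ steps. As a minimal sketch (the function name and the example gradient sequence are hypothetical, for illustration only), it can be estimated from a history of gradient vectors:

```python
# Sketch: estimating the performance measure
#   P(K) = (1/K) * sum_{k=1}^{K} E[ ||grad f(theta_k)||^2 ]
# from a recorded sequence of gradient vectors (expectation replaced
# by the observed gradients). Names here are illustrative, not from
# the paper.

def performance_measure(grads):
    """Average squared Euclidean norm of the gradients in `grads`.

    Each element of `grads` is a gradient vector given as a list of floats.
    """
    K = len(grads)
    return sum(sum(x * x for x in g) for g in grads) / K

# Example: gradients shrinking toward zero yield a small P(K),
# i.e., the optimizer is approaching a stationary point.
grads = [[1.0 / (k + 1), 1.0 / (k + 1)] for k in range(100)]
```

Running more steps with shrinking gradients drives the measure down, which is exactly what the upper bounds in Table 1 quantify.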
These studies have shown that using, for example, β1 ∈ {0.9, 0.99} and β2 ∈ {0.99, 0.999} provides superior performance for training deep neural networks. The practically useful β1 and β2 are each close to 1, whereas the theoretical results (Table 1) show that using β1 and/or β2 close to 0 makes the upper bounds of the performance measures small.

Table 1: Upper bounds of the performance measure of optimizers with learning rate α_k and hyperparameters β1 and β2. (G > 0; s ∈ (0, 1/2); L denotes the Lipschitz constant of the Lipschitz continuous gradient of the loss function f; K denotes the number of steps; b is the batch size; α_{b,max} depends on b and the largest eigenvalue of the Hessian of f; h is a monotone decreasing function of β1; C3 is defined as in Table 2. "β ≈ a" means that the upper bound is small if β is close to a.)

Optimizer                              | Learning rate α_k | Hyperparameters β1, β2   | Upper bound
Tail-averaged SGD (Jain et al., 2018)  | O(α_{b,max})      | β1 = 0, β2 = 0           | O(1/K² + 1/(Kb))
Adam (Zaheer et al., 2018)             | O(1/L)            | β1 = 0, β2 ≥ 1 − O(1/G²) | O(1/K + 1/b)
Generic Adam (Zou et al., 2019)        | O(1/√k)           | β1 ≈ 0, β2 = 1 − 1/k ≈ 1 | O(log K/√K)
AdaFom (Chen et al., 2019)             | 1/√k              | β1 ≈ 0                   | O(log K/√K)
AMSGrad (Zhou et al., 2020)            | α                 | β1 ≈ 0, β1 < √β2         | O(1/K^{1/2−s})
AdaBelief (Zhuang et al., 2020)        | O(1/√k)           | β1 ≈ 0, β2 ≈ 0           | O(log K/√K)
Padam (Chen et al., 2021)              | α                 | β1 ≈ 0, β2 ≈ 0           | O(1/K^{1/2−s})
Adam, AMSGrad (this paper)             | α                 | β1 ≈ 1, β2 ≈ 1           | O(1/K + 1/b + C3)
Adam, AMSGrad (this paper)             | varying α_k       | β1 ≈ 1, β2 ≈ 1           | O(1/K + 1/(Kb) + h(β1))

The practical performance of a deep learning optimizer strongly depends on the batch size. In (Smith et al., 2018), it was numerically shown that using an enormous batch size leads to a reduction in the number of parameter updates and in model training time. The theoretical results in (Zaheer et al., 2018) showed that using large batch sizes makes the upper bound of P(K) of an adaptive method small (Table 1).
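The existence of a critical batch size can be made concrete from a bound of the form C1/K + C2/b, as in Table 1. Reaching P(K) ≤ ε² then requires K(b) ≥ C1·b/(ε²·b − C2) steps, so the SFO complexity K(b)·b = C1·b²/(ε²·b − C2) first decreases and then increases in b, with closed-form minimizer b* = 2·C2/ε². The sketch below illustrates this; the constants are hypothetical and chosen only for demonstration, not taken from the paper:

```python
# Toy illustration of a critical batch size minimizing SFO complexity,
# assuming a performance bound of the form  P(K) <= C1/K + C2/b
# (the shape appearing in Table 1). All constants are hypothetical.

C1, C2, EPS2 = 1.0, 0.01, 0.01  # EPS2 is the target accuracy eps^2

def steps_needed(b):
    """Smallest K allowed by the bound: C1/K + C2/b <= eps^2.

    Only defined for b > C2/eps^2, where the bound can be met at all.
    """
    return C1 * b / (EPS2 * b - C2)

def sfo_complexity(b):
    """Total stochastic gradient evaluations: K(b) * b."""
    return steps_needed(b) * b

# Closed-form minimizer of sfo_complexity (the critical batch size):
critical = 2 * C2 / EPS2

# SFO complexity is larger on either side of the critical batch size.
assert sfo_complexity(critical) < sfo_complexity(critical * 0.75)
assert sfo_complexity(critical) < sfo_complexity(critical * 1.5)
```

This matches the paper's claim in spirit: steps decrease with batch size (diminishing returns), so total gradient computations are minimized at an interior batch size and grow once b exceeds it.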
Convergence analyses of SGD in (Cotter et al., 2011; Chen et al., 2020; Arjevani et al., 2022) indicated that running SGD with a decaying learning rate and a large batch size for sufficiently many steps leads to convergence to a local minimizer of a loss function. Accordingly, the practical results for large batch sizes match the theoretical ones. In (Shallue et al., 2019; Zhang et al., 2019), it was studied how increasing the batch size affects the performance of deep learning optimizers. Both studies numerically showed that increasing the batch size tends to decrease the number of steps K needed for training deep neural networks, but with diminishing returns. Moreover, it was shown that momentum methods can exploit larger

