STOCHASTIC NORMALIZED GRADIENT DESCENT WITH MOMENTUM FOR LARGE BATCH TRAINING

Anonymous authors
Paper under double-blind review

Abstract

Stochastic gradient descent (SGD) and its variants have been the dominant optimization methods in machine learning. Compared with small batch training, SGD with large batch training can better utilize the computational power of current multi-core systems like GPUs and can reduce the number of communication rounds in distributed training. Hence, SGD with large batch training has attracted increasing attention. However, existing empirical results show that large batch training typically leads to a drop in generalization accuracy. As a result, large batch training has also become a challenging topic. In this paper, we propose a novel method, called stochastic normalized gradient descent with momentum (SNGM), for large batch training. We theoretically prove that, compared to momentum SGD (MSGD), which is one of the most widely used variants of SGD, SNGM can adopt a larger batch size to converge to an $\epsilon$-stationary point with the same computation complexity (total number of gradient computations). Empirical results on deep learning also show that SNGM can achieve state-of-the-art accuracy with a large batch size.

1. INTRODUCTION

In machine learning, we often need to solve the following empirical risk minimization problem:

$$\min_{w \in \mathbb{R}^d} F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w), \qquad (1)$$

where $w \in \mathbb{R}^d$ denotes the model parameter, $n$ denotes the number of training samples, and $f_i(w)$ denotes the loss on the $i$th training sample. The problem in (1) can be used to formulate a broad family of machine learning models, such as logistic regression and deep learning models. Stochastic gradient descent (SGD) Robbins & Monro (1951) and its variants have been the dominant optimization methods for solving (1). SGD and its variants are iterative methods. In the $t$th iteration, these methods randomly choose a subset (also called a mini-batch) $I_t \subset \{1, 2, \ldots, n\}$ of the training samples and compute a stochastic gradient based on it.

The main contributions of this paper are outlined as follows:

• We theoretically prove that, compared to MSGD, which is one of the most widely used variants of SGD, SNGM can adopt a larger batch size to converge to an $\epsilon$-stationary point with the same computation complexity (total number of gradient computations). That is to say, SNGM needs a smaller number of parameter updates, and hence has a faster training speed than MSGD.

• For a relaxed smooth objective function (see Definition 2), we theoretically show that SNGM can achieve an $\epsilon$-stationary point with a computation complexity of $O(1/\epsilon^4)$. To the best of our knowledge, this is the first work that analyzes the computation complexity of stochastic optimization methods for a relaxed smooth objective function.

• Empirical results on deep learning also show that SNGM can achieve state-of-the-art accuracy with a large batch size.
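To make the iterative scheme concrete, the sketch below runs mini-batch updates on a toy least-squares instance of problem (1). The MSGD update is the standard heavy-ball rule; the normalized-momentum step is an illustrative reading of the "normalized gradient descent with momentum" idea (momentum buffer normalized before the parameter update), not the paper's exact algorithm. The loss, data, and all hyperparameters (`beta`, `eta`, batch size) are hypothetical choices for illustration.

```python
import numpy as np

def minibatch_grad(w, X, y, idx):
    """Stochastic gradient of the least-squares loss on mini-batch idx."""
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / len(idx)

def msgd_step(w, u, g, beta=0.9, eta=0.05):
    """Standard momentum SGD: u <- beta*u + g;  w <- w - eta*u."""
    u = beta * u + g
    return w - eta * u, u

def sngm_step(w, u, g, beta=0.9, eta=0.01):
    """Assumed normalized-momentum step: u <- beta*u + g;
    w <- w - eta * u / ||u|| (every update has length eta)."""
    u = beta * u + g
    return w - eta * u / (np.linalg.norm(u) + 1e-12), u

rng = np.random.default_rng(0)
n, d, batch = 256, 5, 32
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true  # noiseless targets, so w_true is the exact minimizer

w, u = np.zeros(d), np.zeros(d)          # MSGD iterate and momentum buffer
w_s, u_s = np.zeros(d), np.zeros(d)      # normalized-momentum iterate
for t in range(800):
    idx = rng.choice(n, size=batch, replace=False)  # mini-batch I_t
    w, u = msgd_step(w, u, minibatch_grad(w, X, y, idx))
    w_s, u_s = sngm_step(w_s, u_s, minibatch_grad(w_s, X, y, idx))

print(np.linalg.norm(w - w_true), np.linalg.norm(w_s - w_true))
```

Note the qualitative difference the normalization introduces: the MSGD step length shrinks as gradients vanish near a stationary point, while the normalized step always has length `eta`, which is why such methods are typically paired with a decaying learning-rate schedule.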



Figure 1: The training loss and test accuracy for training a non-convex model (a network with two convolutional layers) on CIFAR10. The optimization method is MSGD with the poly power learning rate strategy.

