DECENTRALIZED SGD WITH ASYNCHRONOUS, LOCAL, AND QUANTIZED UPDATES

Abstract

The ability to scale distributed optimization to large node counts has been one of the main enablers of recent progress in machine learning. To this end, several techniques have been explored, such as asynchronous, decentralized, or quantized communication, which significantly reduce the cost of synchronization, and the ability of nodes to perform several local model updates before communicating, which reduces the frequency of synchronization. In this paper, we show that these techniques, which have so far been considered independently, can be jointly leveraged to minimize distribution cost for training neural network models via stochastic gradient descent (SGD). We consider a setting with minimal coordination: we have a large number of nodes on a communication graph, each with a local subset of the data, performing independent SGD updates on their local models. After some number of local updates, each node chooses an interaction partner uniformly at random from its neighbors, and averages a possibly quantized version of its local model with the neighbor's model. Our first contribution is to prove that, even in such a relaxed setting, SGD can still be guaranteed to converge under standard assumptions. The proof is based on a new connection with parallel load-balancing processes, and improves upon existing techniques by jointly handling decentralization, asynchrony, quantization, and local updates, and by bounding their impact. On the practical side, we implement variants of our algorithm, deploy them in distributed environments, and show that they can successfully converge and scale for large-scale image classification and translation tasks, matching or even slightly improving the accuracy of previous methods.

1. INTRODUCTION

Several techniques have recently been explored for scaling the distributed training of machine learning models, such as communication reduction, asynchronous updates, and decentralized execution. For background, consider the classical data-parallel distribution strategy for SGD (Bottou, 2010), with the goal of solving a standard empirical risk minimization problem. Specifically, we have a set of samples $S$, and wish to minimize the function $f : \mathbb{R}^d \to \mathbb{R}$, defined as the average of the losses over the samples in $S$, by finding $x^\star = \mathrm{argmin}_x \sum_{s \in S} \ell_s(x) / |S|$. We have $n$ compute nodes which can process samples in parallel. In data-parallel SGD, each node computes the gradient for one sample, followed by a gradient exchange. Globally, this leads to the iteration $x_{t+1} = x_t - \eta_t \sum_{i=1}^{n} \tilde{g}_i^t(x_t)$, where $x_t$ is the value of the global parameter, initially $0^d$, $\eta_t$ is the learning rate, and $\tilde{g}_i^t(x_t)$ is the stochastic gradient with respect to the parameter $x_t$, computed by node $i$ at time $t$. When executing this procedure at large scale, two major bottlenecks are communication, that is, the number of bits transmitted by each node, and synchronization, i.e., the fact that nodes need to wait for each other in order to progress to the next iteration. Specifically, to maintain a consistent view of the parameter $x_t$ above, the nodes need to broadcast and receive all gradients, and need to synchronize globally at the end of every iteration. Significant work has been dedicated to removing these two barriers.
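To make the baseline concrete, the following is a minimal NumPy sketch of the data-parallel iteration above; the toy quadratic objective and the gradient-noise model are illustrative assumptions, not part of any real training system:

```python
import numpy as np

def data_parallel_sgd_step(x, grads, lr):
    # One global iteration: sum the per-node stochastic gradients
    # and apply them to the shared parameter vector x_t.
    return x - lr * np.sum(grads, axis=0)

rng = np.random.default_rng(0)
n, d = 4, 3                      # number of nodes, parameter dimension
x = np.ones(d)                   # global parameter x_t
for t in range(100):
    # Each node i computes a stochastic gradient of f(x) = ||x||^2 / 2
    # (the true gradient x, plus simulated sampling noise).
    grads = np.stack([x + 0.01 * rng.standard_normal(d) for _ in range(n)])
    x = data_parallel_sgd_step(x, grads, lr=0.05 / n)
```

Note that every node must contribute its gradient before the next iterate can be formed, which is exactly the global synchronization point discussed above.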
In particular, there has been progress on communication-reduced variants of SGD, which propose various gradient compression schemes (Seide et al., 2014; Strom, 2015; Alistarh et al., 2017; Wen et al., 2017; Aji and Heafield, 2017; Dryden et al., 2016; Grubic et al., 2018; Davies et al., 2020), asynchronous variants, which relax the strict iteration-by-iteration synchronization (Recht et al., 2011; Sa et al., 2015; Duchi et al., 2015), large-batch or periodic model averaging methods, which aim to reduce the frequency of communication (Goyal et al., 2017; You et al., 2017; Chen and Huo, 2016; Stich, 2018), and decentralized variants, which allow each node to maintain its own, possibly inconsistent, model (Lian et al., 2017; Tang et al., 2018; Koloskova et al., 2019). (We refer the reader to the recent surveys of Ben-Nun and Hoefler (2019) and Liu and Zhang (2020) for a detailed discussion.) Using such techniques, it is possible to scale SGD even for complex objectives, such as the training of deep neural networks. However, for modern large-scale models, the communication and synchronization requirements of these parallel variants of SGD can still be burdensome.

Contribution. In this paper, we take a further step towards removing these scalability barriers, showing that all the previous scaling techniques (decentralization, quantization, asynchrony, and local steps) can in fact be used in conjunction. We consider a highly decoupled setting with $n$ compute agents, located at the vertices of a connected communication graph, each of which can execute sequential SGD on its own local model, based on a fraction of the data. Periodically, after some number of local optimization steps, a node can initiate a pairwise interaction with a uniformly random neighbor.
Our main finding is that this procedure can converge even though the nodes may take several local steps between interactions, may communicate asynchronously, reading stale versions of each other's models, and may compress data transmission through quantization. However, both in theory and in practice, we observe trade-offs between convergence rate and degree of synchronization, in that the algorithm may need to perform additional gradient steps in order to attain a good solution, relative to the sequential baseline. Our algorithm, called SwarmSGD, is decentralized in the sense that each node maintains a local version of the model, and two interacting nodes only see each other's models. We further allow the data distribution at the nodes to be non-i.i.d. Specifically, each node $i$ is assigned a set of samples $S_i$, and maintains its own parameter estimate $x^i$. Each node $i$ performs local SGD steps on its model $x^i$ based on its local data, and then picks a neighbor uniformly at random to share information with, by averaging the two models. (To streamline the exposition, we ignore quantization and model staleness unless otherwise specified.) Effectively, if node $i$ interacts with node $j$, node $i$'s updated model becomes

$$x^i_{t+1} \leftarrow \frac{x^i_{t, H_i} + x^j_{t, H_j}}{2},$$

where $t$ is the total number of interactions performed by all nodes up to this point, $j$ is the interaction partner of $i$ at step $t+1$, and the input models $x^i_{t, H_i}$ and $x^j_{t, H_j}$ have been obtained by iterating the SGD step $H_i$ and $H_j$ times, respectively, locally since the previous interaction of either node. We assume that $H_i$ and $H_j$ are random variables with mean $H$, that is, each node performs $H$ local steps in expectation between two communication steps. The update for node $j$ is symmetric, so that the two models match after the averaging step. In this paper, we analyze variants of the above SwarmSGD protocol.
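The pairwise update above can be sketched as a small simulation. The Poisson distribution for the local step counts, the complete-graph topology, and the quadratic objective are all illustrative assumptions made for the sketch, not the paper's exact model:

```python
import numpy as np

def local_sgd(x, grad_fn, lr, num_steps, rng):
    # H local SGD steps on one node's model between interactions.
    for _ in range(num_steps):
        x = x - lr * grad_fn(x, rng)
    return x

def swarm_interaction(models, i, j, grad_fn, lr, mean_h, rng):
    # Nodes i and j each perform a random number of local steps
    # (H_i, H_j, with mean mean_h), then average their two models.
    h_i = 1 + rng.poisson(mean_h - 1)
    h_j = 1 + rng.poisson(mean_h - 1)
    models[i] = local_sgd(models[i], grad_fn, lr, h_i, rng)
    models[j] = local_sgd(models[j], grad_fn, lr, h_j, rng)
    avg = (models[i] + models[j]) / 2.0
    models[i], models[j] = avg.copy(), avg.copy()

# Simulation on the complete graph over n nodes, f(x) = ||x||^2 / 2.
rng = np.random.default_rng(1)
n, d = 8, 5
models = [rng.standard_normal(d) for _ in range(n)]
grad_fn = lambda x, r: x + 0.01 * r.standard_normal(x.shape)
for t in range(400):
    i = int(rng.integers(n))
    j = int((i + 1 + rng.integers(n - 1)) % n)  # uniform partner != i
    swarm_interaction(models, i, j, grad_fn, lr=0.1, mean_h=3, rng=rng)
```

In the simulation, only the two interacting nodes touch their models at each step; no global view of the parameter is ever materialized, which mirrors the decoupled setting described above.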
The main intuition behind the algorithm is that the independent SGD steps allow nodes to explore local improvements to the objective function on their subsets of the data, while the averaging steps provide a decentralized way for the models to converge jointly, albeit in a loosely coupled way. We show that, as long as the maximum number of local steps is bounded, this procedure still converges, in the sense that the gradients calculated at the average over all models vanish as we increase the number of interactions. Specifically, assuming that the $n$ nodes each take a constant number of local SGD steps on average before communicating, we show that SwarmSGD achieves a $\Theta(\sqrt{n})$ speedup to convergence in the nonconvex case. This matches results from previous work which considered decentralized dynamics but synchronized upon every SGD step, e.g. (Lian et al., 2017; 2018). Our analysis also extends to arbitrary regular graph topologies, non-blocking (delayed) averaging of iterates, and quantization. Generally, we show that the impact of decentralization, asynchrony, quantization, and local updates can be asymptotically negligible in reasonable parameter regimes. On the practical side, we show that this algorithm can be mapped to a distributed system setting, where agents correspond to compute nodes, connected by a dense communication topology. Specifically, we apply SwarmSGD to train deep neural networks on image classification and neural machine translation (NMT) tasks, deployed on the Piz Daint supercomputer (Piz, 2019). Experiments confirm the intuition that the average synchronization cost of SwarmSGD per iteration is low: it stays around 10% or less of the batch computation time, and remains constant as we increase the number of nodes.
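As an illustration of the kind of compression involved, the sketch below implements unbiased stochastic uniform quantization, in the spirit of QSGD-style schemes; it is a generic example of such a quantizer, not necessarily the paper's exact scheme:

```python
import numpy as np

def stochastic_quantize(x, num_levels, rng):
    # Stochastically round each coordinate to one of `num_levels`
    # uniformly spaced magnitude levels in [0, max|x|]; the rounding
    # probabilities are chosen so the quantizer is unbiased: E[Q(x)] = x.
    norm = np.max(np.abs(x))
    if norm == 0.0:
        return x.copy()
    scaled = np.abs(x) / norm * (num_levels - 1)
    lower = np.floor(scaled)
    round_up = rng.random(x.shape) < (scaled - lower)
    levels = lower + round_up
    return np.sign(x) * levels / (num_levels - 1) * norm

# A node would transmit only the scalar norm, the signs, and the small
# integer levels; the receiver reconstructs an approximate model to average.
rng = np.random.default_rng(2)
x = np.array([0.3, -0.7, 1.0, 0.0])
q = stochastic_quantize(x, num_levels=4, rng=rng)
```

Unbiasedness is what lets the averaging step tolerate quantization: the compression error behaves as zero-mean noise rather than a systematic drift.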

