DASHA: DISTRIBUTED NONCONVEX OPTIMIZATION WITH COMMUNICATION COMPRESSION AND OPTIMAL ORACLE COMPLEXITY

Abstract

We develop and analyze DASHA: a new family of methods for nonconvex distributed optimization problems. When the local functions at the nodes have a finite-sum or an expectation form, our new methods, DASHA-PAGE, DASHA-MVR and DASHA-SYNC-MVR, improve the theoretical oracle and communication complexity of the previous state-of-the-art method MARINA by Gorbunov et al. (2021). In particular, to achieve an ε-stationary point, and considering the random sparsifier RandK as an example, our methods compute the optimal number of gradients O(√m/(ε√n)) and O(σ/(ε^{3/2} n)) in the finite-sum and expectation form cases, respectively, while maintaining the SOTA communication complexity O(d/(ε√n)). Furthermore, unlike MARINA, the new methods DASHA, DASHA-PAGE and DASHA-MVR send compressed vectors only, which makes them more practical for federated learning. We extend our results to the case when the functions satisfy the Polyak-Łojasiewicz condition. Finally, our theory is corroborated in practice: we see a significant improvement in experiments with nonconvex classification and training of deep learning models.

1. INTRODUCTION

Nonconvex optimization problems are widespread in modern machine learning tasks, especially with the rise in popularity of deep neural networks (Goodfellow et al., 2016). In the past years, the dimensionality of such problems has increased because this leads to better quality (Brown et al., 2020) and robustness (Bubeck & Sellke, 2021) of the deep neural networks trained this way. Such huge-dimensional nonconvex problems need special treatment and efficient optimization methods (Danilova et al., 2020). Because of their high dimensionality, training such models is a computationally intensive undertaking that requires massive training datasets (Hestness et al., 2017) and parallelization among several compute nodes¹ (Ramesh et al., 2021). The distributed learning paradigm is also a necessity in federated learning (Konečný et al., 2016), where, among other things, there is an explicit desire to secure the private data of each client. Unlike in the case of classical optimization problems, where the performance of algorithms is defined by their computational complexity (Nesterov, 2018), distributed optimization algorithms are typically measured in terms of the communication overhead between the nodes, since such communication is often the bottleneck in practice (Konečný et al., 2016; Wang et al., 2021). Many approaches tackle this problem, including managing communication delays (Vogels et al., 2021), mitigating stragglers (Li et al., 2020a), and optimization over time-varying directed graphs (Nedić & Olshevsky, 2014). Another popular way to alleviate the communication bottleneck is to use lossy compression of communicated messages (Alistarh et al., 2017; Mishchenko et al., 2019; Gorbunov et al., 2021; Szlendak et al., 2021). In this paper, we focus on this last approach.
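To make the compression idea concrete, here is a minimal sketch of the RandK sparsifier mentioned in the abstract: each worker keeps k random coordinates of its d-dimensional vector and scales them by d/k so the compressed message is unbiased. This is an illustrative NumPy implementation, not the paper's code; the function name and interface are our own choices.

```python
import numpy as np

def rand_k(x: np.ndarray, k: int, rng: np.random.Generator) -> np.ndarray:
    """Unbiased RandK sparsifier.

    Keeps k uniformly random coordinates of x, zeroes out the rest,
    and rescales the survivors by d/k so that E[rand_k(x)] = x.
    Only the k (index, value) pairs need to be communicated.
    """
    d = x.shape[0]
    idx = rng.choice(d, size=k, replace=False)  # k distinct coordinates
    out = np.zeros_like(x)
    out[idx] = x[idx] * (d / k)  # rescale to preserve the expectation
    return out

# Example: compress a 10-dimensional gradient down to 3 coordinates.
rng = np.random.default_rng(0)
g = np.arange(1.0, 11.0)
compressed = rand_k(g, k=3, rng=rng)  # at most 3 nonzero entries
```

The d/k rescaling is what makes the compressor unbiased; averaging many independent compressions of the same vector recovers it, which is the property the variance-reduction analysis relies on.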



¹ Alternatively, we sometimes use the terms: machines, workers and clients.

