ON THE EFFECT OF CONSENSUS IN DECENTRALIZED DEEP LEARNING

Anonymous authors
Paper under double-blind review

Abstract

Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters. Experiments in earlier works reveal that decentralized training often suffers from generalization issues: models trained in a decentralized fashion generally perform worse than their centrally trained counterparts, and this generalization gap is impacted by parameters such as network size, communication topology, and data partitioning. We identify the changing consensus distance between devices as a key parameter that explains the gap between centralized and decentralized training. We show that when the consensus distance does not grow too large, the performance of centralized training can be reached and sometimes surpassed. We highlight the intimate interplay between network topology and learning rate at different training phases and discuss the implications for communication-efficient training schemes. Our insights into the generalization gap in decentralized deep learning allow the principled design of better training schemes that mitigate these effects.

1. INTRODUCTION

Highly over-parametrized deep neural networks show impressive results on machine learning tasks, which has also led to a dramatic increase in the size, complexity, and computational power of training systems. In response to these challenges, distributed training algorithms (i.e., data-parallel large mini-batch SGD) have been developed for use in data centers (Goyal et al., 2017; You et al., 2018; Shallue et al., 2018). These state-of-the-art training systems generally use the All-Reduce communication primitive to perform exact averaging of the local mini-batch gradients computed on different subsets of the data, before a synchronized model update. However, exact averaging with All-Reduce is sensitive to the communication hardware of the training system, which can become the bottleneck in efficient deep learning training. To this end, decentralized training has become an indispensable training paradigm for efficient large-scale training in the data center, alongside its orthogonal benefit of preserving users' privacy for edge AI (Bellet et al., 2018; Kairouz et al., 2019).

In this work, we theoretically identify the consensus distance, i.e. the average discrepancy between the nodes, as the key parameter that captures the joint effect of decentralization. We show that there exists a critical consensus distance: when the consensus distance is below this critical value, optimization is almost unhindered. With the insight derived from optimization convergence, we further ask whether a critical consensus distance also exists for deep learning in terms of generalization, and identify the training phases in which the critical consensus distance matters. We believe that the answers to these questions are valuable in practice, as they offer the possibility to design training strategies that strike an appropriate trade-off between the targeted generalization performance and the affordable communication resources.
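As a concrete illustration of the quantity discussed above, the consensus distance can be formalized as the root-mean-square distance between each node's parameters and their average; the exact normalization below is an assumption for illustration, not necessarily the paper's definition:

```python
import numpy as np

def consensus_distance(params):
    """Average discrepancy between node parameters and their mean.

    params: array of shape (n_nodes, dim), one parameter vector per node.
    Returns sqrt( (1/n) * sum_i ||x_i - x_bar||^2 ).
    """
    x_bar = params.mean(axis=0)                 # the "virtual" averaged model
    sq = np.sum((params - x_bar) ** 2, axis=1)  # per-node squared distance
    return np.sqrt(sq.mean())

# Perfectly synchronized nodes have zero consensus distance.
assert consensus_distance(np.ones((4, 3))) == 0.0
```

Tracking this scalar over training rounds is what allows comparing the "discrepancy between devices" across topologies and learning-rate schedules.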
• We show that tracking the consensus distance over the training phases can explain the generalization gap between centralized and decentralized training.
• We show theoretically that when the consensus distance does not exceed a critical value, decentralization has a negligible impact on optimization. We argue how this critical value depends crucially on the learning rate and the mixing matrix.
• Through the lens of consensus distance, we empirically investigate the desirable level of consensus distance during different phases of training, in order to ensure high generalization performance. Our extensive experiments on computer vision (CV) tasks (CIFAR-10 and ImageNet-32) as well as natural language processing (NLP) tasks (transformer models for machine translation) confirm that a critical consensus distance indeed exists, and that consensus distances above this critical value heavily penalize the final generalization performance. Surprisingly, we find that a large consensus distance can be beneficial in later training phases, after optimization has plateaued, leading to improved generalization; this is consistent with the observations for centralized Post-Local SGD (Lin et al., 2020b).
• Based on our findings, we propose practical guidelines for achieving favorable generalization performance with low communication expenses on arbitrary communication networks.

2. RELATED WORK

Gossip averaging (Kempe et al., 2003; Xiao & Boyd, 2004; Boyd et al., 2006) forms the backbone of many decentralized learning algorithms. The convergence rate of gossip averaging towards consensus among the nodes can be expressed in terms of the (expected) spectral gap of the mixing matrix. Lian et al. (2017) combine SGD with gossip averaging for deep learning and show that the leading term in the convergence rate, O(1/(nε²)), is consistent with the convergence of centralized mini-batch SGD (Dekel et al., 2012), while the spectral gap only affects the asymptotically smaller terms. Similar results have been obtained very recently for related schemes (Scaman et al., 2017; 2018; Koloskova et al., 2019; 2020a;b). As the communication topology also impacts the cost per round (number of peer-to-peer communications), sparse topologies have been proposed and studied recently (Assran et al., 2019; Wang et al., 2019; Nadiradze et al., 2020). While a few recent works focus on the impact of the topology on optimization performance (Luo et al., 2019; Neglia et al., 2020), we here identify the consensus distance as a more canonical parameter that characterizes the overall effect of decentralized learning, beyond the topology alone; through this lens, we provide a deeper understanding of the fine-grained impact of the evolution of the actual consensus distance on the generalization performance of deep learning training. Prior work identified the consensus distance as an important parameter that affects optimization performance and convergence, and proposed approaches to increase consensus: for instance, Scaman et al. (2017) and Sharma et al. (2019) perform multiple consensus steps per round, while Tsitsiklis (1984), Nedić & Ozdaglar (2009), Duchi et al. (2012), and Yuan et al. (2016) choose carefully tuned learning rates. However, these works do not provide insights into how the consensus distance at different training phases impacts decentralized training, which is the main target of this work.

3. THEORETICAL UNDERSTANDING

In this section we consider decentralized training with stochastic gradient descent (D-SGD) without momentum; in all our deep learning experiments, however, we use the momentum variant.
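To make the D-SGD setup concrete, the following is a minimal sketch of the scheme on a ring topology, assuming a doubly stochastic mixing matrix W and a toy quadratic objective; the function names and the loss are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring: average self + 2 neighbors."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def dsgd_step(X, W, grad_fn, lr):
    """One D-SGD round (no momentum): local gradient step, then gossip averaging.

    X: array of shape (n_nodes, dim), one parameter vector per node.
    """
    G = np.stack([grad_fn(x) for x in X])  # local (stochastic) gradients
    return W @ (X - lr * G)                # gossip-average the updated models

# Toy example: every node minimizes f(x) = 0.5 * ||x||^2, so grad f(x) = x.
n, dim = 8, 4
rng = np.random.default_rng(0)
X = rng.normal(size=(n, dim))
W = ring_mixing_matrix(n)
for _ in range(200):
    X = dsgd_step(X, W, lambda x: x, lr=0.1)
# Nodes converge to the shared optimum at 0 and thereby reach consensus.
```

A denser topology (larger spectral gap of W) shrinks the consensus distance faster per round at a higher per-round communication cost, which is exactly the trade-off studied in this work.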

