PARALLEL TRAINING OF DEEP NETWORKS WITH LOCAL UPDATES

Abstract

Deep learning models trained on large data sets have been widely successful in both vision and language domains. As state-of-the-art deep learning architectures have continued to grow in parameter count, so too have the compute budgets and times required to train them, increasing the need for compute-efficient methods that parallelize training. Two common approaches to parallelize the training of deep networks have been data and model parallelism. While useful, data and model parallelism suffer from diminishing returns in terms of compute efficiency for large batch sizes. In this paper, we investigate how to continue scaling compute efficiently beyond the point of diminishing returns for large batches through local parallelism, a framework which parallelizes training of individual layers in deep networks by replacing global backpropagation with truncated layer-wise backpropagation. Local parallelism enables fully asynchronous layer-wise parallelism with a low memory footprint, and requires little communication overhead compared with model parallelism. We show results in both vision and language domains across a diverse set of architectures, and find that local parallelism is particularly effective in the high-compute regime.

1. INTRODUCTION

Backpropagation (Rumelhart et al., 1985) is by far the most common method used to train neural networks. Alternatives to backpropagation are typically used only when backpropagation is impractical due to a non-differentiable loss (Schulman et al., 2015), a non-smooth loss landscape (Metz et al., 2019), or memory and/or compute requirements (Ororbia et al., 2020). However, progress in deep learning is producing ever larger models in terms of parameter count and depth, in vision (Hénaff et al., 2019; Chen et al., 2020), language (Radford et al., 2019; Brown et al., 2020), and many other domains (Silver et al., 2017; Vinyals et al., 2019; Berner et al., 2019). As model size increases, backpropagation incurs growing computational, memory, and synchronization overhead (Ben-Nun & Hoefler, 2018). This raises the question of whether there are more efficient training strategies, even for models and losses that are considered well matched to training by backpropagation.

Much of the work on training large scale models focuses on designing compute infrastructure that makes backpropagation more efficient despite growing model size (Dean et al., 2012b; Chen et al., 2015; Sergeev & Balso, 2018). One of the most common ways to achieve efficient training of deep neural networks with backpropagation is to scale using data parallelism (Zhang et al., 1989; Chen et al., 2016), training on bigger batch sizes spread across multiple devices. However, diminishing returns have been reported with this method for larger batch sizes, effectively wasting compute (Goyal et al., 2017; Masters & Luschi, 2018; Shallue et al., 2018; McCandlish et al., 2018). Training based on pipeline parallelism has also been introduced, but still requires large batches for efficient training (Petrowski et al., 1993; Ben-Nun & Hoefler, 2018; Huang et al., 2019).
Moreover, in addition to the limitation that in the forward pass each layer can only process the input data in sequence (forward locking), the use of backpropagation implies that the network parameters of each layer can only be updated in turn after completing the full forward pass (backward locking). This backward locking results in increased memory overhead, and precludes efficient parallel processing across layers (Jaderberg et al., 2017). The challenges of scaling compute infrastructure to support deep networks trained with backpropagation motivate the need for alternative approaches to training deep neural networks.

In this work, we explore how layer-wise local updates (Belilovsky et al., 2019a; Löwe et al., 2019; Xiong et al., 2020) can help overcome these challenges and scale more efficiently with compute than backpropagation. With local updates, each layer is updated before the full forward pass through the network has even completed. This remedies the forward and backward locking problems, which harm memory efficiency and update latency in standard backprop. Layer-wise local updates are not proportional to gradients of the original loss, and are not even guaranteed to descend a loss function; nevertheless, in practice they are effective at training neural networks. We refer to this approach of parallelizing compute, which is an alternative and complement to data and model parallelism, as local parallelism. Our investigation focuses on the trade-offs of using local update methods as opposed to global backpropagation. To summarize our contributions: (i) We provide the first large scale investigation into local update methods in both vision and language domains. We find training speedups (as measured by the reduction in required sequential compute steps) of up to 10× on simple MLPs, and 2× on Transformer architectures. These training speedups are the result of local training methods being able to leverage more parallel compute than backprop.
(ii) We provide insight into how local parallelism methods work, and experimentally compare the similarity of their gradients and features to those from backprop. (iii) We demonstrate a prototype implementation of local parallelism for ResNets, and show up to a 40% increase in sample throughput (number of training points per second) relative to backprop, due to higher hardware utilization. We believe that local parallelism will provide benefits whenever there are diminishing returns from data parallelism, and that it avoids the stale weights that arise in pipelined model parallelism. Additionally, we have released code showing an example of local parallelism, available at hiddenurl.
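To make the local update scheme above concrete, the following is a minimal, illustrative NumPy sketch (not the released implementation): each hidden layer is trained by its own auxiliary linear classifier, and no gradient flows backward past a layer boundary, so each layer's update depends only on its own input. All names here (`LocalLayer`, `softmax_xent_grad`, the toy data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_xent_grad(logits, y):
    # Softmax cross-entropy: returns mean loss and gradient w.r.t. logits.
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    n = len(y)
    loss = -np.log(p[np.arange(n), y] + 1e-12).mean()
    g = p.copy()
    g[np.arange(n), y] -= 1.0
    return loss, g / n

class LocalLayer:
    """A hidden layer trained by its own auxiliary classifier (local loss)."""
    def __init__(self, d_in, d_hid, n_classes, lr=0.1):
        self.W = rng.normal(0, np.sqrt(2 / d_in), (d_in, d_hid))
        self.A = rng.normal(0, np.sqrt(1 / d_hid), (d_hid, n_classes))  # aux head
        self.lr = lr

    def forward_and_update(self, x, y):
        h = np.maximum(x @ self.W, 0.0)          # ReLU hidden activations
        loss, g_logits = softmax_xent_grad(h @ self.A, y)
        g_h = (g_logits @ self.A.T) * (h > 0)    # gradient stops here: none flows to x
        self.A -= self.lr * h.T @ g_logits
        self.W -= self.lr * x.T @ g_h
        return h, loss                           # h acts as a "detached" input to the next layer

# Toy linearly separable problem.
X = rng.normal(size=(256, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

layers = [LocalLayer(8, 32, 2), LocalLayer(32, 32, 2)]
losses = []
for step in range(200):
    h = X
    for layer in layers:
        h, loss = layer.forward_and_update(h, y)
    losses.append(loss)  # local loss of the last layer's auxiliary head
```

Because each layer's update uses only its own input and auxiliary loss, the per-layer updates could in principle run on separate devices with only activations passed forward, which is the source of the layer-wise parallelism discussed above.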



Figure 1: Parallelization in deep learning: (a) data, (b) model, (c) pipeline, and (d) local parallelism. While data, model, and pipeline parallelism are existing paradigms for parallelizing learning, we investigate another way of parallelizing learning through local layer-wise training, shown in (d).

PARALLELIZATION IN DEEP LEARNING

Scaling large models has led to the development of a number of techniques to train deep models in a parallel fashion (Ben-Nun & Hoefler, 2018), summarized in Figure 1.

Data Parallelism: Data parallelism (Zhang et al., 1989) attempts to speed up training of a model by splitting the data among multiple identical models and training each model on a shard of the data independently. Data parallelism is effectively training with larger minibatches (Kaplan et al., 2020). This creates issues around the consistency of a model, which then needs to be synchronized (Deng et al., 2012; Dean et al., 2012a). There are two main ways to synchronize weights across model copies: (i) Synchronous optimization, where data parallel training synchronizes at the end of every minibatch (Das et al., 2016; Chen et al., 2016), with a communication overhead that increases with the number of devices; (ii) Asynchronous optimization, which implements data parallel training with independent updates of local model parameters without global synchronization (Niu et al., 2011; Dean et al., 2012a). This increases device utilization, but gradients are then computed on stale weights, which empirically results in poor sample efficiency and thus slower overall training compared to synchronous optimization.
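As a toy illustration of the synchronous case in (i), the hypothetical NumPy sketch below (not any particular framework's API) shards a minibatch across K simulated "devices", computes per-shard gradients, and averages them, mimicking an all-reduce, so every replica applies the identical update. With equal shard sizes, this is mathematically equivalent to one large-batch gradient step on a single device.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(w, X, y):
    # Gradient of the least-squares loss L(w) = ||Xw - y||^2 / (2n).
    n = len(y)
    return X.T @ (X @ w - y) / n

# Noiseless linear regression problem with a known true parameter vector.
X = rng.normal(size=(64, 4))
w_true = rng.normal(size=4)
y = X @ w_true

K = 4                                          # number of simulated devices
shards = np.array_split(np.arange(64), K)      # equal 16-example shards
w = np.zeros(4)
for step in range(100):
    # Each replica computes a gradient on its shard of the minibatch...
    local_grads = [grad(w, X[idx], y[idx]) for idx in shards]
    # ...then an "all-reduce" averages them so all replicas apply the same update.
    g = np.mean(local_grads, axis=0)
    w -= 0.5 * g
```

The averaging step is where the communication overhead mentioned in (i) arises: its cost grows with the number of devices participating in the all-reduce, while the per-device compute shrinks as the shard gets smaller.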

