EFFICIENT RECURRENT ARCHITECTURES THROUGH ACTIVITY SPARSITY AND SPARSE BACK-PROPAGATION THROUGH TIME

Abstract

Recurrent neural networks (RNNs) are well suited for solving sequence tasks in resource-constrained systems due to their expressivity and low computational requirements. However, a gap remains between the efficiency and performance that RNNs currently deliver and the requirements of real-world applications. The memory and computational requirements arising from propagating the activations of all the neurons at every time step to every connected neuron, together with the sequential dependence of activations, contribute to the inefficiency of training and using RNNs. We propose a solution inspired by biological neuron dynamics that makes the communication between RNN units sparse and discrete. This makes the backward pass with backpropagation through time (BPTT) computationally sparse and efficient as well. We base our model on the gated recurrent unit (GRU), extending it with units that communicate via discrete events triggered by a threshold, so that no information is communicated to other units in the absence of events. We show theoretically that the communication between units, and hence the computation required for both the forward and backward passes, scales with the number of events in the network. Our model achieves efficiency without compromising task performance, demonstrating competitive performance compared to state-of-the-art recurrent network models on real-world tasks, including language modeling. The dynamic activity sparsity mechanism also makes our model well suited for novel energy-efficient neuromorphic hardware.
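To make the communication mechanism concrete, the following is a minimal PyTorch sketch, not the paper's implementation; the function name `event_output`, the fixed threshold `theta`, and the exact gating form are illustrative assumptions. A unit transmits its internal state only when that state crosses the threshold; otherwise it emits exactly zero, so nothing is communicated to connected units.

```python
import torch

def event_output(c: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Threshold-triggered communication (illustrative sketch, not the
    paper's code): a unit emits its internal state c only when c exceeds
    its threshold theta; otherwise it outputs exactly zero."""
    events = (c > theta).to(c.dtype)  # binary event indicator per unit
    return events * c                 # an event carries the state's value

# Of these five units, only the two above threshold produce output.
c = torch.tensor([0.2, 1.3, -0.4, 0.9, 2.1])
theta = torch.tensor(1.0)
print(event_output(c, theta))  # tensor([0.0000, 1.3000, 0.0000, 0.0000, 2.1000])
```

Because the output vector is zero everywhere except at event-emitting units, the matrix-vector products that propagate it onward only need to touch the columns corresponding to active units, which is why the required computation scales with the number of events.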

1. INTRODUCTION

Large-scale models such as GPT-3 (Brown et al., 2020) and DALL-E (Ramesh et al., 2021) have demonstrated that scaling up deep learning models to billions of parameters improves not just their performance but leads to entirely new forms of generalization. In resource-constrained environments, however, transformers are impractical due to their computational and memory requirements during both training and inference. Recurrent neural networks (RNNs) may provide a viable alternative in such low-resource environments, but they require further algorithmic and computational optimizations. While it is unknown whether scaling up recurrent neural networks can lead to similar forms of generalization, the limitations on scaling them up preclude studying this possibility. The dependence of each time step's computation on the previous time step's output prevents easy parallelization of the model computation. Moreover, propagating the activations of all the units at each time step is computationally inefficient and leads to high memory requirements when training with backpropagation through time (BPTT).

While enabling extraordinary task performance, the biological brain's recurrent architecture is extremely energy efficient (Mead, 2020). One of the brain's strategies for reaching these high levels of efficiency is activity sparsity. In the brain, (asynchronous) event-based and activity-sparse communication results from the properties of the specific physical and biological substrate on which its computations are carried out.
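The sparsity that events induce in the backward pass can be seen in a small sketch (our own illustration; it assumes a straight-through-style treatment of the hard threshold rather than the paper's exact surrogate gradient): gradients reach a unit's internal state only through its events, so units that did not fire contribute neither computation nor stored activations to BPTT.

```python
import torch

# Illustrative only: the hard threshold is treated as a constant gate here
# (a straight-through-style simplification), so gradients flow back solely
# through units that emitted an event.
c = torch.tensor([0.2, 1.3, -0.4, 0.9, 2.1], requires_grad=True)
theta = 1.0
gate = (c > theta).float()  # non-differentiable comparison: no grad path
y = gate * c                # event-based output, zero for silent units
y.sum().backward()
print(c.grad)               # tensor([0., 1., 0., 0., 1.]) -- gradient is sparse
```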

