CONTINUAL TRANSFORMERS: REDUNDANCY-FREE ATTENTION FOR ONLINE INFERENCE

Abstract

Transformers in their common form are inherently limited to operating on whole token sequences rather than on one token at a time. Consequently, their use during online inference on time-series data entails considerable redundancy due to the overlap between successive token sequences. In this work, we propose novel formulations of the Scaled Dot-Product Attention, which enable Transformers to perform efficient online token-by-token inference on a continual input stream. Importantly, our modifications are purely to the order of computations, while the outputs and learned weights are identical to those of the original Transformer Encoder. We validate our Continual Transformer Encoder with experiments on the THUMOS14, TVSeries and GTZAN datasets with remarkable results: our Continual one- and two-block architectures reduce the floating point operations per prediction by up to 63× and 2.6×, respectively, while retaining predictive performance.

1. INTRODUCTION

Many real-life usage scenarios, such as perception in self-driving cars and live monitoring of critical resources, process a continual stream of inputs and require near-instantaneous predictions per time-step. This stands in contrast to what many common benchmarks for deep learning evaluate, namely the operation on distinct batches of data with no inter-batch relationships. Consequently, a plethora of methods have been developed (Ji et al., 2013; Carreira & Zisserman, 2017; Varol et al., 2018; Yan et al., 2018; Heidari & Iosifidis, 2021; Vaswani et al., 2017; Arnab et al., 2021; Bakhtiarnia et al., 2021b), which focus on batch-wise processing but fail to optimise for online operation, where new information (e.g., a video frame / token) arrives at each step from a continual input stream and future information is unavailable at the current time-step. We need a class of networks which operate efficiently on both batches of data and on continual streams. Accordingly, we propose a reformulation of the Transformer Encoder as a Continual Inference Network (CIN, Section 2.1), which accelerates stream processing on time-series data while retaining weight-compatibility. Specifically, we derive two variants of Continual Scaled Dot-Product Attention (SDA) for the cases where prior output tokens should and should not be updated after observing a new input token. Notably, our attention formulations reduce the per-step cost of SDA (Vaswani et al., 2017) from quadratic to linear in the sequence length.

2.1. CONTINUAL INFERENCE NETWORKS

A Continual Inference Network (CIN) is a Neural Network which:
• is capable of continual step inference without computational redundancy,
• is capable of batch inference corresponding to a non-continual Neural Network,
• produces identical outputs for batch- and step inference given identical receptive fields,
• uses one set of trainable parameters for both batch and step inference.
These requirements ensure that a Neural Network has broad applicability for both (offline) batch-wise inference (i.e., most research benchmarks) and online stream processing.
While non-CINs can operate on streams of data by caching prior steps in a first-in, first-out (FIFO) queue and aggregating them into a full (spatio-)temporal input, which is then processed like an offline batch, this entails computational redundancy in proportion to the sequence length. CINs perform step-wise inference without such caching and repeated computation. Uni-directional Recurrent Neural Networks (RNNs) are an example of Continual Inference Networks: their default mode of operation is step-by-step, and they are easily applied to (spatio-)temporal batches of data by concatenating the step-wise outputs. Recently, a modification to the spatio-temporal 3D convolution was proposed (Hedegaard & Iosifidis, 2021), which enables existing 3D CNNs to operate efficiently during continual inference. A similar principle was used to enhance Spatio-temporal Graph Convolutions (Hedegaard et al., 2022). In Section 3, we derive a CIN formulation for Transformer Encoders.
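The equivalence of batch and step inference for uni-directional RNNs can be illustrated with a minimal NumPy sketch (ours, not from the paper; the Elman-style cell and all parameter names are illustrative): running the cell over a whole sequence and concatenating the outputs gives exactly the same result as feeding one token at a time while carrying only the hidden state.

```python
# Minimal sketch: a uni-directional RNN is a Continual Inference Network.
# Batch inference is simply the concatenation of step-wise outputs, so
# batch and step inference produce identical results with shared weights.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, seq_len = 4, 8, 10

# Hypothetical parameters of a simple Elman-style RNN cell.
W_x = rng.normal(size=(d_in, d_hid))
W_h = rng.normal(size=(d_hid, d_hid))
b = rng.normal(size=(d_hid,))

def rnn_step(x_t, h_prev):
    """One recurrent step: h_t = tanh(x_t W_x + h_{t-1} W_h + b)."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

def rnn_batch(x_seq):
    """Batch inference: concatenate the step-wise outputs over a sequence."""
    h = np.zeros(d_hid)
    outputs = []
    for x_t in x_seq:
        h = rnn_step(x_t, h)
        outputs.append(h)
    return np.stack(outputs)

x_seq = rng.normal(size=(seq_len, d_in))

# Batch inference over the full sequence.
batch_out = rnn_batch(x_seq)

# Continual step inference: only the carried hidden state is stored,
# no FIFO queue of prior inputs and no recomputation.
h = np.zeros(d_hid)
step_out = []
for x_t in x_seq:
    h = rnn_step(x_t, h)
    step_out.append(h)

assert np.allclose(batch_out, np.stack(step_out))
```

The carried hidden state plays the role that a FIFO input queue plays for non-CINs, but its size is independent of the sequence length and no prior step is ever reprocessed.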

2.2. TRANSFORMER ARCHITECTURES

Initially proposed for sequence-to-sequence modelling in Natural Language Processing, the Transformer (Vaswani et al., 2017) has become a canonical building block in many applications of Deep Learning, including Computer Vision (Dosovitskiy et al., 2021; Arnab et al., 2021; Wang et al., 2021; Carion et al., 2020) and Audio Classification (Gong et al., 2021). Their success can partly be attributed to a reduced inductive bias compared with CNNs and RNNs, which allows better adaptation when sufficiently large datasets are available; the Scaled Dot-Product Attention (SDA) maps a set of input tokens to a set of outputs without inherent preconceptions. However, this many-to-many attention exhibits quadratic growth in time and space complexity with the token count n. A great deal of research has sought to improve the efficiency of Transformers (Tay et al., 2020). Block-wise or chunking methods such as the Image Transformer (Parmar et al., 2018) and the Vision Transformer (Dosovitskiy et al., 2021) group entities of a local receptive field into a single block, reducing the O(n^2) complexity to O(n^2/n_b), where n_b < n is the number of blocks. Techniques such as sliding windows, dilation and pooling can be used to achieve a similar effect (Beltagy et al., 2020). The Reformer (Kitaev et al., 2020) reduces the complexity to O(n log n) by grouping similar tokens via locality-sensitive hashing.
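The saving from block-wise attention can be made concrete with a back-of-the-envelope count (our illustration, not from the cited works) of attention scores for full versus chunked attention over n tokens in n_b equally sized blocks:

```python
# Count attention scores for full vs. block-wise ("chunked") attention.
def full_attention_scores(n: int) -> int:
    # Every token attends to every token: O(n^2) scores.
    return n * n

def blockwise_attention_scores(n: int, n_b: int) -> int:
    # Tokens attend only within their own block of size n / n_b,
    # giving n_b * (n / n_b)^2 = n^2 / n_b scores in total.
    assert n % n_b == 0
    block = n // n_b
    return n_b * block * block

n, n_b = 1024, 8
print(full_attention_scores(n))            # 1048576
print(blockwise_attention_scores(n, n_b))  # 131072, i.e. n^2 / n_b
```

For n = 1024 tokens and n_b = 8 blocks, chunking cuts the score count by the factor n_b, matching the O(n^2/n_b) complexity above.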



Source code: https://github.com/lukashedegaard/continual-transformers.



Our Continual SDA formulations reduce the per-step time complexity of attention from O(n^2 d) to O(nd) and the memory complexity from O(n^2) to O(nd), and they are readily embedded into Continual Multi-Head Attention (MHA) and Continual Transformer Encoder blocks. Finally, we propose the use of Recycling Positional Encoding to accommodate progressive caching of partial attention results for continual data streams. Due to the interdependence of SDA outputs, Continual Transformers are most efficient for shallow architectures. Shallow Transformers have many applications, such as augmentations of CNNs (Touvron et al., 2021), light-weight Natural Language Processing (Cornia et al., 2020), fusion operations in multi-modal (e.g., audio-visual) settings (Chumachenko et al., 2022), and early-exit branches in multi-exit architectures (Bakhtiarnia et al., 2021a;b). In our experiments, we validate their exceptional efficiency improvements on common benchmarks in Online Action Detection (Idrees et al., 2017) and Online Audio Classification (Tzanetakis et al., 2001).
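The single-output case can be sketched in a few lines of NumPy. This is our simplified illustration under stated assumptions (no positional encoding, no multi-head projections, no class token; all names are ours): the last n key/value rows are kept in a FIFO cache, and each new token requires only one query row of scores, O(nd) per step, rather than the full n x n score matrix.

```python
# Sketch of single-output continual attention with a FIFO key/value cache.
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
n, d = 6, 4  # window length and token dimension

def sda(Q, K, V):
    """Ordinary Scaled Dot-Product Attention over a full window: O(n^2 d)."""
    A = np.exp(Q @ K.T / np.sqrt(d))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

# FIFO caches holding the n most recent key and value rows.
k_cache, v_cache = deque(maxlen=n), deque(maxlen=n)

def continual_single_output_step(q_new, k_new, v_new):
    """Attention output for the newest token only: O(nd) per step."""
    k_cache.append(k_new)
    v_cache.append(v_new)
    K, V = np.stack(k_cache), np.stack(v_cache)
    a = np.exp(q_new @ K.T / np.sqrt(d))  # one score row, not n rows
    return (a / a.sum()) @ V

# Stream 2n tokens; q, k, v stand in for the projected input token.
for _ in range(2 * n):
    q, k, v = rng.normal(size=(3, d))
    out = continual_single_output_step(q, k, v)

# The step output matches the last row of full attention over the window.
assert np.allclose(out, sda(np.stack([q]), np.stack(k_cache), np.stack(v_cache))[-1])
```

Because only one query row is evaluated and the cache size is fixed at n, both the per-step time and memory are linear in n, consistent with the complexities stated above; the retroactive variant, which also updates prior outputs, requires additional bookkeeping not shown here.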

Figure 1: Multi-block Continual Transformer Encoder with Recycling Positional Encoding. For b > 2 blocks, regular Transformer Encoder blocks can be added between an initial Continual Retroactive block and a final Single-Output block. A class-token may be used after the initial block.


