SIMPLIFIED STATE SPACE LAYERS FOR SEQUENCE MODELING

Abstract

Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task.

1. INTRODUCTION

Efficiently modeling long sequences is a challenging problem in machine learning. Information crucial to solving tasks may be encoded jointly between observations that are thousands of timesteps apart. Specialized variants of recurrent neural networks (RNNs) (Arjovsky et al., 2016; Erichson et al., 2021; Rusch & Mishra, 2021; Chang et al., 2019), convolutional neural networks (CNNs) (Bai et al., 2018; Oord et al., 2016; Romero et al., 2022b), and transformers (Vaswani et al., 2017) have been developed to try to address this problem. In particular, many efficient transformer methods have been introduced (Choromanski et al., 2021; Katharopoulos et al., 2020; Kitaev et al., 2020; Beltagy et al., 2020; Gupta & Berant, 2020; Wang et al., 2020) to address the standard transformer's quadratic complexity in the sequence length. However, these more efficient transformers still perform poorly on very long-range sequence tasks (Tay et al., 2021).

Gu et al. (2021a) presented an alternative approach using structured state space sequence (S4) layers. An S4 layer defines a nonlinear sequence-to-sequence transformation via a bank of many independent single-input, single-output (SISO) linear state space models (SSMs) (Gu et al., 2021b), coupled together with nonlinear mixing layers. Each SSM leverages the HiPPO framework (Gu et al., 2020a) by initializing with specially constructed state matrices. Since the SSMs are linear, each layer can be equivalently implemented as a convolution, which can then be applied efficiently by parallelizing across the sequence length. Multiple S4 layers can be stacked to create a deep sequence model. Such models have achieved significant improvements over previous methods, including on the long range arena (LRA) (Tay et al., 2021) benchmarks specifically designed to stress-test long-range sequence models.
Extensions have shown good performance on raw audio generation (Goel et al., 2022) and classification of long movie clips (Islam & Bertasius, 2022).

We introduce a new state space layer that builds on the S4 layer, the S5 layer, illustrated in Figure 1. S5 streamlines the S4 layer in two main ways. First, S5 uses one multi-input, multi-output (MIMO) SSM in place of the bank of many independent SISO SSMs in S4. Second, S5 uses an efficient and widely implemented parallel scan. This removes the need for the convolutional and frequency-domain approach used by S4, which requires a non-trivial computation of the convolution kernel. The resulting state space layer has the same computational complexity as S4, but operates purely recurrently and in the time domain.

We then establish a mathematical relationship between S4 and S5. This connection allows us to inherit the HiPPO initialization schemes that are key to the success of S4. Unfortunately, the specific HiPPO matrix that S4 uses for initialization cannot be diagonalized in a numerically stable manner for use in S5. However, in line with recent work on the DSS (Gupta et al., 2022) and S4D (Gu et al., 2022) layers, we found that a diagonal approximation to the HiPPO matrix achieves comparable performance. We extend a result from Gu et al. (2022) to the MIMO setting, which justifies the diagonal approximation for use in S5. We leverage the mathematical relationship between S4 and S5 to inform several other aspects of parameterization and initialization, and we perform thorough ablation studies to explore these design choices.

The final S5 layer has many desirable properties. It is straightforward to implement (see Appendix A), enjoys linear complexity in the sequence length, and can efficiently handle time-varying SSMs and irregularly sampled observations (which is intractable with the convolution implementation of S4).
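The parallel scan mentioned above rests on the fact that a linear recurrence x_k = A x_{k-1} + B u_k can be expressed through a binary associative operator on pairs (A, B u_k), so all states can be combined in any bracketing and hence in logarithmic depth. The following is a minimal, hypothetical sketch of that operator in plain Python/NumPy; it evaluates the scan sequentially for clarity, whereas the actual S5 implementation uses JAX's `jax.lax.associative_scan` on a diagonalized SSM.

```python
import numpy as np

def scan_op(e_i, e_j):
    """Binary associative operator for linear recurrences:
    (A_i, b_i) * (A_j, b_j) = (A_j A_i, A_j b_i + b_j)."""
    A_i, b_i = e_i
    A_j, b_j = e_j
    return A_j @ A_i, A_j @ b_i + b_j

def linear_scan(A, Bu):
    """Compute states x_1..x_L of x_k = A x_{k-1} + B u_k (with x_0 = 0)
    by repeatedly applying scan_op. Bu is a list of B @ u_k vectors."""
    elems = [(A, b) for b in Bu]
    acc = elems[0]
    states = [acc[1]]
    for e in elems[1:]:
        acc = scan_op(acc, e)  # associativity allows any bracketing
        states.append(acc[1])
    return states
```

Because `scan_op` is associative, a parallel implementation can combine the L elements in a balanced tree rather than left to right, which is what gives the scan its O(log L) parallel depth.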
S5 achieves state-of-the-art performance on a variety of long-range sequence modeling tasks, with an LRA average of 87.4%, and 98.5% accuracy on the most difficult Path-X task.

2. BACKGROUND

We provide the necessary background in this section prior to introducing the S5 layer in Section 3.

2.1. LINEAR STATE SPACE MODELS

Continuous-time linear SSMs are the core component of both the S4 layer and the S5 layer. Given an input signal u(t) ∈ R^U, a latent state x(t) ∈ R^P and an output signal y(t) ∈ R^M, a linear continuous-time SSM is defined by the differential equation

    dx(t)/dt = A x(t) + B u(t),    y(t) = C x(t) + D u(t),    (1)

and is parameterized by a state matrix A ∈ R^{P×P}, an input matrix B ∈ R^{P×U}, an output matrix C ∈ R^{M×P} and a feedthrough matrix D ∈ R^{M×U}. For a constant step size, ∆, the SSM can be discretized using, e.g., Euler, bilinear or zero-order hold (ZOH) methods to define the linear recurrence

    x_k = Ā x_{k-1} + B̄ u_k,    y_k = C̄ x_k + D̄ u_k,    (2)

where the discrete-time parameters are each a function, specified by the discretization method, of the continuous-time parameters. See Iserles (2009) for more information on discretization methods.
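As a concrete illustration of one such discretization, the sketch below computes the ZOH discrete-time parameters Ā = exp(∆A) and B̄ = A⁻¹(Ā − I)B in plain Python/SciPy. This is a minimal sketch under the assumption that A is invertible (under ZOH, C̄ = C and D̄ = D are unchanged); the function name is illustrative and not from the S5 codebase.

```python
import numpy as np
from scipy.linalg import expm

def discretize_zoh(A, B, delta):
    """Zero-order-hold discretization of the continuous-time SSM (A, B).
    Returns (A_bar, B_bar) with A_bar = exp(delta * A) and
    B_bar = A^{-1} (A_bar - I) B, assuming A is invertible."""
    P = A.shape[0]
    A_bar = expm(delta * A)
    # Solve A @ B_bar = (A_bar - I) @ B instead of forming A^{-1} explicitly.
    B_bar = np.linalg.solve(A, (A_bar - np.eye(P)) @ B)
    return A_bar, B_bar
```

For a scalar SSM with A = −1, B = 1 and ∆ = 0.1, this gives Ā = e^{−0.1} and B̄ = 1 − e^{−0.1}, matching the exact response of the continuous system to a held input.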



The full S5 implementation is available at: https://github.com/lindermanlab/S5.



Figure 1: The computational components of an S5 layer for offline application to a sequence. The S5 layer uses a parallel scan on a diagonalized linear SSM to compute the SSM outputs y_{1:L} ∈ R^{L×H}. A nonlinear activation function is applied to the SSM outputs to produce the layer outputs. A similar diagram for S4 is included in Appendix B.

We use parallel scans to efficiently compute the states of a discretized linear SSM. Given a binary associative operator • (i.e., (a • b) • c = a • (b • c)) and a sequence of L elements [a_1, a_2, ..., a_L], the

