LEARNING FAST AND SLOW FOR ONLINE TIME SERIES FORECASTING

Abstract

Despite the recent success of deep learning for time series forecasting, these methods are not scalable for many real-world applications where data arrives sequentially. Training deep neural forecasters on the fly is notoriously challenging because of their limited ability to adapt to non-stationary environments and remember old knowledge. We argue that the fast adaptation capability of deep neural networks is critical and successful solutions require handling changes to both new and recurring patterns effectively. In this work, inspired by the Complementary Learning Systems (CLS) theory, we propose Fast and Slow learning Network (FSNet) as a novel framework to address the challenges of online forecasting. Particularly, FSNet improves the slowly-learned backbone by dynamically balancing fast adaptation to recent changes and retrieving similar old knowledge. FSNet achieves this mechanism via an interaction between two novel complementary components: (i) a per-layer adapter to support fast learning from individual layers, and (ii) an associative memory to support remembering, updating, and recalling repeating events. Extensive experiments on real and synthetic datasets validate FSNet's efficacy and robustness to both new and recurring patterns.

1. INTRODUCTION

Time series forecasting plays an important role in both research and industry. Correctly forecasting time series can greatly benefit various business sectors such as traffic management and electricity consumption (Hyndman & Athanasopoulos, 2018). As a result, tremendous efforts have been devoted to developing better forecasting models (Petropoulos et al., 2020; Bhatnagar et al., 2021; Triebe et al., 2021), with the recent success of deep neural networks (Li et al., 2019; Xu et al., 2021; Yue et al., 2021; Zhou et al., 2021) thanks to their impressive capability to discover hierarchical latent representations and complex dependencies. However, such studies focus on the batch learning setting, which requires the whole training dataset to be available a priori and implies that the relationship between inputs and outputs remains static throughout. This assumption is restrictive in real-world applications, where data arrives in a stream and the input-output relationship can change over time (Gama et al., 2014). In such cases, re-training the model from scratch could be time consuming. Therefore, it is desirable to train the deep forecaster online (Anava et al., 2013; Liu et al., 2016) using only new samples to capture the changing dynamics of the environment. Despite the ubiquity of online learning in many real-world applications, training deep forecasters online remains challenging for two reasons. First, naively training deep neural networks on data streams requires many samples to converge (Sahoo et al., 2018; Aljundi et al., 2019a) because the offline training benefits such as mini-batches or training for multiple epochs are not available. Therefore, when a distribution shift happens (Gama et al., 2014), such cumbersome models would require many samples to learn new concepts with satisfactory results. Overall, deep neural networks, although they possess strong representation learning capabilities, lack a mechanism to facilitate successful learning on data streams.
Second, time series data often exhibit recurrent patterns, where one pattern could become inactive and re-emerge in the future. Since deep networks suffer from catastrophic forgetting (McCloskey & Cohen, 1989), they cannot retain prior knowledge, resulting in inefficient learning of recurring patterns, which further hinders the overall performance. Consequently, online time series forecasting with deep models presents a promising yet challenging problem. To address the above limitations, we radically formulate online time series forecasting as an online, task-free continual learning problem (Aljundi et al., 2019a; b). Particularly, continual learning requires balancing two objectives: (i) utilizing past knowledge to facilitate fast learning of current patterns; and (ii) maintaining and updating the already acquired knowledge. These two objectives closely match the aforementioned challenges and are usually referred to as the stability-plasticity dilemma (Grossberg, 1982). With this connection, we develop an effective online time series forecasting framework motivated by the Complementary Learning Systems (CLS) theory (McClelland et al., 1995; Kumaran et al., 2016), a neuroscience framework for human continual learning. Specifically, the CLS theory suggests that humans can continually learn thanks to the interactions between the hippocampus and the neocortex, which support the consolidation, recall, and update of experiences to form a more general representation that generalizes to new experiences. This work develops FSNet (Fast-and-Slow learning Network) to enhance the sample efficiency of deep networks when dealing with distribution shifts or recurring concepts in online time series forecasting. FSNet's key idea for fast learning is to always improve the learning at the current step instead of explicitly detecting changes in the environment.
To do so, FSNet employs a per-layer adapter to model the temporal consistency in time series and adjust each intermediate layer to learn better, which in turn improves the learning of the whole deep network. In addition, FSNet further equips each adapter with an associative memory (Kaiser et al., 2017) to store important, recurring patterns observed during learning. When encountering such events, the adapter interacts with its memory to retrieve and update the previous actions to further facilitate fast learning. Consequently, the adapter can model the temporal smoothness in time series to facilitate learning, while its interactions with the associative memories support remembering and improving the learning of recurring patterns. In summary, our work makes the following contributions. First, we radically formulate learning fast in online time series forecasting with deep models as a continual learning problem. Second, motivated by the CLS theory, we propose the fast-and-slow learning paradigm of FSNet to handle both the fast-changing and long-term knowledge in time series. Lastly, we conduct extensive experiments on both real and synthetic datasets to demonstrate FSNet's efficacy and robustness.

2. PRELIMINARY AND RELATED WORK

This section provides the necessary background of time series forecasting and continual learning.

2.1. TIME SERIES FORECASTING SETTINGS

Let X = (x_1, . . . , x_T) ∈ R^{T×n} be a time series of T observations, each with n dimensions. The goal of time series forecasting is, given a look-back window of length e ending at time i, X_{i,e} = (x_{i−e+1}, . . . , x_i), to predict the next H steps of the time series as f_ω(X_{i,e}) = (x_{i+1}, . . . , x_{i+H}), where ω denotes the parameters of the forecasting model. We refer to a pair of look-back and forecast windows as a sample. For multiple-step forecasting (H > 1), we follow the standard approach of employing a linear regressor to forecast all H steps in the horizon simultaneously (Zhou et al., 2021). Online Time Series Forecasting is ubiquitous in many real-world scenarios (Anava et al., 2013; Liu et al., 2016; Gultekin & Paisley, 2018; Aydore et al., 2019) due to the sequential nature of data. In this setting, there is no separation of training and evaluation. Instead, learning occurs over a sequence of rounds. At each round, the model receives a look-back window and predicts the forecast window. Then, the true answer is revealed to improve the model's predictions in the incoming rounds (Hazan, 2019). The model is commonly evaluated by its accumulated errors throughout learning (Sahoo et al., 2018). Due to its challenging nature, online time series forecasting exhibits several challenging sub-problems, ranging from learning under concept drifts (Gama et al., 2014) to dealing with missing values caused by irregularly sampled data (Li & Marlin, 2020; Gupta et al., 2021). In this work, we focus on the problem of fast learning (in terms of sample efficiency) under concept drifts by improving the deep network's architecture and recalling relevant past knowledge. There is also a rich literature on Bayesian continual learning for regression problems (Smola et al., 2003; Kurle et al., 2019; Gupta et al., 2021).
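The sample construction and the direct multi-step linear head can be sketched as follows; the helper names and shapes are illustrative assumptions (NumPy only), not the paper's implementation:

```python
import numpy as np

def make_sample(X, i, e, H):
    """Slice one (look-back, forecast) pair from a series X of shape (T, n).

    The look-back window ends at index i (inclusive); the forecast window
    covers the next H steps, matching X_{i,e} and (x_{i+1}, ..., x_{i+H}).
    """
    lookback = X[i - e + 1 : i + 1]    # shape (e, n)
    horizon = X[i + 1 : i + 1 + H]     # shape (H, n)
    return lookback, horizon

def linear_forecast(W, b, z, H, n):
    """Direct multi-step forecasting: one linear head maps a feature
    vector z to all H * n outputs simultaneously."""
    return (W @ z + b).reshape(H, n)

# Toy univariate series: T = 20, n = 1.
X = np.arange(20, dtype=float).reshape(-1, 1)
lb, hz = make_sample(X, i=9, e=4, H=3)   # lb covers x_6..x_9, hz covers x_10..x_12
```

In practice the feature vector z would be the backbone's representation of the look-back window; here it is left abstract.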
However, such formulations follow the Bayesian framework, which allows for forgetting past knowledge and does not have an explicit mechanism for fast learning (Huszár, 2017; Kirkpatrick et al., 2018). Moreover, such studies did not focus on deep neural networks, and it is non-trivial to extend them to the setting of our study.

2.2. CONTINUAL LEARNING

Continual learning (Kirkpatrick et al., 2017) is an emerging topic aiming to build intelligent agents that can learn a series of tasks sequentially, with only limited access to past experiences. A continual learner must achieve a good trade-off between maintaining the acquired knowledge of previous tasks and facilitating the learning of future tasks, which is also known as the stability-plasticity dilemma (Grossberg, 1982; 2013). Due to its connections to human learning, several neuroscience frameworks have motivated the development of various continual learning algorithms. One popular framework is the complementary learning systems theory for a dual learning system (McClelland et al., 1995; Kumaran et al., 2016). Continual learning methods inspired by the CLS theory augment the slow, deep networks with the ability to quickly learn on data streams, either via the experience replay mechanism (Lin, 1992; Riemer et al., 2019; Rolnick et al., 2019; Aljundi et al., 2019a; Buzzega et al., 2020) or via explicit modeling of each of the fast and slow learning components (Pham et al., 2021a; Arani et al., 2021). Such methods have demonstrated promising results on controlled vision or language benchmarks. In contrast, our work addresses the online time series forecasting challenges by formulating them as a continual learning problem.

3. PROPOSED FRAMEWORK

This section formulates online time series forecasting as a task-free online continual learning problem and details the proposed FSNet framework.

3.1. ONLINE TIME SERIES FORECASTING AS A CONTINUAL LEARNING PROBLEM

Our formulation is motivated by the observation from locally stationary stochastic processes that a time series can be split into a sequence of stationary segments (Vogt, 2012; Dahlhaus, 2012; Das & Nason, 2016). Since the same underlying process generates samples from a stationary segment, we refer to forecasting each stationary segment as a learning task for continual learning. We note that this formulation is general and encompasses existing learning paradigms. For example, splitting into only one segment indicates no concept drifts, and learning reduces to online learning in stationary environments (Hazan, 2019). Online continual learning (Aljundi et al., 2019a) corresponds to the case where there are at least two segments. Moreover, we also do not assume that the points of task switch are given to the model, which is a common setting in many continual learning studies (Kirkpatrick et al., 2017; Lopez-Paz & Ranzato, 2017). Manually obtaining such information in real-world data can be expensive because of missing or irregularly sampled data (Li & Marlin, 2020; Farnoosh et al., 2021). Therefore, our formulation corresponds to the online, task-free continual learning formulation (Aljundi et al., 2019a; b; Hu et al., 2020; Cai et al., 2021). We now discuss the differences between our formulation and existing studies. First, most existing task-free continual learning frameworks (Aljundi et al., 2019b; Pham et al., 2021a) are developed for image data, which vastly differs from time series. The input and label spaces of images are different (continuous vs. discrete) while a time series' input and output share the same real-valued space. Additionally, an image's label changes significantly across tasks while time series data changes gradually over time with no clear boundary. Moreover, time series exhibit strong temporal dependencies among consecutive samples, which do not exist in image data.
Therefore, it is non-trivial to simply apply existing continual learning methods to time series, and successful solutions require carefully handling the unique characteristics of time series data. Second, time series evolve and old patterns may not reappear exactly in the future. Thus, we are not interested in remembering old patterns precisely but in predicting how they will evolve. For example, we do not need to predict the electricity consumption over the last winter; it is more important to predict the electricity consumption this winter, assuming that it is likely to have a similar pattern as the last one. Therefore, we do not need a separate test set for evaluation; instead, training follows the online learning setting where a model is evaluated by its accumulated errors throughout learning.
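The online protocol underlying this evaluation can be sketched as a simple loop; `model` and `update` below are hypothetical hooks standing in for any forecaster and its training step:

```python
import numpy as np

def online_rounds(X, e, H, model, update):
    """Run the online protocol: at each round, predict the next H steps
    from the current look-back window, then reveal the ground truth,
    update the model, and record the forecasting error."""
    losses = []
    for i in range(e - 1, len(X) - H):
        lookback = X[i - e + 1 : i + 1]
        pred = model(lookback)                 # forecast window, shape (H, n)
        truth = X[i + 1 : i + 1 + H]           # revealed after prediction
        losses.append(float(np.mean((pred - truth) ** 2)))
        update(lookback, truth)                # one gradient step in practice
    return np.cumsum(losses)                   # accumulated error over rounds

# Toy run with a persistence baseline (repeat the last observation).
X = np.ones((12, 1))
cum = online_rounds(X, e=4, H=2,
                    model=lambda lb: np.repeat(lb[-1:], 2, axis=0),
                    update=lambda lb, y: None)
```

The returned cumulative sum is exactly the "accumulated errors throughout learning" used for evaluation in this setting.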

3.2. FAST AND SLOW LEARNING NETWORKS (FSNET)

FSNet always leverages past knowledge to improve the learning in the future (Section 3.2.1), which is akin to facilitating forward transfer in continual learning (Lopez-Paz & Ranzato, 2017). Additionally, FSNet remembers repeating events and continues to learn them when they reappear (Section 3.2.2), which is akin to preventing catastrophic forgetting (Kirkpatrick et al., 2017). We consider the Temporal Convolutional Network (TCN) (Bai et al., 2018) as the backbone deep neural network to extract a time series feature representation due to its simple forward architecture and promising results (Yue et al., 2021). The backbone has L layers with parameters θ = {θ_l}_{l=1}^L. FSNet improves the TCN backbone with two complementary components: a per-layer adapter ϕ_l and a per-layer associative memory M_l. Thus, the total trainable parameters are ω = {θ_l, ϕ_l}_{l=1}^L and the total associative memory is M = {M_l}_{l=1}^L. We also use h_l and h̃_l to denote the original and adapted feature maps of the l-th layer. Figure 1 provides an illustration of FSNet.

3.2.1. FAST LEARNING MECHANISM

The key idea allowing for fast learning is to facilitate the learning of each intermediate layer via the following observation: the partial derivative ∇_{θ_l} ℓ characterizes the contribution of layer θ_l to the forecasting loss ℓ. Traditional training schemes simply move the parameters along this gradient direction, which results in ineffective online learning (Sahoo et al., 2018; Phuong & Lampert, 2019). Moreover, time series data exhibits strong temporal consistency across consecutive samples, which is not captured by existing training frameworks. Putting these observations together, we argue that an exponential moving average (EMA) of the partial derivative can provide meaningful information about the temporal smoothness in time series. Consequently, leveraging this knowledge can improve the learning of each layer, which in turn improves the whole network's performance. To utilize the gradient EMA, we propose to treat it as a context to support fast learning via the feature-wise transformation framework (Perez et al., 2018; Dumoulin et al., 2018; Pham et al., 2021b; Yin et al., 2021). Particularly, we propose to equip each layer with an adapter that maps the layer's gradient EMA to a set of smaller, more compact transformation coefficients. These coefficients are applied to the corresponding layer's parameters and features so that they can leverage the temporal consistency to learn better. We first define the EMA of the l-th layer's partial derivative as: ĝ_l ← γ ĝ_l + (1 − γ) g_l^t, where g_l^t denotes the gradient of the l-th layer at time t and ĝ_l denotes its EMA. The adapter for the l-th layer is a linear layer that takes ĝ_l as input and maps it to a set of transformation coefficients u_l = [α_l; β_l]. In this work, we consider a two-stage transformation (Yin et al., 2021) involving weight and bias transformation coefficients α_l and feature transformation coefficients β_l.
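The EMA update is a one-liner; the sketch below (NumPy, with γ = 0.9 as an illustrative value) mirrors the formula ĝ_l ← γ ĝ_l + (1 − γ) g_l^t:

```python
import numpy as np

def ema_update(g_hat, g_t, gamma=0.9):
    """One step of the per-layer gradient EMA: g_hat <- gamma * g_hat
    + (1 - gamma) * g_t, applied element-wise to the layer's gradient."""
    return gamma * g_hat + (1.0 - gamma) * g_t

# Starting from a zero EMA, one step with a unit gradient.
g_hat = ema_update(np.zeros(4), np.ones(4), gamma=0.9)
```

In training, this update would run once per online round, right after backpropagation computes g_l^t for each layer.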
The adaptation process for a layer θ_l is summarized as:

[α_l; β_l] = u_l, where u_l = Ω(ĝ_l; ϕ_l),
Weight adaptation: θ̃_l = tile(α_l) ⊙ θ_l,
Feature adaptation: h̃_l = tile(β_l) ⊙ h_l, where h_l = θ̃_l ⊛ h̃_{l−1}.

Here, h_l is a stack of I feature maps with C channels and length Z, h̃_l is the adapted feature, θ̃_l denotes the adapted weight, ⊙ denotes element-wise multiplication, and tile(α_l) indicates that the weight adaptor is applied per-channel on all filters via a tile function that repeats a vector along the new axes. A naive implementation of Equation 2 would directly map the model's gradient to the adaptation coefficients, resulting in a very high dimensional mapping. Therefore, we implement the chunking operation (Ha et al., 2016) to split the gradient into equal-sized chunks and then map each chunk to an element of the adaptation coefficients. We denote this chunking operator as Ω(·; ϕ_l) and provide a detailed description in Appendix C.
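A minimal sketch of the chunking operator and the two-stage adaptation might look as follows; the shapes, the single shared linear map `W`, and the broadcasting used to emulate `tile` are simplifying assumptions rather than the paper's exact design:

```python
import numpy as np

def chunk_adapter(g_flat, W, num_chunks):
    """Chunking operator: split the flattened gradient EMA into equal
    chunks, then map each chunk to one pair of adaptation coefficients
    via a shared linear layer W (a simplified stand-in for the adapter)."""
    chunks = g_flat.reshape(num_chunks, -1)   # (num_chunks, chunk_dim)
    u = chunks @ W                            # (num_chunks, 2): [alpha; beta]
    alpha, beta = u[:, 0], u[:, 1]
    return alpha, beta

def adapt(theta, h, alpha, beta):
    """Per-channel weight and feature adaptation: broadcasting alpha/beta
    over the trailing axis plays the role of the tile function."""
    theta_tilde = alpha[:, None] * theta      # adapted weight
    h_tilde = beta[:, None] * h               # adapted feature map
    return theta_tilde, h_tilde

# Toy shapes: 2 channels, filters of width 3, features of length 5.
a, b = chunk_adapter(np.ones(8), np.ones((4, 2)), num_chunks=2)
tt, ht = adapt(np.ones((2, 3)), np.ones((2, 5)), a, b)
```

Because the adapter consumes the gradient EMA rather than the raw input, the same machinery works regardless of the backbone layer's type.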

3.2.2. REMEMBERING RECURRING EVENTS WITH AN ASSOCIATIVE MEMORY

In time series, old patterns may reappear, and it is imperative to leverage our past actions to improve the learning outcomes. In FSNet, an adaptation to a pattern is represented by the coefficients u, which we argue to be useful for learning repeating events. Specifically, u represents how we adapted to a particular pattern in the past; thus, storing and retrieving the appropriate u may facilitate learning the corresponding pattern when it reappears. Therefore, as the second key element in FSNet, we implement an associative memory to store the adaptation coefficients of repeating events encountered during learning. In summary, besides the adapter, we equip each layer with an additional associative memory M_l ∈ R^{N×d}, where d denotes the dimension of u_l and N denotes the number of elements, which we fix as N = 32 by default. Sparse Adapter-Memory Interactions Interacting with the memory at every step is expensive and susceptible to noise. Thus, we propose to trigger this interaction only upon a substantial representation change. Interference between the current and past representations can be characterized in terms of a dot product between the gradients (Lopez-Paz & Ranzato, 2017; Riemer et al., 2019). To this end, together with the gradient EMA in Equation 1, we deploy another gradient EMA ĝ′_l with a smaller coefficient γ′ < γ and measure their cosine similarity to trigger the memory interaction as:

Trigger if: cos(ĝ_l, ĝ′_l) = (ĝ_l · ĝ′_l) / (‖ĝ_l‖ ‖ĝ′_l‖) < −τ,

where τ > 0 is a hyper-parameter determining the degree of interference deemed significant. Moreover, we want to set τ to a relatively high value (e.g. 0.75) so that the memory only remembers significantly changing patterns, which could be important and may reappear.
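The trigger condition can be sketched directly from the formula (the default τ = 0.75 follows the text; the two EMAs are assumed to be flattened vectors):

```python
import numpy as np

def memory_trigger(g_hat, g_hat_prime, tau=0.75):
    """Fire a memory interaction when the cosine similarity between the
    fast EMA g_hat and the slower EMA g_hat_prime drops below -tau,
    signalling substantial interference with past representations."""
    cos = g_hat @ g_hat_prime / (
        np.linalg.norm(g_hat) * np.linalg.norm(g_hat_prime))
    return cos < -tau
```

Opposing gradient EMAs (cosine near −1) fire the trigger, while aligned or mildly conflicting ones do not, so memory traffic stays sparse.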
The Adapter-Memory Interacting Mechanism Since the current adaptation coefficients may not capture the whole event, which could span over a few samples, we perform the memory read and write operations using the EMA of the adaptation coefficients (with coefficient γ′) to fully capture the current pattern. The EMA of u_l is calculated in the same manner as Equation 1. When a memory interaction is triggered, the adapter queries and retrieves the most similar transformations in the past via an attention read operation, which is a weighted sum over the memory items:

1. Attention calculation: r_l = softmax(M_l û_l);
2. Top-k selection: r_l^(k) = TopK(r_l);
3. Retrieval: ũ_l = Σ_{i=1}^{K} r_l^(k)[i] M_l[i],

where r_l^(k)[i] denotes the i-th element of r_l^(k) and M_l[i] denotes the i-th row of M_l. Since the memory could store conflicting patterns, we employ a sparse attention by retrieving the top-k most relevant memory items, which we fix as k = 2. The retrieved adaptation coefficient characterizes how the model reacted to the current pattern in the past, which can improve learning at the present by combining with the current coefficients as u_l ← τ u_l + (1 − τ) ũ_l, where we use the same value of τ as in Equation 5. Then we perform a write operation to update the knowledge stored in M_l as:

M_l ← τ M_l + (1 − τ) û_l ⊗ r_l^(k) and M_l ← M_l / max(1, ‖M_l‖_2),

where ⊗ denotes the outer-product operator, which allows us to efficiently write the new knowledge to the most relevant locations indicated by r_l^(k) (Rae et al., 2016; Kaiser et al., 2017). The memory is then normalized to avoid its values exploding. We provide FSNet's pseudo code in Appendix C.2.
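A sketch of the read and write operations on one layer's memory M_l ∈ R^{N×d} might look as follows; the softmax/top-k details and the orientation of the outer product (attention weights indexing the N rows) are our assumptions for a self-contained example:

```python
import numpy as np

def memory_read(M, u_hat, k=2):
    """Sparse attention read: softmax over memory rows, keep only the
    top-k attention weights, and return their weighted sum of rows."""
    scores = M @ u_hat                       # similarity to each of N rows
    r = np.exp(scores - scores.max())
    r /= r.sum()                             # softmax attention weights
    topk = np.argsort(r)[-k:]                # indices of the top-k rows
    u_tilde = sum(r[i] * M[i] for i in topk)
    return u_tilde, r

def memory_write(M, u_hat, r, tau=0.75):
    """Outer-product write followed by normalization: M <- tau * M
    + (1 - tau) * outer(r, u_hat), then M <- M / max(1, ||M||)."""
    M = tau * M + (1.0 - tau) * np.outer(r, u_hat)
    return M / max(1.0, np.linalg.norm(M))

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 3))              # N = 4 items of dimension d = 3
u_hat = rng.standard_normal(3)
u_tilde, r = memory_read(M, u_hat, k=2)
M2 = memory_write(M, u_hat, r)
```

The normalization step keeps the memory norm bounded so that repeated writes for the same pattern cannot make its values explode.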

4. EXPERIMENT

Our experiments aim to investigate the following hypotheses: (i) FSNet facilitates faster adaptation to both new and recurring concepts compared to existing strategies with deep models; (ii) FSNet achieves faster and better convergence than other methods; and (iii) modeling the partial derivative is the key ingredient for fast adaptation. Due to space constraints, we provide the key information of the experimental setting in the main paper and give full details, including memory analyses, additional visualizations and results, in the Appendix.

4.1. EXPERIMENTAL SETTINGS

Datasets We explore a wide range of time series forecasting datasets. ETT (Zhou et al., 2021) records the target value of "oil temperature" and 6 power load features over a period of two years. We consider the ETTh2 and ETTm1 benchmarks, where the observations are recorded hourly and at 15-minute intervals, respectively. ECL (Electricity Consuming Load) collects the electricity consumption of 321 clients from 2012 to 2014. Traffic records the road occupancy rates at San Francisco Bay Area freeways. Weather records 11 climate features from nearly 1,600 locations in the U.S. at hourly intervals from 2010 to 2013. We also construct two synthetic datasets to explicitly test the model's ability to deal with new and recurring concept drifts. We synthesize a task by sampling 1,000 samples from a first-order autoregressive process with coefficient φ, AR_φ(1), where different tasks correspond to different φ values. The first synthetic dataset, S-Abrupt (S-A), contains abrupt and recurrent concepts where the samples abruptly switch from one AR process to another in the following order: AR_0.1(1), AR_0.4(1), AR_0.6(1), AR_0.1(1), AR_0.3(1), AR_0.6(1). The second dataset, S-Gradual (S-G), contains gradual, incremental shifts, where the shift starts at the last 20% of each task. In this scenario, the last 20% of samples of a task are an average of the two adjacent AR processes in the above order. Note that we randomly chose the values of φ so that these datasets do not give unfair advantages to any methods. Baselines We consider a suite of baselines from continual learning, time series forecasting, and online learning. First, the OnlineTCN strategy simply trains continuously (Zinkevich, 2003). Second, we consider the Experience Replay (ER) (Lin, 1992; Chaudhry et al., 2019) strategy, where a buffer is employed to store previous samples and interleave them during the learning of newer ones. We also include three recent, advanced variants of ER.
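The S-Abrupt stream can be reproduced in a few lines; the noise scale and random seed below are assumptions not specified in the text:

```python
import numpy as np

def ar1(phi, n_samples, sigma=0.3, rng=None):
    """Sample one segment from a first-order autoregressive process:
    x_t = phi * x_{t-1} + eps_t (noise scale sigma is an assumption)."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = np.zeros(n_samples)
    for t in range(1, n_samples):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

def s_abrupt(phis=(0.1, 0.4, 0.6, 0.1, 0.3, 0.6), n_per_task=1000):
    """S-Abrupt: concatenate AR(1) segments so concepts switch abruptly
    and earlier coefficients recur later in the stream."""
    rng = np.random.default_rng(0)
    return np.concatenate([ar1(p, n_per_task, rng=rng) for p in phis])
```

S-Gradual would additionally blend each segment's last 20% of samples with the next segment's process instead of switching abruptly.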
First, TFCL (Aljundi et al., 2019b) introduces a task-boundary detection mechanism and a knowledge consolidation strategy by regularizing the networks' outputs (Aljundi et al., 2018). Second, MIR (Aljundi et al., 2019a) replaces the random sampling in ER by selecting the samples that cause the most forgetting. Lastly, DER++ (Buzzega et al., 2020) augments the standard ER with a knowledge distillation strategy (Hinton et al., 2015). We emphasize that ER and its variants are strong baselines in the online setting since they enjoy the benefits of training on mini-batches, which greatly reduces noise from single samples and offers faster, better convergence (Bottou et al., 1998). While the aforementioned baselines use a TCN backbone, we also include Informer (Zhou et al., 2021), a recent time series forecasting method based on the transformer architecture (Vaswani et al., 2017). We remind the readers that online time series forecasting has not been widely studied with deep models; therefore, we include general strategies from the related fields that inspired our method. Such baselines are competitive and yet general enough to extend to our problem.

Implementation Details

We split the data into warm-up and online training phases by the ratio of 25:75. We follow the optimization details in (Zhou et al., 2021) by optimizing the ℓ2 (MSE) loss with the AdamW optimizer (Loshchilov & Hutter, 2017). In the warm-up phase, we calculate the statistics to normalize online training samples, perform hyper-parameter cross-validation, and pre-train the models. During online learning, both the epoch and batch size are set to one to follow the online learning setting. These configurations are applied to all baselines. We ensure a fair comparison by making sure that all baselines use the same total memory budget as our FSNet, which amounts to three times the network size: one working model and two EMAs of its gradient. Thus, we set the buffer size of ER, MIR, and DER++ to meet this budget while increasing the backbone size of the remaining baselines. Lastly, for all benchmarks, we set the look-back window length to 60 and vary the forecast horizon as H ∈ {1, 24, 48}. The reported numbers are averaged over five runs, and we provide the standard deviations in Table 3, Appendix E.1. We observe that ER and its variants (MIR, DER++) are strong competitors and can significantly improve over the simple TCN strategies. However, such methods still cannot work well under multiple task switches (S-Abrupt). Moreover, the lack of clear task boundaries (S-Gradual) presents an even more challenging problem and increases most models' errors. In addition, previous work has observed that TCN can outperform Informer in standard time series forecasting (Woo et al., 2022).

Cumulative Performance

Here we also observe similar results in that Informer does not perform well in the online setting and is outperformed by other baselines. On the other hand, our FSNet shows promising results on all datasets and outperforms most competing baselines across different forecasting horizons. Moreover, the significant improvements on the synthetic datasets indicate FSNet's ability to quickly adapt to non-stationary environments and recall previous knowledge, even without clear task boundaries. Figure 2 reports the convergent behaviors of the considered methods. We omit the S-Gradual dataset for space reasons because we observe the same behavior as on S-Abrupt. Interestingly, we observe that concept drifts are likely to happen in most datasets because of the loss curves' sharp peaks. Moreover, such drifts appear at the early stage of learning, mostly in the first 40% of data, while the remaining data is quite stationary. This result shows that traditional batch training is often too optimistic by only testing the model on the last data segment. The results clearly show the benefits of ER in offering faster convergence during learning compared to OnlineTCN. However, storing the original data may not be applicable in many applications. On S-Abrupt, most baselines demonstrate an inability to quickly recover from concept drifts, indicated by the increasing trend in the error curves. We also observe promising results of FSNet on most datasets, with significant improvements over the baselines on the ETT, WTH, and S-Abrupt datasets. The remaining datasets are more challenging, with missing values (Li et al., 2019) and large magnitude variation within and across dimensions, which may require calculating better data normalization statistics. While FSNet achieved encouraging results, handling the above challenges can further improve its performance. Overall, the results shed light on the challenges of online time series forecasting and demonstrate promising results of FSNet.

Convergent Behaviors of Different Learning Strategies

Visualization We explore the model's prediction quality on S-Abrupt since it is a univariate time series; the remaining multivariate real-world datasets are more challenging to visualize. Particularly, we are interested in the models' behaviour when an old task reappears. Therefore, in Figure 3, we plot the models' forecasts at various time points after t = 3000. We can see the difficulty of training deep neural networks online in that the model struggles to learn at the early stages, where it has only observed a few samples. We focus on the early stages of task switches (e.g. the first 200 samples), which require the model to quickly adapt to the distribution shifts. With the limited samples per task and the presence of multiple concept drifts, the standard online optimization collapsed to a naive solution of predicting random noise around zero. However, FSNet can successfully capture the time series' patterns and provide better forecasts as learning progresses. Overall, we can clearly see that FSNet provides better quality forecasts than the other baselines.

4.3. ABLATION STUDIES OF FSNET'S DESIGN

This experiment analyzes the contribution of each of FSNet's components. First, we explore the benefits of the associative memory (Section 3.2.2) by constructing a No Memory variant that only uses an adapter, without the memory. Second, we further remove the adapter, which results in the Naive variant that directly trains the adaptation coefficients u jointly with the backbone. The Naive variant demonstrates the benefits of monitoring the layer's gradients, our key idea for fast adaptation (Section 3.2.1). Lastly, we explore FSNet's scalability by increasing the associative memory size from 32 items (original) to a larger scale of 128 items. We report the results in Table 2. We first observe that FSNet achieves similar results to the No Memory variant on the Traffic and S-Gradual datasets. One possible reason is the insignificant representation interference in the Traffic dataset and the slowly changing representations in the S-Gradual dataset. In such cases, the representation changes can be easily captured by the adapter alone and may not trigger the memory interactions. In contrast, on ETTh2 and S-Abrupt, which may have sudden drifts, we clearly observe the benefits of storing and recalling the model's past actions to facilitate learning of repeating events. Second, the Naive variant does not achieve satisfactory results, indicating the benefits of modeling the temporal smoothness in time series via the gradient EMA. Lastly, the large memory variant of FSNet provides improvements in most cases, indicating FSNet's scalability with a larger budget. Overall, these results demonstrate the complementarity of FSNet's components in dealing with different types of concept drift in time series.

5. CONCLUSION

We have investigated the potential and limitations of training deep neural networks for online time series forecasting in non-stationary environments, where they lack the capability to adapt to new or recurring patterns quickly. We then proposed Fast and Slow learning Networks (FSNet) by extending the CLS theory for continual learning to online time series forecasting. FSNet augments a neural network backbone with two key components: (i) an adapter for fast learning; and (ii) an associative memory to handle recurrent patterns. Moreover, the adapter sparsely interacts with its memory to store, update, and retrieve important recurring patterns to facilitate learning of such events in the future. Extensive experiments demonstrate FSNet's capability to deal with various types of concept drifts and achieve promising results on both real-world and synthetic time series data.

B EXTENDED RELATED WORK

This section is an extended version of Section 2 where we discuss in more details the existing time series forecasting and continual learning studies.

B.1 TIME SERIES FORECASTING

Time series forecasting is an important problem that has been extensively studied in the literature. Traditional methods such as ARMA, ARIMA (Box et al., 2015), and the Holt-Winters seasonal methods (Holt, 2004) enjoy theoretical guarantees. However, they lack the capacity to model the more complex interactions of real-world data. As a result, they cannot handle the complex interactions among different dimensions of a time series, and often achieve inferior performance compared to deep neural networks on multivariate time series data (Zhou et al., 2021; Oreshkin et al., 2019). Recently, learning good time series representations has shown promising results, and deep learning models have surpassed such traditional methods on large-scale benchmarks (Rubanova et al., 2019; Zhou et al., 2021; Oreshkin et al., 2019). Early deep learning approaches built upon standard MLP models (Oreshkin et al., 2019) or recurrent networks such as LSTMs (Salinas et al., 2020). More recently, temporal convolution (Yue et al., 2021) and transformer (Li et al., 2019; Xu et al., 2021) networks have achieved promising results on a wide range of real-world time series. However, such methods assume a static world in which the information needed to forecast the future is fully provided in the look-back window. As a result, they lack the ability to remember events beyond the look-back window and to adapt to changing environments on the fly. In contrast, our FSNet framework addresses these limitations with novel adapter and memory components.

B.2 CONTINUAL LEARNING

Human learning has inspired the design of several strategies to enable continual learning in neural networks. One successful framework is the Complementary Learning Systems (CLS) theory (McClelland et al., 1995; Kumaran et al., 2016), which decomposes learning into two processes: learning fast (hippocampus) and slow (neocortex). While the hippocampus can quickly change and capture current information, possibly with the help of experience replay, the neocortex changes more slowly and only accumulates general knowledge. The two learning systems interact via a knowledge consolidation process, in which recent experiences in the hippocampus are transferred to the neocortex to form a more general representation. In addition, the hippocampus also queries information from the neocortex to facilitate the learning of new and recurring events. The CLS theory has motivated several designs in continual learning, such as experience replay (Chaudhry et al., 2019) and dual learning architectures (Pham et al., 2021a; Arani et al., 2021). In this work, we extend the fast and slow learning framework of the CLS theory to the online time series forecasting problem.

B.3 COMPARISON WITH EXISTING CONTINUAL LEARNING FOR TIME SERIES FORMULATIONS

This section provides a more comprehensive comparison between our formulation of online time series forecasting and the existing studies of He & Sick (2021); Jaeger (2017); Gupta et al. (2021); Kurle et al. (2019). We first summarize the scope of our study. We are mainly concerned with the online time series forecasting problem (Liu et al., 2016) and focus on addressing the challenge of fast adaptation to distribution shifts in this scenario. Particularly, when such distribution shifts happen, the model is required to take fewer training samples to achieve low errors, either by exploiting its representation capabilities or by reusing past knowledge. We focus on the class of deep feedforward neural networks, particularly the TCN, thanks to its powerful representation capabilities and ubiquity in sequential data applications (Bai et al., 2018). CLeaR (He & Sick, 2021) also attempted to model time series forecasting as a continual learning problem. However, CLeaR focuses on accumulating knowledge over a data stream without forgetting and is not concerned with fast adaptation under distribution shifts. Particularly, CLeaR's online training involves periodically calibrating the pre-trained model on new out-of-distribution samples using a continual learning strategy. Moreover, CLeaR only calibrates the model when a buffer of novel samples is filled. As a result, when a distribution shift occurs, CLeaR could suffer arbitrarily high errors until it accumulates enough samples for calibration. Therefore, CLeaR is not applicable to the online time series forecasting problem considered in our study. GR-IG (Gupta et al., 2021) also formulates time series forecasting as a continual learning problem. However, it addresses the challenge of input dimensions that vary over time, which could arise from the introduction of new sensors or from sensor failures. Therefore, motivated by continual learning, GR-IG can facilitate the learning of new tasks (sensors) for better forecasting.
However, GR-IG does not consider shifts in the observed distributions and focuses on learning new distributions that appear over time. Consequently, GR-IG is also not a direct comparison to our method. Lastly, we note Conceptors (Jaeger, 2017) as a potential approach to the time series forecasting problem. Conceptors are a class of neural memory that supports storing and retrieving patterns learned by a recurrent network. In this work, we choose the associative memory to maintain long-term patterns, which is more common for the deep feed-forward architectures used in our work. We believe that, with the necessary adaptation, it is possible to integrate Conceptors as the memory mechanism in FSNet, although this is beyond the scope of this work.

C FSNET DETAILS

C.1 CHUNKING OPERATION

In this section, we describe the adapter's chunking operation, which efficiently computes the adaptation coefficients. For convenience, we denote vec(·) as a vectorizing operation that flattens a tensor into a vector, and split(e, B) as splitting a vector e into B segments, each of size dim(e)/B. An adapter maps its backbone layer's gradient EMA to an adaptation coefficient u ∈ R^d via the chunking process:

ĝ_l ← vec(ĝ_l)
[b_1, b_2, . . . , b_d] ← split(ĝ_l, d)
[h_1, h_2, . . . , h_d] ← [W_ϕ^(1) b_1, W_ϕ^(1) b_2, . . . , W_ϕ^(1) b_d]
[u_1, u_2, . . . , u_d] ← [W_ϕ^(2) h_1, W_ϕ^(2) h_2, . . . , W_ϕ^(2) h_d],

where W_ϕ^(1) and W_ϕ^(2) denote the first and second weight matrices of the adapter. In summary, the chunking process consists of the following steps: (1) flatten the gradient EMA into a vector; (2) split the gradient vector into d chunks; (3) map each chunk to a hidden representation; and (4) map each hidden representation to a coordinate of the target adaptation coefficient u.
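The four steps above can be sketched in NumPy as follows. The shapes are illustrative assumptions (in particular, treating W_ϕ^(2) as mapping each hidden representation to a scalar coordinate); this is a sketch of the chunking idea, not the exact FSNet implementation.

```python
import numpy as np

def chunk_adapter(grad_ema, W1, W2, d):
    """Map a layer's gradient EMA to a d-dimensional adaptation coefficient u.

    grad_ema : gradient EMA tensor of a backbone layer (any shape)
    W1       : first adapter weight matrix, shape (hidden, dim(grad)/d)
    W2       : second adapter weights, shape (hidden,) -- one coordinate per chunk
    d        : number of chunks, i.e. the dimension of u
    """
    # (1) flatten the gradient EMA into a vector
    g = grad_ema.reshape(-1)
    # (2) split the gradient vector into d equally sized chunks
    chunks = np.split(g, d)
    # (3) map each chunk to a hidden representation
    hidden = [W1 @ b for b in chunks]
    # (4) map each hidden representation to one coordinate of u
    u = np.array([float(W2 @ h) for h in hidden])
    return u
```

Note that sharing W1 and W2 across chunks is what makes the mapping parameter-efficient: the adapter's size does not grow with the number of chunks.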

C.2 FSNET PSEUDO-ALGORITHM

Algorithm 1 provides the pseudo-code for FSNet.

Model and Total complexity

We analyze the model and total memory complexity, which arises from the model and the additional memory units. First, the standard TCN forecaster incurs an O(N + H) memory complexity, arising from the N parameters of the convolutional layers and an order of H parameters from the linear regressor. Second, we consider the replay-based strategies, which incur the same O(N + H) model complexity as OnlineTCN. For the total memory, they use an episodic memory to store previous samples, which costs O(E + H). Additionally, TFCL stores the importance of previous parameters while MIR makes a copy of the model for its virtual update, both of which cost O(N + H). Therefore, the total memory complexity of the replay strategies (ER, DER++, MIR, and TFCL) is O(N + E + H). Third, in FSNet, both the per-layer adapters and the associative memory cost a similar number of parameters as the convolutional layers, because they are matrices with the number of channels as one dimension. Therefore, asymptotically, FSNet also incurs a model and total complexity of O(N + H) with a small constant term. Table 5 summarizes the asymptotic memory complexities discussed so far. Table 4 shows the number of parameters used by different strategies on the ETTh2 dataset with a forecast window of H = 24. We consider the total parameters (model and memory) of FSNet as the total budget and adjust the other baselines to meet this budget. As analyzed above, FSNet's components, including the adapter, associative memory, and gradient EMA, require the same order of parameters as the convolutional layers in the backbone network. For the OnlineTCN strategy, we increase the number of convolutional filters so that it has roughly the same total parameters as FSNet. For ER and TFCL, we change the number of samples stored in the episodic memory.
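As a toy illustration of the asymptotic tally above, the dominant terms per method can be collected as follows (constant factors dropped; the grouping of methods is our simplification of Table 5):

```python
def memory_complexity(N, E, H):
    """Dominant total-memory terms per strategy, up to constants.

    N : parameters of the convolutional layers
    E : samples held in the episodic memory (replay methods only)
    H : parameters of the linear regressor (order of the forecast window)
    """
    return {
        "OnlineTCN": N + H,      # backbone + regressor only
        "ER/DER++":  N + E + H,  # adds an episodic memory of E samples
        "MIR/TFCL":  N + E + H,  # model copy / importances add another O(N + H)
        "FSNet":     N + H,      # adapters + associative memory are O(N)
    }
```

The point of the sketch is that FSNet's extra components grow with the backbone size N, not with a separate sample buffer E.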

Time Complexity

We report the throughput (samples/second) of different methods in Table 6. We can see that ER and DER++ have high throughput (low running time) compared to the others thanks to their simplicity. As FSNet introduces additional mechanisms that allow the network to take fewer samples to adapt to distribution shifts, its throughput is lower than that of ER and DER++. Nevertheless, FSNet remains more efficient than MIR and comparable to TFCL.

E.3 ROBUSTNESS OF HYPER-PARAMETER SETTINGS

This experiment explores the robustness of FSNet to different hyper-parameter settings. Particularly, we focus on the configuration of three hyper-parameters: (i) the gradient EMA coefficient γ; (ii) the short-term gradient EMA coefficient γ′; and (iii) the associative memory activation threshold τ. In general, we provide two guidelines to reduce the search space of these hyper-parameters: (i) set γ to a high value (e.g., 0.9) and γ′ to a small value (e.g., 0.3 or 0.4); (ii) set τ to be relatively high (e.g., 0.75). We report the results of several hyper-parameter configurations in Table 7. We observe no significant differences among these configurations. It is also worth noting that we use the same configuration for all experiments conducted in this work. Therefore, we conclude that FSNet is robust to these configurations.
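For intuition, the roles of γ, γ′, and τ can be sketched as below. We assume here that the memory interaction fires when the short- and long-term gradient EMAs point in substantially different directions (cosine similarity below −τ); this is a simplified reading of the interaction rule, not the exact implementation.

```python
import numpy as np

def update_and_check(g, g_long, g_short, gamma=0.9, gamma_p=0.3, tau=0.75):
    """One step of the two gradient EMAs and the (assumed) memory trigger.

    g       : current gradient of a backbone layer, flattened
    g_long  : long-term EMA (high coefficient gamma, e.g. 0.9)
    g_short : short-term EMA (low coefficient gamma_p, e.g. 0.3)
    """
    g_long = gamma * g_long + (1.0 - gamma) * g
    g_short = gamma_p * g_short + (1.0 - gamma_p) * g
    # trigger when the two EMAs strongly disagree in direction
    denom = np.linalg.norm(g_long) * np.linalg.norm(g_short) + 1e-12
    cos = float(g_long @ g_short) / denom
    trigger = cos < -tau
    return g_long, g_short, trigger
```

A high γ makes the long-term EMA change slowly, a low γ′ makes the short-term EMA track the most recent gradients, and a high τ ensures that only substantial disagreements (e.g., after a sudden drift) trigger the memory interaction.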

E.4 FSNET AND EXPERIENCE REPLAY

This experiment explores the complementarity between FSNet and experience replay (ER). We hypothesize that ER is a valuable component when learning on data streams because it introduces the benefits of mini-batch training to online learning. We implement a variant of FSNet with an episodic memory for experience replay and report its performance in Table 8. We can see that FSNet+ER outperforms FSNet in all cases, indicating the benefits of ER, even for FSNet. However, it is important to note that ER introduces an additional memory cost that scales with the look-back window. Lastly, in many real-world applications, storing previous data samples might be prohibited due to privacy concerns.
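A minimal sketch of such an episodic memory, assuming the standard reservoir-sampling buffer used by the ER-based strategies in our experiments (the capacity and sampling details here are illustrative):

```python
import random

class ReservoirBuffer:
    """Reservoir-sampling episodic memory: every sample observed so far
    has an equal probability of residing in the buffer."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        """Insert one (look-back window, forecast window) pair."""
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            # replace a random slot with probability capacity / n_seen
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, batch_size):
        """Draw a replay mini-batch without replacement."""
        k = min(batch_size, len(self.data))
        return self.rng.sample(self.data, k)
```

Replaying such mini-batches alongside the current sample is what recovers some of the benefits of offline mini-batch training in the online setting.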

E.5 VISUALIZATIONS E.5.1 VISUALIZATION OF THE SYNTHETIC DATASETS

We plot the raw data (before normalization) of the S-Abrupt and S-Gradual datasets in Figure 4 . 

E.5.2 ACTIVATION PATTERN OF FSNET

This experiment explores the associative memory activation patterns of FSNet. For this, we consider the S-Abrupt dataset with H = 1 and plot the activation patterns in Figure 5. Due to the large number of memory slots, we only plot the memory slot with the highest attention score at each step. Recall that in S-Abrupt, the first 3,000 samples belong to three different data distributions, and these distributions sequentially reappear in the last 3,000 samples, which are color-coded in Figure 5. First, we observe that not all layers are equally important for the task. Particularly, FSNet mostly uses the fourth and sixth layers, and rarely uses the deeper ones. Second, we note that FSNet's memory activations exhibit higher specialization in deeper layers. Particularly, only a single memory slot is activated in the fourth layer (circle marker) throughout training, possibly because this shallow layer learns generic representations shared by all patterns. On the other hand, deeper layers are activated according to the different distributions: the seventh layer's memory (triangle marker) is activated by the distribution in pink, while the ninth layer's memory (square and star markers) is activated by the remaining distributions. These observations are consistent with the representation learning patterns in deep networks, where shallow layers learn generic representations while deeper layers learn representations that are more specialized to different patterns (Olah et al., 2017).

F DISCUSSION AND FUTURE WORK

We discuss two scenarios where FSNet may not work well. First, we suspect that FSNet may struggle when concept drifts do not happen uniformly across all dimensions. This problem arises with irregularly sampled time series, where each dimension is sampled at a different rate. In this scenario, a concept drift in one dimension may trigger FSNet's memory interaction and affect the learning of the remaining ones.
Moreover, if a dimension is sampled too sparsely, it might be helpful to leverage relationships along both the time and spatial dimensions for a better result. Second, applications such as finance, which involve many complex repeating patterns, can be challenging for FSNet. In such cases, the number of repeating patterns may exceed the capacity of FSNet's memory, causing catastrophic forgetting. In addition, forecasting complex time series requires the network to learn a good representation, which may not be achieved by increasing the model complexity alone; in such cases, incorporating a representation learning component might be helpful. We now discuss several aspects for further study. We follow Informer in applying z-normalization per feature, which is a common strategy. This strategy works well in the batch setting because its statistics are estimated using 80% of the training data. However, after a concept drift in online learning, it is unreliable to use the previous statistics (estimated over 25% of the samples) to normalize samples from a new distribution. In such cases, it could be helpful to adaptively normalize samples from new distributions using the new distribution's statistics. This could be achieved via an online update of the normalization statistics or a sliding-window technique. In addition, while FSNet presents a general framework for forecasting time series online, adopting it in a particular application requires incorporating specific domain knowledge to ensure satisfactory performance. In summary, we firmly believe that FSNet is an encouraging first step towards a general solution to the important yet challenging problem of online time series forecasting.
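The online update of normalization statistics mentioned above could, for instance, use Welford's algorithm for running mean and variance. This is a sketch of one possible realization per feature, not a component of FSNet:

```python
import math

class OnlineZNorm:
    """Per-feature z-normalization with statistics updated online
    (Welford's algorithm), so new distributions gradually reshape
    the normalization instead of relying on stale training statistics."""

    def __init__(self, eps=1e-8):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0   # sum of squared deviations from the running mean
        self.eps = eps

    def update(self, x):
        """Incorporate one new observation of this feature."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def normalize(self, x):
        """Standardize x with the current running statistics."""
        var = self.m2 / max(self.n - 1, 1)
        return (x - self.mean) / math.sqrt(var + self.eps)
```

A sliding-window variant would instead keep only the most recent samples when computing the statistics, forgetting old distributions faster at the cost of extra storage.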



Dataset sources:
ETT: https://github.com/zhouhaoyi/ETDataset
ECL: https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
Traffic: https://pems.dot.ca.gov/
Weather: https://www.ncei.noaa.gov/data/local-climatological-data/



Figure 1: An overview of FSNet. A standard TCN backbone (a) of L dilated convolution stacks (b). Each convolution filter in FSNet is equipped with an adapter and associative memory to facilitate fast learning by monitoring the backbone's gradient EMA. Best viewed in colors.

Figure 2: Evolution of the cumulative MSE loss during training with forecasting window H = 24. In Figure 2.f, each color region denotes a data generating distribution. Best viewed in color.

Figure 3: Visualization of the model's predictions throughout the online learning process. We focus on a short horizon of 200 time steps after a concept drift, which is critical for fast learning.

Figure 4: Visualization of the raw S-Abrupt and S-Gradual datasets before normalization. Colored regions indicate the data generating distributions, where the same color denotes the same distribution. In S-Gradual, white regions indicate the gradual transition from one distribution to another. Best viewed in color.

Figure 5: Activation frequency of the memory slot with the highest attention score for each layer in FSNet on the S-Abrupt dataset. Same marker indicates the same memory slot. Each color region indicates a data generating distribution. Best viewed in color.

Final cumulative MSE and MAE of different methods, "-" indicates the model did not converge. S-A: S-Abrupt, S-G: S-Gradual. Best results are in bold.

Final cumulative MSE and MAE of different FSNet variants. Best results are in bold.

Summary of the model complexity on the ETTh2 dataset with forecast window H = 24. We report the number of floating points incurred by the backbone and different types of memory. GI = Gradient Importance (TFCL), G-EMA = Gradient Exponential Moving Average (FSNet), AM = Associative Memory (FSNet), EM = Episodic Memory (ER).

Summary of the model and total memory complexity of different methods. N denotes the number of parameters of the convolutional layers, E and H denote the look-back and forecast window lengths.

Asymptotic analysis We consider the TCN forecaster used throughout this work and analyze the model, total memory, and time complexities of the methods considered in our work. We let N denote the number of parameters of the convolutional layers, E denote the length of the look-back window, and H denote the length of the forecast window.

Throughput (samples/second) of different methods in our experiments with a forecast window of H = 1. FSNet is more efficient than MIR and comparable to TFCL, two common continual learning strategies.

Performance of FSNet with and without experience replay.

CODE AVAILABILITY

Our code is publicly available at: https://github.com/salesforce/fsnet/.

ETHICS STATEMENT

In this work, we used publicly available datasets for our experiments. We did not collect human or animal data during this study. Due to the abstract nature of this work, our method does not raise concerns regarding social/gender biases or privacy.

Algorithm 1 Fast and Slow learning Networks (FSNet)
Require: Two EMA coefficients γ′ < γ, memory interaction threshold τ
Init: backbone θ, adapter ϕ, associative memory M, regressor R, trigger = False
1: for t ← 1 to T do
2:   Receive the look-back window x_t

We use a first-order auto-regressive process model AR_φ(1), defined as X_t = φ X_{t-1} + ε_t, where the ε_t are random noises and the initial value is randomly generated. The S-Abrupt data switches abruptly among AR(1) processes with different coefficients, while the S-Gradual data transitions gradually between consecutive processes.

• OnlineTCN uses a standard TCN backbone (Woo et al., 2022) with 10 hidden layers, each of which has two stacks of residual convolution filters.
• ER (Chaudhry et al., 2019) augments the OnlineTCN baseline with an episodic memory to store previous samples, which are then interleaved when learning the newer ones.
• MIR (Aljundi et al., 2019a) replaces the random sampling strategy in ER with MIR sampling, which selects the samples in the memory that cause the highest forgetting, and performs ER on these samples.
• DER++ (Buzzega et al., 2020) augments the standard ER (Chaudhry et al., 2019) with an ℓ2 knowledge distillation loss on the previous logits.
• TFCL (Aljundi et al., 2019b) is a method for online, task-free continual learning. TFCL starts with an ER procedure and also includes a MAS-styled (Aljundi et al., 2018) regularization adapted for the task-free setting.

All ER-based strategies use a reservoir sampling buffer. We also experimented with a ring buffer and did not observe any significant differences.

Loss function All methods in our experiments optimize the ℓ2 loss, defined as follows. Let x and y ∈ R^H be the look-back and ground-truth forecast windows, and let ŷ be the model's prediction of the true forecast window.
The ℓ2 loss is defined as

ℓ(ŷ, y) = ‖ŷ − y‖²₂.

Experience replay baselines We provide the training details of the ER and DER++ baselines in this section. These baselines deploy a reservoir sampling buffer of 500 samples to store the observed samples (each sample is a pair of look-back and forecast windows). Let M be the episodic memory storing previous samples and B_t be a mini-batch sampled from M. ER minimizes the following loss:

L_ER = ℓ(f(x_t), y_t) + λ_ER · (1/|B_t|) Σ_{(x,y)∈B_t} ℓ(f(x), y),

where ℓ(·, ·) denotes the MSE loss and λ_ER is the trade-off parameter between current and past samples. DER++ further improves ER by adding a distillation loss (Hinton et al., 2015). For this purpose, DER++ also stores the model's forecasts in the memory and minimizes the following loss:

L_DER++ = L_ER + λ_DER++ · (1/|B_t|) Σ_{(x, ŷ′)∈B_t} ‖ŷ′ − f(x)‖²₂,

where ŷ′ denotes the forecast stored alongside x.
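The synthetic-data recipe can be sketched as follows. The concrete AR coefficients, noise scale, and segment length below are placeholders, since the exact values used for S-Abrupt and S-Gradual are not reproduced here:

```python
import random

def ar1_series(phi, n, x0=0.0, noise_std=0.1, seed=0):
    """Generate n samples from the AR(1) process X_t = phi * X_{t-1} + eps_t."""
    rng = random.Random(seed)
    xs, x = [], x0
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, noise_std)
        xs.append(x)
    return xs

def s_abrupt(phis=(0.1, 0.4, 0.6), segment=1000):
    """Concatenate AR(1) segments whose coefficient switches abruptly,
    then repeat the whole sequence so each distribution reappears
    (mirroring the structure of S-Abrupt; coefficients are assumed)."""
    series = []
    for rep in range(2):                 # each distribution reappears once
        for k, phi in enumerate(phis):
            x0 = series[-1] if series else 0.0
            series += ar1_series(phi, segment, x0=x0, seed=rep * 10 + k)
    return series
```

An S-Gradual analogue would interpolate between consecutive coefficients over a transition window instead of switching at a single time step.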

D.2 HYPER-PARAMETERS SETTINGS

We cross-validate the hyper-parameters on the ETTh2 dataset and use the resulting configuration for the remaining datasets. Particularly, we use the following configuration:

• Adapter's EMA coefficient γ = 0.9
• Gradient EMA for triggering the memory interaction γ′ = 0.3
• Memory triggering threshold τ = 0.75

We found that this hyper-parameter configuration matches the motivation behind the development of FSNet. In particular, the adapter's EMA coefficient γ = 0.9 can capture medium-range information to facilitate the current learning. Second, the gradient EMA coefficient for triggering the memory interaction, γ′ = 0.3, results in gradients accumulated over only a few recent samples. Lastly, a relatively high memory triggering threshold τ = 0.75 indicates that our memory-triggering condition detects substantial representation changes to store in the memory. The hyper-parameter cross-validation is performed via grid search over the following grid:

• Experience replay batch size (for ER and DER++): [2, 4, 8]
• Experience replay coefficient (for ER) λ_ER: [0.1, 0.2, 0.5, 0.7, 1]
• DER++ coefficient (for DER++) λ_DER++: [0.1, 0.2, 0.5, 0.7, 1]
• EMA coefficients for FSNet γ and γ′: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
• Memory triggering threshold τ: [0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]
• Number of filters per layer: 64
• Episodic memory size: 5000 (for ER, MIR, and DER++), 50 (for TFCL)

The remaining configurations, such as data pre-processing and optimizer settings, follow Zhou et al. (2021) exactly.

E.1 STANDARD DEVIATIONS

We report the standard deviations of the comparison experiment in Table 1, averaged over five runs. Overall, we observe that the standard deviations are quite small for all experiments.

E.2 COMPLEXITY COMPARISON

In this section, we analyze the memory and time complexity of FSNet.

