LEARNING IN TEMPORALLY STRUCTURED ENVIRONMENTS

Abstract

Natural environments have temporal structure at multiple timescales. This property is reflected in biological learning and memory but typically not in machine learning systems. We advance a multiscale learning method in which each weight in a neural network is decomposed as a sum of subweights with different learning and decay rates. Thus knowledge becomes distributed across different timescales, enabling rapid adaptation to task changes while avoiding catastrophic interference. First, we prove previous models that learn at multiple timescales, but with complex coupling between timescales, are equivalent to multiscale learning via a reparameterization that eliminates this coupling. The same analysis yields a new characterization of momentum learning, as a fast weight with a negative learning rate. Second, we derive a model of Bayesian inference over 1/f noise, a common temporal pattern in many online learning domains that involves long-range (power law) autocorrelations. The generative side of the model expresses 1/f noise as a sum of diffusion processes at different timescales, and the inferential side tracks these latent processes using a Kalman filter. We then derive a variational approximation to the Bayesian model and show how it is an extension of the multiscale learner. The result is an optimizer that can be used as a drop-in replacement in an arbitrary neural network architecture. Third, we evaluate the ability of these methods to handle nonstationarity by testing them in online prediction tasks characterized by 1/f noise in the latent parameters. We find that the Bayesian model significantly outperforms online stochastic gradient descent and two batch heuristics that rely preferentially or exclusively on more recent data. Moreover, the variational approximation performs nearly as well as the full Bayesian model, and with memory requirements that are linear in the size of the network. 

1. INTRODUCTION

Many online tasks facing both biological and artificial intelligence systems involve changes in data distribution over time. Natural environments exhibit correlations at a wide range of timescales, a pattern variously referred to as self-similarity, power-law correlations, and 1/f noise (Keshner, 1982). This is in stark contrast with the iid environments assumed by many machine learning (ML) methods, and with diffusion or random-walk environments that exhibit only short-range correlations. Moreover, biological learning systems are well tuned to the temporal statistics of natural environments, as seen in phenomena of human cognition including power laws in learning (Anderson, 1982), power-law forgetting (Wixted & Ebbesen, 1997), long-range sequential effects (Wilder et al., 2013), and spacing effects (Anderson & Schooler, 1991; Cepeda et al., 2008). An important goal is to incorporate similar inductive biases into ML systems for online or continual learning. This paper analyzes a framework for learning in temporally structured environments, multiscale learning, which for neural networks (NNs) can be implemented as a new kind of optimizer.

A common explanation for self-similar temporal structure in nature is that it arises from a mixture of events at various timescales. Indeed, many generative models of 1/f noise involve summing independent stochastic processes with varying time constants (Eliazar & Klafter, 2009). Accordingly, the multiscale optimizer comprises multiple learning processes operating in parallel at different timescales. In a NN, every weight w_j is replaced by a family of subweights ω_ij, each with its own learning rate and decay rate, that sum to determine the weight as a whole.
Learning at multiple timescales is a key idea in several theories in neuroscience, including conditioning (Staddon et al., 2002), learning (Benna & Fusi, 2016), memory (Howard & Kahana, 2002; Mozer et al., 2009), and motor control (Kording et al., 2007), and has also been exploited in ML (Hinton & Plaut, 1987; Rusch et al., 2022). The multiscale learner isolates and simplifies this idea by assuming that knowledge at different timescales evolves independently and that credit assignment follows gradient descent.

The first part of this paper (Sections 2 and 3) proves that three other models are formally equivalent to instances of the multiscale optimizer: a new variant of fast weights (cf. Ba et al., 2016; Hinton & Plaut, 1987), the model synapse of Benna & Fusi (2016), and momentum learning (Rumelhart et al., 1986; Qian, 1999). The insight behind these proofs is that each of these models can be written in terms of a linear update rule with a diagonalizable transition matrix. Thus the eigenvectors of this matrix correspond to states that evolve independently. By writing the state of the model as a mixture of eigenvectors, we effect a coordinate transformation that exactly yields the multiscale optimizer. These results imply that the complicated coupling among timescales assumed by some models can be superfluous. They also provide a new perspective on momentum learning, with implications for how and when it is beneficial and how it interacts with nonstationarity in the task environment.

In Section 4, we provide a normative grounding for multiscale learning in terms of Bayesian inference over 1/f noise. Our starting point is a generative model of 1/f noise as a sum of diffusion processes at different timescales. Exact Bayesian inference with respect to this generative process is possible using a Kalman filter (KF) that tracks the component processes jointly (Kording et al., 2007).
When learning a single environmental parameter θ, such as the mean reward for some action in a bandit task, this amounts to modeling θ(t) = Σ_{i=1}^n z_i(t), where each z_i is a diffusion process with a different characteristic timescale τ_i, and doing joint inference over Z = (z_1, ..., z_n). We then generalize this approach to an arbitrary statistical model, h(x, θ), where x is the input and θ ∈ R^m is a parameter vector to be estimated. For instance, h might be a NN with parameters θ. Our Bayesian model places a 1/f prior on θ (as a stochastic process), by assuming θ(t) = Σ_{i=1}^n z_i(t) for diffusion processes z_i ∈ R^m with characteristic timescales τ_i. We then do approximate inference over the joint state Z = (z_1, ..., z_n), using an extended Kalman filter (EKF) that linearizes h by calculating its Jacobian at each step (Singhal & Wu, 1989; Puskorius & Feldkamp, 2003). Next, we derive a variational approximation to the EKF that constrains the covariance matrix to be diagonal, and show how it extends the multiscale optimizer. Specifically, writing w_j and ω_ij for the current mean estimates of θ_j and z_ij (for weight j and timescale i), the variational update to each ω_ij follows that of the multiscale optimizer, with additional machinery for determining decay rates based on τ_i and adapting learning rates based on the current prior variance s²_ij(t).

In Section 5, we test our methods in online prediction and classification tasks with nonstationary distributions. In online learning, nonstationarity often manifests as poorer generalization performance on future data versus held-out data from within the training interval. Common solutions are to train on a window of fixed length (to exclude "stale" data) or to use stochastic gradient descent (SGD) with fixed learning rate and weight decay, which leads older observations to have less influence (Ditzler et al., 2015).
Here, we demonstrate that performance can be significantly improved by retaining all data and using a learning model that accounts for the temporal structure of the environment. We introduce nonstationarity in our simulations by varying the latent data-generating parameters according to 1/f noise. Thus an important caveat is that the task domains are matched to the Bayesian model. Notwithstanding, we test robustness by using a different set of timescales for task generation versus learning (Section 5.1), a generative process that mismatches the NN architecture (Section 5.2), and a construction of 1/f noise that differs from the sum-of-diffusions process the model assumes (Section 5.3). Results show the Bayesian methods (KF and EKF) outperform windowing and online SGD, as well as a novel heuristic of training the network on all past data with gradients weighted by recency. We also find the variational approximation performs nearly as well as the full model (Section 5.1) and scales well to a multilayer NN trained on real data (Section 5.3).

[Figure 1. Two-timescale learning of a single weight w with loss L = ½(T − w)². The weight is a sum of subweights ω_slow (yellow) and ω_fast (red). Initial learning is rapid, due to ω_fast. Because of decay and the shared error signal, knowledge is gradually transferred to ω_slow while ω_fast returns to zero. When the task switches (trial 151), ω_fast enables rapid adaptation while long-term knowledge is preserved in ω_slow. Thus the model recovers quickly on the second reversal (compare blue curve beginning on trials 1 vs 156). The general multiscale optimizer extends this idea to an array of faster and slower weights.]

2. THE MULTISCALE OPTIMIZER

Assume a statistical model ŷ(t) = h(x(t), w(t)) and loss function L(y, ŷ), where x(t) is the input on step t, w(t) is the parameter estimate, ŷ(t) is the model output, and y(t) is the target output. In a NN, w(t) is the vector of current weights. (Under the Bayesian framing in Section 4, w is the mean estimate of the optimal parameters θ.) For exposition, we assume the weights are updated by SGD,

w(t+1) = w(t) − α ∇_{w(t)} L(y(t), ŷ(t)),   (1)

and we henceforth abbreviate the gradient as ∂_{w(t)}L.
However, the following approach can be naturally composed with other optimizers, such as extensions of SGD or Hebbian learning, by replacing −α ∂_{w(t)}L with the appropriate update term.

The multiscale optimizer is motivated by the assumption that, in online learning tasks, the true or optimal parameters change over time, on multiple timescales. Accordingly, it expands each weight into a sum of subweights, w_j = Σ_i ω_ij, each with a different learning rate α_i and decay rate γ_i. Here j indexes weights in w, and i indexes timescales. The subweights evolve according to:

ω_ij(t+1) = γ_i ω_ij(t) − α_i ∂_{w_j(t)}L.   (2)

Each ω_ij has characteristic timescale τ_i := (−log γ_i)^{−1}. Note that ∂_{w_j(t)}L = ∂_{ω_ij(t)}L, so one can think of the gradient for w_j as being apportioned among the subweights (with total learning rate α = Σ_i α_i), or equivalently of each subweight following its own gradient.
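As a concrete illustration, the update rule above can be sketched in a few lines of NumPy. The class and parameter names here are ours, not from an official implementation; the target-tracking loop at the end is just a toy usage example.

```python
import numpy as np

class MultiscaleOptimizer:
    """Each weight w_j is a sum of subweights omega_ij, with per-timescale
    learning rate alpha_i and decay rate gamma_i (illustrative sketch)."""
    def __init__(self, n_weights, alphas, gammas):
        self.alphas = np.asarray(alphas, dtype=float)
        self.gammas = np.asarray(gammas, dtype=float)
        self.omega = np.zeros((len(alphas), n_weights))

    @property
    def w(self):
        # the weight as a whole: w_j = sum_i omega_ij
        return self.omega.sum(axis=0)

    def step(self, grad_w):
        # every subweight sees the same gradient dL/dw_j (since dL/domega_ij = dL/dw_j),
        # scaled by its own alpha_i, and decays toward zero at its own rate gamma_i
        self.omega = self.gammas[:, None] * self.omega - self.alphas[:, None] * grad_w

# toy usage: track a fixed target under square loss L = (target - w)^2 / 2
opt = MultiscaleOptimizer(1, alphas=[0.05, 0.3], gammas=[1.0, 0.5])
target = 2.0
for _ in range(300):
    opt.step(opt.w - target)   # gradient of L with respect to w
```

With γ_i = 1 a subweight reduces to plain SGD; γ_i < 1 gives a leaky, faster channel.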

2.1. FAST WEIGHTS

A potentially important special case of multiscale learning arises with two timescales, w = ω_slow + ω_fast. We assume γ_slow = 1 (no decay) and α_fast > α_slow. Thus each ω_slow,j can be thought of as the original weight, which is augmented by ω_fast,j, a second channel between the same neurons that both learns and decays rapidly. The fast weight enables the system to adapt quickly to distribution shifts while resisting catastrophic forgetting (Figure 1). This model is conceptually similar to the fast weights approach of Ba et al. (2016) and Hinton & Plaut (1987). In that work, the fast weights are updated by a different mechanism (Hebbian learning) than the primary weights, and they act as a memory of recent hidden states in a recurrent network. In the present conception, fast weights optimize the same loss as the primary weights, only with different temporal properties, and they act as a memory for recent learning signals (e.g., loss gradients). Thus they are perhaps better suited for handling distribution shifts of the sort considered here.
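The handoff dynamic described above (and shown in Figure 1) can be reproduced in a few lines; the constants here are illustrative, not the paper's settings.

```python
# Two-timescale dynamics: the fast, decaying subweight absorbs new learning,
# then hands knowledge off to the non-decaying slow subweight.
alpha_slow, alpha_fast, gamma_fast = 0.02, 0.4, 0.7
w_slow, w_fast = 0.0, 0.0

for _ in range(1000):               # train toward target T = 1 under L = (T - w)^2 / 2
    grad = (w_slow + w_fast) - 1.0  # dL/dw, shared by both subweights
    w_slow = w_slow - alpha_slow * grad               # gamma_slow = 1: no decay
    w_fast = gamma_fast * w_fast - alpha_fast * grad
```

Because γ_slow = 1, the slow subweight retains the target even after the gradient signal vanishes, while ω_fast decays back to zero.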

3. EQUIVALENCE RESULTS

3.1. BENNA-FUSI SYNAPSE

Benna & Fusi's (2016) model synapse is designed to capture how biochemical mechanisms in real synapses implement a cascading hierarchy of timescales, and has been adopted in ML for continual reinforcement learning (Kaplanis et al., 2018; 2019). We consider a single weight w in a network, suppressing the index j. The Benna-Fusi model assumes that the information in w is maintained in a 1d hierarchy of variables u_1, ..., u_n, each dynamically coupled to its immediate neighbors:

C_1 (u_1(t+1) − u_1(t)) = g_1 (u_2(t) − u_1(t)) − ∂_{w(t)}L   (3)
C_k (u_k(t+1) − u_k(t)) = g_{k−1} (u_{k−1}(t) − u_k(t)) + g_k (u_{k+1}(t) − u_k(t))   (4)

for 2 ≤ k ≤ n, with g_n = 0. The external behavior of the synapse comes from u_1 alone (i.e., w = u_1), while u_{2:n} act as stores with progressively longer timescales. This update rule can be rewritten as

u(t+1) = T u(t) − d(t),   (5)

with transition matrix T determined by the coefficients in Equations 3 and 4, and external signal d(t) defined by d_1(t) = (1/C_1) ∂_{w(t)}L and d_{2:n} ≡ 0. It can be shown that the transition matrix is diagonalizable, T = V Λ V^{−1}, with eigenvalues Λ_ii = λ_i < 1 (see Appendix A). We can further enforce V_{1•} = 1, for a purpose explained below. We refer to the eigenvectors (columns V_{•i}) as modes of the system, because they are preserved over time up to a scalar. That is, if the initial state is proportional to mode i, then in the absence of external signal (d ≡ 0), the system will remain in that mode, decaying exponentially with rate factor λ_i:

u(0) ∝ V_{•i}  ⟹  ∀t: u(t) = λ_i^t u(0).   (6)

In general, any state can be written uniquely as a linear combination of modes, u = Σ_i ω_i V_{•i} = V ω. Therefore, reparameterizing the model as ω := V^{−1} u yields the simplified update equation:

ω(t+1) = Λ ω(t) − V^{−1} d(t),   (7)

where V^{−1} d(t) = (1/C_1) [V^{−1}]_{•1} ∂_{w(t)}L. Because Λ is diagonal, there is no cross-talk between the modes, unlike in the original dynamics.
Thus we have derived an instance of the multiscale optimizer, with subweights ω_i(t), decay rates λ_i, and learning rates α_i = (1/C_1) [V^{−1}]_{i1}. The assumption above, V_{1•} = 1, implies w = u_1 = Σ_i ω_i, so the models agree on the external behavior of the weight as a whole. Figure 2 illustrates the translation between the two models.
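The equivalence can be checked numerically. The sketch below builds a small transition matrix T from illustrative C_k and g_k values (ours, not Benna & Fusi's), simulates the coupled dynamics alongside the decoupled eigenmode dynamics under the same gradient stream, and confirms they agree on the external weight w = u_1.

```python
import numpy as np

n = 4
C = 2.0 ** np.arange(n)        # illustrative coefficients C_k
g = 0.3 / 2.0 ** np.arange(n)  # illustrative couplings g_k ...
g[-1] = 0.0                    # ... with g_n = 0

# transition matrix T of u(t+1) = T u(t) - d(t), read off from Eqs. 3-4
T = np.zeros((n, n))
for k in range(n):
    g_left = g[k - 1] if k > 0 else 0.0
    T[k, k] = 1.0 - (g_left + g[k]) / C[k]
    if k > 0:
        T[k, k - 1] = g_left / C[k]
    if k < n - 1:
        T[k, k + 1] = g[k] / C[k]

lam, V = np.linalg.eig(T)
lam, V = lam.real, V.real      # T is similar to a symmetric matrix, so its spectrum is real
V = V / V[0]                   # enforce V_{1,i} = 1, so that w = u_1 = sum_i omega_i
Vinv = np.linalg.inv(V)

rng = np.random.default_rng(0)
u = np.zeros(n)                # coupled Benna-Fusi state
omega = np.zeros(n)            # decoupled multiscale subweights
for grad in rng.normal(size=50):          # arbitrary gradient stream
    d = np.zeros(n)
    d[0] = grad / C[0]
    u = T @ u - d                          # Eq. 5
    omega = lam * omega - Vinv[:, 0] * (grad / C[0])   # Eq. 7
```

After any number of steps the two parameterizations give the same external weight, u_1 = Σ_i ω_i.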

3.2. MOMENTUM LEARNING

The standard rationale for momentum learning is to smooth updates over time, so that oscillations along directions of high curvature cancel out while progress can be made in directions with consistent gradients (Rumelhart et al., 1986). To simplify notation, we again focus on a single weight w in the network, suppressing the index j. The momentum g is defined as an exponentially filtered running average of gradients, with the weight update determined by the current momentum:

g(t+1) = β g(t) + (1 − β) ∂_{w(t)}L   (8)
w(t+1) = w(t) − η g(t+1).   (9)

This formulation is equivalent to one in which the update ∆w(t) = w(t+1) − w(t) includes a portion of the previous update: ∆w(t) = −α ∂_{w(t)}L + β ∆w(t−1), with α = η(1 − β). Paralleling the analysis in Section 3.1, we write the state of the momentum optimizer as [w, g] and use Equations 8 and 9 to obtain the update rule:

[w(t+1), g(t+1)]ᵀ = [[1, −ηβ], [0, β]] [w(t), g(t)]ᵀ + [−η(1 − β), 1 − β]ᵀ ∂_{w(t)}L.   (10)

The transition matrix has eigenvectors [1, 0]ᵀ with eigenvalue 1, and [1, (1 − β)/(ηβ)]ᵀ with eigenvalue β. Now use this eigenbasis to define a reparameterization:

[w, g]ᵀ = [[1, 1], [0, (1 − β)/(ηβ)]] [ω_slow, ω_fast]ᵀ.   (11)

Substitution into Equation 10 yields the reparameterized update rule:

[ω_slow(t+1), ω_fast(t+1)]ᵀ = [[1, 0], [0, β]] [ω_slow(t), ω_fast(t)]ᵀ − [η, −ηβ]ᵀ ∂_{w(t)}L.   (12)

Thus we recover the fast-weight optimizer, with decay γ_fast = β and learning rates α_slow = η and α_fast = −ηβ. The negative fast learning rate is perhaps surprising but can be understood as follows: when α_fast < 0, the subweights learn in opposite directions, with the latent knowledge in ω_slow overshooting the observable knowledge in w = ω_slow + ω_fast. As ω_fast decays toward 0, w catches up to ω_slow, so that the model appears to continue learning from past input, just as it would with momentum.
This analysis highlights the contrasting rationales of these two methods: Learning at multiple timescales is motivated by an expectation of positive autocorrelation in the environment, whereas momentum is effective at smoothing out negative autocorrelation in the gradient signal. 
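The equivalence between the two forms is easy to verify by simulation. The sketch below feeds both an arbitrary fixed gradient stream, which suffices because the equivalence is a property of the linear system itself, independent of where the gradients come from.

```python
import numpy as np

eta, beta = 0.1, 0.9
rng = np.random.default_rng(1)
grads = rng.normal(size=100)             # arbitrary gradient stream

# momentum form (Eqs. 8-9)
w, g = 0.0, 0.0
w_momentum = []
for grad in grads:
    g = beta * g + (1 - beta) * grad
    w = w - eta * g
    w_momentum.append(w)

# fast-weight form (Eq. 12): gamma_fast = beta, alpha_slow = eta, alpha_fast = -eta*beta
o_slow, o_fast = 0.0, 0.0
w_fastweight = []
for grad in grads:
    o_slow = o_slow - eta * grad
    o_fast = beta * o_fast + eta * beta * grad   # negative learning rate: -(alpha_fast)*grad
    w_fastweight.append(o_slow + o_fast)
```

The trajectories of w = ω_slow + ω_fast coincide with the momentum trajectories exactly (up to floating-point error).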

4. BAYESIAN MULTISCALE OPTIMIZER

We turn now to a normative analysis of learning at multiple timescales, based on Bayesian inference over 1/f noise. The Bayesian model introduced here assumes that the latent parameters θ governing the observed data in some learning task vary over time according to 1/f noise. When the statistical model h(x, θ) is linear in θ, exact Bayesian inference is possible with a KF that maintains a posterior over an expanded representation of θ. When the model is nonlinear, approximate Bayesian inference is achieved by an EKF that uses a linear approximation of h. We then show that a variational approximation of the KF or EKF, in which the posterior covariance matrix is constrained to be diagonal, yields an extension of the multiscale optimizer that adapts its learning rates online by tracking uncertainty.

4.1. GENERATIVE MODEL FOR 1/f NOISE

Let z_i(t) be an Ornstein-Uhlenbeck process (i.e., diffusion with decay), with timescale (inverse decay rate) τ_i and diffusion rate σ_i², defined by the following stochastic differential equation:

dz_i = −τ_i^{−1} z_i dt + σ_i dW.   (13)

Here W(t) is a standard Wiener process (Brownian motion). As a Gaussian process, z_i has kernel E[z_i(t) z_i(t+s)] ∝ e^{−|s|/τ_i}, implying exponentially decaying autocorrelations. However, a superposition of such processes at different timescales can have qualitatively different properties (Eliazar & Klafter, 2009). In particular, consider

ξ(t) = Σ_{i=1}^n z_i(t),   (14)

where τ_i = ν^i and σ_i = ν^{−i/2} for a chosen ν > 1, and n is an integer such that τ_n is very large. We show in Appendix B that ξ has power-law (i.e., long-range) autocorrelations, E[ξ(t) ξ(t+s)] ∝ |s|^{−1} for s ≪ τ_n, and accordingly a power spectrum that is well approximated by 1/f for frequencies f ≫ τ_n^{−1}. Moreover, m independent copies of this process constitute m-dimensional 1/f noise, due to the rotational invariance of multidimensional Ornstein-Uhlenbeck processes.
This construction affords a flexible generative model of nonstationarity in a variety of online learning domains, by applying it to the latent parameters governing the relationships among observable variables. Assume we receive observations x(t), y(t) that we wish to model with a statistical model h that is parameterized by θ ∈ R^m:

y(t) = h(x(t), θ(t)).   (15)

For example, h may be a NN with weights θ, input x, and target output y. The generative side of our Bayesian model posits latent variables z_i (i = 1, ..., n) such that each z_i is an Ornstein-Uhlenbeck process in R^m with timescale τ_i, and these processes sum to determine the original parameters:

θ(t) = Σ_{i=1}^n z_i(t).   (16)

These assumptions imply that θ follows a 1/f process, and they entail an expanded state representation, Z = (z_1, ..., z_n) ∈ R^{nm}, that enables efficient inference as described in Section 4.2.
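Sampling from this generative model is straightforward using an exact discretization of each OU component (the function name, seed, and constants below are illustrative choices of ours):

```python
import numpy as np

def sample_1f(steps, nu=2.0, n_scales=12, seed=0):
    """Sample an approximate 1/f signal as a sum of OU components with
    tau_i = nu**i and sigma_i = nu**(-i/2) (illustrative discretization)."""
    rng = np.random.default_rng(seed)
    i = np.arange(1, n_scales + 1)
    tau = nu ** i
    decay = np.exp(-1.0 / tau)
    sigma2 = nu ** (-i.astype(float))
    # exact per-step noise variance of each discretely observed OU component
    noise_sd = np.sqrt(sigma2 * tau / 2.0 * (1.0 - decay ** 2))
    z = np.zeros(n_scales)
    out = np.empty(steps)
    for t in range(steps):
        z = decay * z + noise_sd * rng.normal(size=n_scales)
        out[t] = z.sum()                 # xi(t) = sum_i z_i(t)
    return out

xi = sample_1f(4096)
```

With this parameterization each component contributes equal stationary variance (σ_i² τ_i / 2 = 1/2), so slow and fast fluctuations are equally prominent in the sampled signal.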

4.2. INFERENCE OVER 1/f NOISE VIA EXTENDED KALMAN FILTER

We consider Bayesian methods that adopt the construction in Section 4.1 as a generative model to account for nonstationarity. Equations 13 and 14 describe a linear dynamical system with state Z = (z_1, ..., z_n) ∈ R^n. If ξ is directly observed at discrete intervals, then optimal Bayesian online prediction of each ξ(t) based on all preceding observations can be implemented by a KF over Z (Kording et al., 2007) (see Appendix D). We extend this approach to arbitrary statistical models with nonstationarity in their latent parameters, as in Equations 15 and 16. When h is linear in θ (and hence in Z), such as in the regression task and one-layer perceptron model of Section 5.1, exact inference is possible with a standard KF (Appendix D). For a general h, such as a multilayer NN, we use an EKF. The EKF makes a local linear approximation of h based on its Jacobian, the matrix of gradients of the predictions ŷ with respect to θ (Appendix E). We use Ollivier's (2018) generalization of the EKF that replaces Gaussian observation noise with any exponential family p(y|ŷ), which is better suited for modeling discrete outcomes such as the classification tasks of Sections 5.2 and 5.3.

4.3. VARIATIONAL APPROXIMATION

Finally, we derive a variational approximation of the EKF that extends the multiscale optimizer and affords efficient implementation in large NNs (Appendix F). As is standard, the EKF maintains an iterative prior over the latent state based on all previous observations:

p(Z(t) | x_{<t}, y_{<t}) ∼ N(ω(t), S(t)).   (17)

The mean, ω(t), is the vector of current subweight estimates in the network, while S(t) captures their joint uncertainty and hence determines updates (as a preconditioner on the gradient). We use variational inference to approximate the distribution in Equation 17 by one in which the covariance is constrained to be diagonal, written as diag(s²(t)). This reduces the complexity from O(m²n²) to O(mnk) (the size of the Jacobian, where k is the size of the output layer).

The simplest case is a KF that tracks a single 1/f variable, with no inputs or latent variables; that is, y(t) = ξ(t) ∈ R as in Equation 14. Appendix F.1 derives the variational update rule as

ω_i(t+1) = e^{−1/τ_i} ω_i(t) + α_i(t) (y(t) − ŷ(t)),   (18)

where y(t) − ŷ(t) = −∂_w L (i.e., square loss), and the learning rates are given by

α_i(t) = e^{−1/τ_i} s_i²(t) / Σ_{i′} s_{i′}²(t).   (19)

For the EKF with a general nonlinear model h(x, θ), a slight extension of the variational approximation, derived in Appendix F.3, provides the following update:

ω_ij(t+1) = e^{−1/τ_i} ω_ij(t) − e^{−1/τ_i} s_ij²(t) ∂_{w_j(t)}L.   (20)

Here diag(s²) is the diagonal variational approximation of the posterior variance after observing y(t), and L is the negative log-likelihood under the EKF's Gaussian approximate output distribution. Importantly, the update rule for the variance uses the diagonal of the precision matrix but can be calculated without matrix inversion, which is relevant to scaling up to large networks.
Thus the variational method amounts to decomposing every weight in the network as a sum of subweights, w_j = Σ_i ω_ij, that learn independently according to their individual gradients, with decay rates e^{−1/τ_i} determined by the timescales τ_i and learning rates derived from S(t). This is a special case within the family of multiscale optimizers, with additional machinery to adapt the learning rates based on current uncertainty.
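The scalar case (Equations 18-19) can be sketched as follows. This is our illustrative reconstruction, assuming noiseless observations and steady-state prior variances as initialization; the class name and hyperparameters are ours.

```python
import numpy as np

class VariationalMultiscale1f:
    """Variational multiscale tracker for a directly observed 1/f signal
    (scalar case of Eqs. 18-19, noiseless observations); a minimal sketch."""
    def __init__(self, nu=2.0, n_scales=10):
        i = np.arange(1, n_scales + 1)
        self.tau = nu ** i                       # timescales tau_i = nu^i
        self.decay = np.exp(-1.0 / self.tau)     # per-step decay e^{-1/tau_i}
        sigma2 = 1.0 / self.tau                  # sigma_i^2 = nu^{-i}
        # per-step diffusion variance of each discretely sampled OU component
        self.q = sigma2 * self.tau / 2.0 * (1.0 - self.decay ** 2)
        self.omega = np.zeros(n_scales)          # subweight means omega_i
        self.s2 = self.q / (1.0 - self.decay ** 2)   # steady-state prior variance

    def predict(self):
        return self.omega.sum()                  # y_hat = sum_i omega_i

    def update(self, y):
        total = self.s2.sum()
        gain = self.s2 / total                   # Kalman gain for the summed observation
        err = y - self.predict()
        # Eq. 18, with alpha_i(t) = e^{-1/tau_i} s_i^2 / sum_i' s_i'^2 (Eq. 19)
        self.omega = self.decay * (self.omega + gain * err)
        s2_post = self.s2 * (1.0 - gain)         # diagonal of the posterior variance
        self.s2 = self.decay ** 2 * s2_post + self.q   # evolve variance to the next step

# toy usage: track a constant signal
tracker = VariationalMultiscale1f()
for _ in range(200):
    tracker.update(1.0)
```

Because the slowest components barely decay, the prediction settles close to the constant target while the learning rates adapt to the shrinking uncertainty.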

5.1. REGRESSION TASK

As a simple demonstration, we created an online linear regression task with 10 features (including a bias term), in which the true weights β varied over time according to 1/f noise using Equation 16. The outcome was generated as y = xᵀβ (no noise term was needed because β is inherently noisy). The corresponding predictive model is a one-layer perceptron, which we write as ŷ = xᵀw to distinguish the weight estimates (w) from the true parameters (β). We model the data using the perceptron and compare methods for optimization, using square loss, L = ½(y − ŷ)².

We tested two baseline training methods, representing common heuristic practices with nonstationary data (Ditzler et al., 2015; Parisi et al., 2019). First, we tested a batch learning method that uses a fixed memory horizon H. To produce a prediction on step t, the batch learner fits the perceptron to trials t−H through t−1. Figure 3A shows performance is U-shaped: accuracy suffers with short horizons because of sampling error, but it also suffers with longer horizons because older observations are less valid. Second, we tested SGD, in which the weights are updated once after each step t, based on x(t) and y(t). Figure 3B shows performance is best with an intermediate learning rate, which roughly corresponds to assuming the environment changes on a single characteristic timescale (see Appendix C).

As applied to the perceptron, the Bayesian 1/f model described in Section 4.2 decomposes the weight for each feature j into subweights, w_j = Σ_i ω_ij, and tracks the ω_ij jointly with a KF (generalizing Dayan & Kakade, 2000). The subweights combine to predict the outcome on each step: ŷ(t) = Σ_ij x_i(t) ω_ij(t). Updating online (after every trial), this exact Bayesian solution explains 54.0% more variance in the outcome than the best batch learner, and 25.7% more than the best parameterization of SGD (Figure 3C). Moreover, the variational model that constrains the KF covariance matrix to be diagonal (see Appendix F.2) performs nearly (96.9%) as well as the full Bayesian model.

Finally, we tested a discounting method, similar to windowed batch optimization except that all past trials were used for training, discounted by a function of lag. Because the 1/f environment has power-law correlations, we weighted each observation t − k by k^{−a} and optimized a. This method also significantly outperforms windowed batch and SGD, showing that accounting for an environment's autocorrelation function can achieve much of the advantage of the Bayesian approach. Nevertheless, the full KF and variational methods still outperform discounting, by 5.5% and 2.2% respectively.
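For the linear model, the discounting heuristic amounts to a power-law weighted least-squares fit. The sketch below is our illustrative version (function names are ours); the usage at the end exploits the fact that on exactly linear, noise-free data the weighting leaves the fit unchanged, which serves as a sanity check.

```python
import numpy as np

def discount_weights(n_past, a):
    """Weight for the observation at lag k is k**(-a); lag 1 = most recent trial."""
    lags = np.arange(1, n_past + 1)
    return lags.astype(float) ** (-a)

def weighted_fit(X, y, a):
    """Power-law-discounted least squares: scale each trial by sqrt of its weight.
    Rows of X are assumed ordered oldest to newest."""
    w = discount_weights(len(y), a)[::-1]    # oldest trial gets the largest lag
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(X * sw, y * sw[:, 0], rcond=None)
    return coef

# sanity check on exactly linear data
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
beta = np.array([1.0, -2.0, 0.5])
coef = weighted_fit(X, X @ beta, a=0.5)
```

In the nonstationary setting, the exponent a would be tuned to the environment's autocorrelation function, as in Section 5.1.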

5.2. LINEAR CLASSIFICATION TASK

Next, we investigated an online 10-way classification task with 10 features (including a bias term). The data were generated by first sampling the class, y(t) ∼ softmax(e(t)), and then sampling the feature vector, x(t) | y(t) ∼ N(µ_{y(t)}, I). The logits e_j and the feature-class means µ_ij were independently sampled from 1/f processes (using Equation 16), so that there was nonstationarity in both p(y) and p(x|y). The scaling of e and µ was chosen to equate the maximum possible performance based on perfect knowledge of either one alone (both yielding average loss L ≈ 1). For the predictive model, we used a one-layer perceptron with a softmax output layer, ŷ = softmax(xᵀW), where W is a matrix of learnable feature-class weights. We assumed cross-entropy loss, L(y, ŷ) = −log ŷ_y.

The batch method trained the network on trials t − H through t − 1 until convergence before predicting y(t). Weight decay was included for regularization, optimized for each value of H. Figure 4A shows a U-shaped pattern of performance, reflecting the tradeoff between sampling error and stale data. We also used weight decay with SGD, optimized for each learning rate. Figure 4B again shows a U-shaped pattern of performance. The variational EKF for this task and model is derived in Appendix F.3. We applied ℓ2 regularization to the prior on each time step, on par with the SGD and batch optimization methods. Figure 4C shows the variational method outperforms the other two.

5.3. MNIST CLASSIFICATION

Finally, we tested our methods on classifying a stream of handwritten MNIST digits (LeCun et al., 2010). We created a nonstationary online learning task by sampling an example from the 10-way MNIST training set on each time step according to class logits that followed 1/f noise (Figure 5A). For a predictive model, we used a convolutional neural network (CNN) with two convolution layers followed by two dense layers, with 824,458 parameters.

This experiment provides a more stringent test of the present methods in several ways. First, it tests whether the EKF's linear approximation and the variational diagonal-variance approximation perform well on a multilayer NN, and whether the algorithm is efficient with a moderately large number of parameters. Second, the predictive model the optimizer is doing inference over is unrelated to how the images and labels were generated. Third, the sampling procedure for 1/f noise did not accord with the additive generative process the model assumes.

The variational EKF was compared to SGD with momentum, in both 1/f and standard iid environments. Hyperparameters for both methods (noise variance for the EKF, learning and momentum rates for SGD) were optimized separately for the two environments. Because the models learn quickly, we evaluated them (within each replication) on a sequence comprising only a random subset of the MNIST training set. Performance was measured by top-1 error rate. This should be interpreted as generalization (i.e., test) performance, because each item was observed only once. The variational EKF outperforms SGD by 1.4% in the iid environment, because its tracking of uncertainty enables more effective gradient steps. However, its advantage jumps to 2.4% in the 1/f environment, showing once again that it is better able to leverage dynamics at multiple scales.

6. CONCLUSIONS

Our analytic and simulation results demonstrate how online learning performance in nonstationary environments can be improved by incorporating a model of temporal structure. The Bayesian 1/f model amounts to distributing knowledge across multiple timescales, and the variational EKF enables approximate implementation in a neural network using subweights with different learning and decay rates. The variational EKF extends the multiscale optimizer, which is closely related to previous models in both neuroscience and ML, and in some cases is equivalent to them despite being simpler in having no coupling between timescales.

We have implemented the variational EKF optimizer in JAX in a format compatible with Optax. In the MNIST simulations of Section 5.3, we find our optimizer code (with 8 timescales) is actually 1.6% faster than Optax's off-the-shelf SGD, in compute time per example. Note also that the multiplexing of subweights is not expensive relative to current optimizers (e.g., Adam; Kingma & Ba, 2015), which also store multiple variables for each weight. In sum, the multiscale optimizer and variational EKF enjoy a combination of normative, heuristic, and biological justification, good performance, and computational efficiency.

Our ongoing work aims to extend the theory in several ways. Chang et al. (2022) compare the present variational method to the fully decoupled EKF of Puskorius & Feldkamp (2003). Another possible variational method is to assume a block-diagonal covariance matrix that maintains covariance information only between subweights (timescales) within each weight, so that computational complexity still scales linearly with network size. Finally, the present method is not limited to 1/f noise but generalizes to other power laws (e.g., 1/f^β) by appropriate choice of the timescales τ_i and noise variances σ_i in the generative model (see Appendix B).
If, for example, data or theory is available bearing on the power spectrum of the dynamics in a given domain, the optimizer could be tuned accordingly.

A DIAGONALIZATION OF BENNA-FUSI TRANSITION MATRIX

This section provides details on diagonalizing the transition matrix of the update rule for the Benna & Fusi (2016) model, and the corresponding reparameterization in terms of the eigenbasis. First, reparameterize the Benna-Fusi model so that its transition matrix is symmetric, as follows. Recursively define

b_k = 1 for k = 1;  b_k = b_{k−1} C_k / C_{k−1} for 1 < k ≤ n,   (21)

and write the state of the Benna-Fusi synapse as

φ = (√(b_k) u_k)_{1≤k≤n}.   (22)

The update becomes

φ(t+1) = Γ φ(t) − B d(t),   (23)

with

Γ_{k,k} = 1 − (g_{k−1} + g_k)/C_k   (24)
Γ_{k−1,k} = Γ_{k,k−1} = g_{k−1} / √(C_{k−1} C_k).   (25)

Symmetry of Γ implies it has an orthogonal eigenbasis, {ψ_1, ..., ψ_n}, with corresponding eigenvalues λ_1, ..., λ_n. Because the scaling of eigenvectors is arbitrary, we can enforce ψ_{i,1} = 1 for all i. To translate the eigenbasis back to u, define B = diag(√b), so that φ = B u and T = B^{−1} Γ B. It is then easily verified that V_{•i} = B^{−1} ψ_i is an eigenvector of T with eigenvalue λ_i. Therefore T = V Λ V^{−1}, as claimed in the main text. Note that the choice ψ_{i,1} = 1 entails V_{1,i} = 1 (because b_1 = 1). Finally, Ψ = [ψ_1, ..., ψ_n] is invertible because it comprises a basis, and therefore so is V = B^{−1} Ψ. Therefore the reparameterization u → ω = V^{−1} u is well-defined.

B GENERATIVE MODEL FOR 1/f NOISE

Consider a single Ornstein-Uhlenbeck (OU) process z(t) described by Equation 13 with σ = 1. The covariance function of z is given by

E[z(t) z(t+s)] = ∫_{−∞}^t e^{−(t−t′)/τ} e^{−(t+s−t′)/τ} dt′   (27)
              = (τ/2) e^{−s/τ}.   (28)

Note that this expression decays exponentially with the lag s, yielding short-range autocorrelations. The power spectrum of z, as a function of frequency f, is the Fourier transform of the covariance function:

P_z(f) = ∫_R (τ/2) e^{−|s|/τ} e^{−2πifs} ds   (29)
       = 1 / (τ^{−2} + (2πf)²),   (30)

where 2πf is the angular frequency.
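Equation 30 can be confirmed by numerically integrating the Fourier transform of the covariance function in Equations 27-28. The grid, truncation point, and tolerance below are illustrative choices of ours.

```python
import numpy as np

# Numerical check of Eqs. 29-30: the Fourier transform of (tau/2) e^{-|s|/tau}
# equals 1 / (tau^{-2} + (2 pi f)^2).
tau, f = 5.0, 0.3
s = np.linspace(0.0, 200.0, 400_001)     # half-line grid; the integrand is even in s
integrand = (tau / 2.0) * np.exp(-s / tau) * np.cos(2.0 * np.pi * f * s)
# trapezoidal rule, doubled to account for the negative half-line
numeric = 2.0 * np.sum((integrand[:-1] + integrand[1:]) / 2.0 * np.diff(s))
closed_form = 1.0 / (tau ** -2 + (2.0 * np.pi * f) ** 2)
```

Truncating at s = 40τ makes the neglected tail negligible, so the numerical and closed-form values agree to within the quadrature error.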
To define a generative model of 1/f noise, we define a continuous mixture model over a family of OU processes $(z_\tau)_{0 < \tau \le T}$ for some large $T$:

$$\xi(t) = \int_0^T 2\tau^{-1}\, z_\tau(t)\, d\tau. \tag{31}$$

The power spectrum of $\xi$ is then a mixture over the component spectra $P_{z_\tau}$:

$$P_\xi(f) = \int_0^T 4\tau^{-2}\, P_{z_\tau}(f)\, d\tau \tag{32}$$
$$= \int_0^T \frac{4\tau^{-2}}{\tau^{-2} + (2\pi f)^2}\, d\tau \tag{33}$$
$$= \frac{2}{\pi f} \tan^{-1}(2\pi f T)$$

which is approximately $1/f$ for $f \gg 1/T$, i.e. for all but very low frequencies. We next define a discrete approximation of the continuum mixture model in Equation 31, $\xi(t) = \sum_i z_i(t)$, where $z_i$ has timescale $\tau_i$ and scaling parameter $\sigma_i$. To approximate a 1/f spectrum, $\sigma_i^2 (\tau_{i+1} - \tau_i)^{-1}$ should scale as $4\tau_i^{-2}$, the squared weight density in Equation 31 (because power is additive and proportional to $\sigma^2$). For example, the $\tau_i$ could be arithmetically spaced with $\sigma_i \propto \tau_i^{-1}$. Instead, we assume geometric spacing, with $n$ components defined by $\tau_i = \nu^i$ and $\sigma_i = 2\rho\tau_i^{-1/2}$, where $\rho^2$ is a hyperparameter determining overall steady-state variance. Figure 6 illustrates this construction and exemplifies the accuracy of the discrete approximation. To define a generative model of 1/f noise in $\mathbb{R}^m$, we assume an independent copy of this process for each dimension. The resulting joint process is rotationally symmetric, a property inherited from OU processes. That is, any linear combination $\sum_j c_j \xi_j$ is a 1d 1/f process.
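The accuracy of the geometric discretization can be checked numerically by summing the component Lorentzian spectra and fitting the log-log slope, which should be close to −1 over frequencies well inside $(1/\tau_n, 1/\tau_1)$. A sketch under illustrative settings (ν = 2, ρ = 1, n = 20):

```python
import numpy as np

# Sketch: geometrically spaced OU components (tau_i = nu^i, sigma_i = 2 rho tau_i^{-1/2})
# approximately produce a 1/f power spectrum. nu = 2, rho = 1, n = 20 are
# illustrative settings.
nu, rho, n = 2.0, 1.0, 20
tau = nu ** np.arange(1, n + 1)
sigma2 = 4 * rho**2 / tau                      # sigma_i^2

# Mixture spectrum: sum_i sigma_i^2 / (tau_i^-2 + (2 pi f)^2), one Lorentzian per component
f = np.logspace(-4, -2, 200)                   # band well inside (1/tau_n, 1/tau_1)
P = sum(s2 / (t**-2.0 + (2 * np.pi * f) ** 2) for s2, t in zip(sigma2, tau))

slope = np.polyfit(np.log(f), np.log(P), 1)[0]
assert abs(slope + 1.0) < 0.1                  # log-log slope near -1, i.e. P proportional to 1/f
```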

C KALMAN FILTER FOR A SINGLE TIMESCALE

Start as in Appendix B with a single OU process $z$, with $\sigma = 1$, and assume it is observed at unit intervals $(y_t)_{t \in \mathbb{N}}$ with Gaussian observation noise of variance $\eta^2$. Assume a conjugate iterative prior: $z(t) \mid y_{<t} \sim \mathcal{N}\left(w(t), s^2(t)\right)$. The posterior after each observation is

$$z(t) \mid y_{\le t} \sim \mathcal{N}\left(\frac{\eta^2 w(t) + s^2(t)\, y(t)}{s^2(t) + \eta^2},\; \frac{s^2(t)\,\eta^2}{s^2(t) + \eta^2}\right).$$

Evolving the process to time $t+1$ amounts to decay by $e^{-1/\tau}$ and accumulation of new noise. At each time $t' \in [t, t+1]$, variance from the noise appearing at time $t'$ (i.e., from $dW(t')$ in Equation 13) decays by a factor $e^{-2(t+1-t')/\tau}$ by time $t+1$. Therefore the total accumulated variance is:

$$\int_t^{t+1} e^{-2(t+1-t')/\tau}\, dt' = \frac{\tau}{2}\left(1 - e^{-2/\tau}\right). \tag{38}$$

Therefore the prior for the next observation is

$$z(t+1) \mid y_{\le t} \sim \mathcal{N}\left(e^{-1/\tau}\, \frac{\eta^2 w(t) + s^2(t)\, y(t)}{s^2(t) + \eta^2},\; e^{-2/\tau}\, \frac{s^2(t)\,\eta^2}{s^2(t) + \eta^2} + \frac{\tau}{2}\left(1 - e^{-2/\tau}\right)\right) \tag{39}$$
$$= \mathcal{N}\left(w(t+1), s^2(t+1)\right).$$

We thus obtain the recursion

$$w(t+1) = e^{-1/\tau}\, \frac{\eta^2 w(t) + s^2(t)\, y(t)}{s^2(t) + \eta^2} \tag{41}$$
$$= e^{-1/\tau}\left(w(t) + \alpha(t)\left(y(t) - w(t)\right)\right).$$

This is a temporal-difference learning rule, or gradient descent on the squared error $\frac{1}{2}(y(t) - w(t))^2$, with learning rate $\alpha(t) = \frac{s^2(t)}{s^2(t) + \eta^2}$. This connection between temporal-difference learning and the Kalman filter is well known (Sutton, 1992). The steady state for $s^2$ is given by the solution to $s^2(t) = s^2(t+1)$, a quadratic with one positive root. If $s^2(0)$ is initialized to this value, then $\alpha(t)$ will be constant. Note that this algorithm (and the extension to 1/f noise in Appendix D) generalizes to irregularly spaced observations, in which case one can derive how the optimal learning rate should vary on each step.

D KALMAN FILTER OVER 1/f NOISE

Assume we receive a sequence of observations at unit time intervals, $y(t)$ for $t \in \mathbb{N}$, and we want to do online prediction under the assumption that $y$ is a 1/f noise process. Then we can make the generative assumption $y(t) = \xi(t)$ with $\xi$ defined as at the end of Appendix B.
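The recursion above can be sketched in a few lines; the steady-state variance is computed both in closed form (the positive root of the quadratic) and by iterating the recursion. The values τ = 10 and η² = 1 are illustrative:

```python
import numpy as np

# Sketch of the single-timescale Kalman recursion (Appendix C), with
# illustrative values tau = 10, eta^2 = 1.
tau, eta2 = 10.0, 1.0
decay = np.exp(-1.0 / tau)
v = (tau / 2) * (1 - np.exp(-2.0 / tau))      # noise accumulated per unit interval

# Steady state: positive root of s^4 + (eta^2 (1 - e^{-2/tau}) - v) s^2 - v eta^2 = 0
bq = eta2 * (1 - np.exp(-2.0 / tau)) - v
s2_star = (-bq + np.sqrt(bq**2 + 4 * v * eta2)) / 2

# Iterating the variance recursion converges to the same fixed point
s2 = 1.0
for _ in range(1000):
    s2 = decay**2 * s2 * eta2 / (s2 + eta2) + v
assert np.isclose(s2, s2_star)

# At the fixed point the learning rate alpha is constant, and the mean update
# is a decayed temporal-difference rule
alpha = s2_star / (s2_star + eta2)
w, rng = 0.0, np.random.default_rng(0)
for _ in range(100):
    y = rng.normal()                          # stand-in observations
    w = decay * (w + alpha * (y - w))
```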
For simplicity and in contrast to Appendix C, we assume no observation noise, because the shortest timescales ($z_1$, etc.) already play this role. Because $y(t)$ is conditionally independent of the history given (and indeed is fully determined by) the joint state $Z(t) = (z_1(t), \ldots, z_n(t))$, it suffices to compute the posterior for the latter. Write the iterative prior for $Z$ as

$$Z(t) \mid y_{<t} \sim \mathcal{N}\left(\omega(t), S(t)\right),$$

implying an optimal (maximum-likelihood or least-squares) prediction of $\hat y(t) = \sum_i \omega_i(t)$. The posterior after observing $y(t)$ is the intersection of the prior with the hyperplane $\sum_i z_i(t) = y(t)$:

$$Z(t) \mid y_{\le t} \sim \mathcal{N}\left(\omega(t) + \frac{S(t)\mathbf{1}}{\mathbf{1}^\top S(t)\mathbf{1}}\left(y(t) - \hat y(t)\right),\; \left(P\, S(t)^{-1} P + \mathbf{1}\mathbf{1}^\top\right)^{-1} P\right).$$

Here, $\mathbf{1}$ is the vector with all elements equal to 1, and $P = I - \frac{1}{n}\mathbf{1}\mathbf{1}^\top$ is the orthogonal projector. The prior for the next time step is then obtained by applying decay and adding variance from the noise accumulated over the intervening interval. Define $D$ as the diagonal matrix of decay factors, $e^{-1/\tau_i}$, and $N$ as the diagonal matrix of added noise, $2\rho^2(1 - e^{-2/\tau_i})$, obtained by multiplying the RHS of Equation 38 by $\sigma_i^2 = 4\rho^2\tau_i^{-1}$ (from Appendix B). Then we have the update equations for the exact Bayesian model:

$$\omega(t+1) = D\left(\omega(t) + \frac{S(t)\mathbf{1}}{\mathbf{1}^\top S(t)\mathbf{1}}\left(y(t) - \hat y(t)\right)\right) \tag{45}$$
$$S(t+1) = D\left(P\, S(t)^{-1} P + \mathbf{1}\mathbf{1}^\top\right)^{-1} P\, D + N. \tag{46}$$

Next, consider the perceptron in Section 5.1, where $y = x^\top \theta$. The 1/f generative model assumes $\theta_j = \sum_i z_{ij}$ for each $j$, where $i$ indexes timescales and $j$ indexes features, and $z_{ij}$ has timescale $\tau_i$ and scaling parameter $\sigma_i$ defined as above. The latent state is described by $Z = (z_{ij})_{ij}$. We treat $ij$ as a single composite index, so that $Z$ is a vector. As above, write the iterative prior as

$$Z(t) \mid x_{<t}, y_{<t} \sim \mathcal{N}\left(\omega(t), S(t)\right). \tag{47}$$

Let $X$ be a multiplexed copy of the input $x$, so that $X_{ij} = x_j$. Assuming square loss, the optimal prediction for $y(t)$ is $\hat y(t) = X(t)^\top \omega(t)$.
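A minimal sketch of the uncued update equations (Equations 45 and 46), which also checks the projector form of the posterior covariance against the equivalent rank-one-downdate form S − S11ᵀS/(1ᵀS1); the observation stream and hyperparameters are stand-ins, not the paper's:

```python
import numpy as np

# Minimal sketch of the exact Bayesian filter for uncued 1/f tracking
# (Equations 45-46). Hyperparameters (n = 5, nu = 2, rho = 1) are illustrative.
n, rho = 5, 1.0
tau = 2.0 ** np.arange(1, n + 1)
D = np.diag(np.exp(-1.0 / tau))                      # decay factors
N = np.diag(2 * rho**2 * (1 - np.exp(-2.0 / tau)))   # accumulated noise
one = np.ones(n)
P = np.eye(n) - np.outer(one, one) / n               # orthogonal projector

omega = np.zeros(n)
S = 2 * rho**2 * np.eye(n)                           # stationary variance of each z_i

rng = np.random.default_rng(0)
for _ in range(50):
    y = rng.normal()                                 # stand-in observation stream
    y_hat = omega.sum()
    gain = S @ one / (one @ S @ one)
    # Posterior covariance: projector form vs. rank-one downdate (equivalent)
    post = np.linalg.inv(P @ np.linalg.inv(S) @ P + np.outer(one, one)) @ P
    post2 = S - np.outer(S @ one, S @ one) / (one @ S @ one)
    assert np.allclose(post, post2)
    omega = D @ (omega + gain * (y - y_hat))
    S = D @ post @ D + N
```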
The posterior after observing $y(t)$ is the intersection of the prior with the hyperplane $X(t)^\top Z(t) = y(t)$:

$$Z(t) \mid x_{\le t}, y_{\le t} \sim \mathcal{N}\left(\omega(t) + \frac{S(t)X(t)}{X(t)^\top S(t)X(t)}\left(y(t) - \hat y(t)\right),\; \left(P_{X(t)}\, S(t)^{-1} P_{X(t)} + X(t)X(t)^\top\right)^{-1} P_{X(t)}\right), \tag{48}$$

where $P_X = I - XX^\top / X^\top X$ is the projector orthogonal to $X$. Generalizing the definitions above, let $D$ and $N$ be the diagonal matrices of decay factors and noise accumulation, $D_{ij,ij} = e^{-1/\tau_i}$ and $N_{ij,ij} = 2\rho^2(1 - e^{-2/\tau_i})$. Applying these to Equation 48 to obtain the prior for the next time step yields the update equations for the exact Bayesian model of the regression task:

$$\omega(t+1) = D\left(\omega(t) + \frac{S(t)X(t)}{X(t)^\top S(t)X(t)}\left(y(t) - \hat y(t)\right)\right) \tag{49}$$
$$S(t+1) = D\left(P_{X(t)}\, S^{-1}(t)\, P_{X(t)} + X(t)X(t)^\top\right)^{-1} P_{X(t)}\, D + N. \tag{50}$$

Equation 49 exemplifies how the variance matrix, $S(t)$, can be thought of as defining a preconditioner of the gradient, $X(t)(y(t) - \hat y(t))$.

E EXTENDED KALMAN FILTER

Given a nonlinear model $h(x, \theta)$, the 1/f EKF posits that the optimal parameters follow 1/f dynamics according to Equation 16, with expanded latent state $Z = (z_1, \ldots, z_n)$. It is convenient to introduce the expanded model $\bar h$ defined by $\bar h(x, Z) = h\left(x, \sum_i z_i\right)$. The 1/f EKF maintains an iterative prior over $Z$ as in Equation 47 and updates that prior by linearizing $\bar h$ about $Z = \omega(t)$:

$$\bar h(x(t), Z) \approx \bar h(x(t), \omega(t)) + J_{\bar h}\left(Z - \omega(t)\right). \tag{51}$$

Here, $J_{\bar h} = \frac{\partial \hat y}{\partial Z}$ is the Jacobian matrix of $\bar h$, evaluated at $Z = \omega(t)$. Note that $J_{\bar h}$ is just $n$ copies of $J_h$. Following Ollivier (2018), we assume the observation $y$ is governed by some exponential family $P(y \mid \eta(\hat y))$, with vector of sufficient statistics $T(y)$. The model's output $\hat y = \bar h(x, \omega)$ is taken to encode the predicted mean parameter of that family: $\hat y = \mathbb{E}_{y \sim P(\cdot \mid \eta(\hat y))}[T(y)]$ (this can be read as a definition of the mapping $\hat y \mapsto \eta(\hat y)$). It then approximates the conditional distribution of the sufficient statistics as a Gaussian,

$$p(T(y) \mid \hat y) \approx \mathcal{N}\left(\hat y, R_{\hat y}\right), \tag{52}$$

where $R_{\hat y} = \operatorname{Var}(T(y) \mid \hat y)$ is the conditional variance. For example, when $h$ is a classification model as in Section 5.2 or 5.3, the output of the network is a vector $\hat y$ of class probabilities, and the sufficient statistics $T(y)$ are a one-hot vector. For numerical stability, we exclude the final element of $\hat y$ and $T(y)$, which are determined by the other elements. The conditional outcome variance is given by

$$[R_{\hat y}]_{i,j} = \begin{cases} \hat y_i(1 - \hat y_i) & i = j \\ -\hat y_i \hat y_j & i \ne j. \end{cases} \tag{53}$$

Under the approximations of Equations 51 and 52, the posterior is given by the standard KF formula:

$$Z(t) \mid x_{\le t}, y_{\le t} \sim \mathcal{N}\Big(\omega(t) + S(t)J_{\bar h}^\top\left(J_{\bar h} S(t) J_{\bar h}^\top + R_{\hat y(t)}\right)^{-1}\left(T(y(t)) - \hat y(t)\right),\; S(t) - S(t)J_{\bar h}^\top\left(J_{\bar h} S(t) J_{\bar h}^\top + R_{\hat y(t)}\right)^{-1} J_{\bar h} S(t)\Big). \tag{54}$$

Applying decay ($D$) and accumulated noise ($N$) as in Appendix D to obtain the prior for the next time step yields the update equations for the 1/f EKF:

$$\omega(t+1) = D\left(\omega(t) + S(t)J_{\bar h}^\top\left(J_{\bar h} S(t) J_{\bar h}^\top + R_{\hat y(t)}\right)^{-1}\left(T(y(t)) - \hat y(t)\right)\right) \tag{55}$$
$$S(t+1) = D\left(S(t) - S(t)J_{\bar h}^\top\left(J_{\bar h} S(t) J_{\bar h}^\top + R_{\hat y(t)}\right)^{-1} J_{\bar h} S(t)\right) D + N. \tag{56}$$
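The conditional variance $R_{\hat y}$ for a classification output can be sketched directly; dropping the final element makes it full rank. The class probabilities below are arbitrary illustrative values:

```python
import numpy as np

# Sketch of the conditional variance R for a k-class categorical output,
# with the final (redundant) element of y_hat and T(y) excluded.
p = np.array([0.1, 0.2, 0.3, 0.4])            # illustrative class probabilities
y_hat = p[:-1]                                 # drop the final element

# [R]_ii = y_hat_i (1 - y_hat_i), [R]_ij = -y_hat_i y_hat_j for i != j
R = np.diag(y_hat) - np.outer(y_hat, y_hat)

# R should match the covariance of truncated one-hot draws T(y)
rng = np.random.default_rng(0)
samples = rng.choice(len(p), size=200_000, p=p)
T = np.eye(len(p))[samples][:, :-1]
emp = np.cov(T, rowvar=False)
assert np.allclose(emp, R, atol=5e-3)
assert np.all(np.linalg.eigvalsh(R) > 0)       # full rank after truncation
```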
Although we apply the EKF to feedforward NNs in this paper, we note that the approach naturally generalizes to other probabilistic causal models relating the observed variables. Thus it offers a means to model nonstationarity that covers all of the traditional forms of distribution shift, following the causal framework of Schölkopf et al. (2012). Consider a classification task, with input features $x(t) \in \mathbb{R}^m$ and class labels $y(t) \in \mathbb{Z}_k$. Under a generative causal model where $x$ depends on $y$, label shift can be modeled by 1/f noise in the distribution $p(y)$, for example $y \sim \operatorname{softmax}(\ell)$ with the logit vector $\ell$ given by Equation 16. Manifestation shift can be modeled by 1/f noise in the distributions $p(x|y)$, for example $x(t) \mid y(t) \sim \mathcal{N}\left(\mu_{y(t)}, \Sigma\right)$ with $\mu_y$ given by Equation 16 for each $y$. Likewise, under a discriminative causal model where $y$ depends on $x$, covariate shift can be modeled by 1/f noise in $p(x)$, and concept shift can be modeled by 1/f noise in $p(y|x)$.

F VARIATIONAL APPROXIMATION

This section derives variational approximations of the KF and EKF models, with $S(t) \approx \bar S(t)$ where $\bar S(t) := \operatorname{diag}(s^2(t))$. Thus the resulting algorithms need only track the individual variance terms in $s^2(t)$ rather than the full covariance matrix. Moreover, the update equations (63, 64, 68, 69, 74, 77) all avoid matrix inversion, even though inversion might initially seem necessary from Equation 57; this property may be relevant for efficient scaling.

F.1 UNCUED INFERENCE

We begin with the simple Bayesian 1/f model in Equations 45 and 46, where there are no predictors and the model merely tracks an observable $y(t)$. Given an arbitrary Gaussian distribution, the variational approximation (in the sense of minimizing Kullback-Leibler divergence) by another Gaussian with diagonal covariance matrix is obtained by taking the diagonal of the original distribution's precision matrix. Thus in the present case we have

$$s_i^{-2}(t+1) = \left[S^{-1}(t+1)\right]_{i,i} \tag{57}$$

where $S(t+1)$ is given by Equation 46 with $S^{-1}(t)$ replaced by $\bar S^{-1}(t)$ (the inductive assumption). To calculate $S^{-1}(t+1)$ under this assumption, we first observe the identity

$$\left(P \operatorname{diag}(s^{-2})\, P + \mathbf{1}\mathbf{1}^\top\right)^{-1} P = \operatorname{diag}(s^2) - \frac{(s^2)(s^2)^\top}{\sum_i s_i^2}. \tag{58}$$

Therefore Equation 46 becomes

$$S(t+1) = D \bar S(t) D + N - \frac{\left(D s^2(t)\right)\left(D s^2(t)\right)^\top}{\sum_i s_i^2(t)} \tag{59}$$
$$:= \operatorname{diag}(a) - \frac{bb^\top}{c} \tag{60}$$

where $\operatorname{diag}(a) = D \bar S(t) D + N$, $b = D s^2(t)$, and $c = \sum_i s_i^2(t)$. We next use the identity

$$\left(\operatorname{diag}(a) - \frac{bb^\top}{c}\right)^{-1} = \operatorname{diag}\left(\tfrac{1}{\sqrt{a}}\right)\left(I + \frac{\left(a^{-1/2} \circ b\right)\left(a^{-1/2} \circ b\right)^\top}{c - \sum_i b_i^2 / a_i}\right)\operatorname{diag}\left(\tfrac{1}{\sqrt{a}}\right), \tag{61}$$

implying the diagonal elements are

$$\left[\left(\operatorname{diag}(a) - \frac{bb^\top}{c}\right)^{-1}\right]_{i,i} = \frac{1}{a_i}\left(1 + \frac{b_i^2 / a_i}{c - \sum_{i'} b_{i'}^2 / a_{i'}}\right). \tag{62}$$

Combining Equations 57, 60, and 62 gives the final form of the variational update:

$$s_i^2(t+1) = \frac{\left(s_i^2(t)\, e^{-1/\tau_i} + 4\rho^2 \sinh\frac{1}{\tau_i}\right)^2 \Omega}{\left(s_i^2(t) + 4\rho^2 e^{1/\tau_i} \sinh\frac{1}{\tau_i}\right)\Omega + s_i^4(t)} \tag{63}$$

with

$$\Omega = \sum_i \frac{4\rho^2 s_i^2(t) \sinh\frac{1}{\tau_i}}{e^{-1/\tau_i} s_i^2(t) + 4\rho^2 \sinh\frac{1}{\tau_i}}. \tag{64}$$

This update of the variance converges exponentially to a unique fixed point. Numerical simulations confirm that, in this limit, $s_i^2$ is larger for smaller $\tau_i$, meaning faster learning rates for shorter timescales. If the prior variances are initialized at the fixed point then they are constant throughout learning. By substituting the diagonal matrix $\bar S(t)$ for $S(t)$, the update for the mean in Equation 45 simplifies to

$$\omega_i(t+1) = e^{-1/\tau_i} \omega_i(t) - \alpha_i(t)\left(\hat y(t) - y(t)\right) \tag{65}$$

where $\hat y(t) - y(t)$ is the loss gradient (assuming square loss), and the learning rates are given by

$$\alpha_i(t) = \frac{e^{-1/\tau_i}\, s_i^2(t)}{\sum_{i'} s_{i'}^2(t)}.$$
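The variance update (Equations 63 and 64) can be iterated to its fixed point numerically; consistent with the claim above, the steady-state variances decrease with $\tau_i$. Hyperparameters here are illustrative:

```python
import numpy as np

# Sketch of the variational variance update (Equations 63-64), iterated to
# its fixed point. Hyperparameters (n = 10, nu = 2, rho = 1) are illustrative.
n, rho = 10, 1.0
tau = 2.0 ** np.arange(1, n + 1)
sh = np.sinh(1.0 / tau)
decay = np.exp(-1.0 / tau)

s2 = np.full(n, 2 * rho**2)                   # arbitrary positive initialization
for _ in range(500):
    Omega = np.sum(4 * rho**2 * s2 * sh / (decay * s2 + 4 * rho**2 * sh))
    num = (s2 * decay + 4 * rho**2 * sh) ** 2 * Omega
    den = (s2 + 4 * rho**2 * np.exp(1.0 / tau) * sh) * Omega + s2**2
    s2 = num / den

# Steady-state uncertainty, and hence the learning rate, is larger for
# shorter timescales
alpha = decay * s2 / s2.sum()
assert np.all(np.diff(s2) < 0)
```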
That is, the subweights learn independently according to their gradients, with different decay rates and learning rates. Thus we have recovered an extension of the multiscale optimizer, with an additional mechanism that adapts the learning rates on each time step (via s).

F.2 KALMAN FILTER

To derive the variational KF for the regression task, we apply the analysis of Section F.1 to the KF update in Equations 49 and 50. Paralleling the derivation of Equation 59, the variance update in Equation 50 (i.e., before applying the diagonal variational approximation) can be written as

$$S(t+1) = D \bar S(t) D + N - \frac{\left(D\left(X(t) \circ s^2(t)\right)\right)\left(D\left(X(t) \circ s^2(t)\right)\right)^\top}{\sum_{ij} x_j^2(t)\, s_{ij}^2(t)}. \tag{67}$$

Paralleling the derivation of Equations 63 and 64, the variational update comes out to be

$$s_{ij}^2(t+1) = \frac{\left(s_{ij}^2(t)\, e^{-1/\tau_i} + 4\rho^2 \sinh\frac{1}{\tau_i}\right)^2 \Omega}{\left(s_{ij}^2(t) + 4\rho^2 e^{1/\tau_i} \sinh\frac{1}{\tau_i}\right)\Omega + x_j^2(t)\, s_{ij}^4(t)} \tag{68}$$

with

$$\Omega = \sum_{ij} \frac{4\rho^2 x_j^2(t)\, s_{ij}^2(t) \sinh\frac{1}{\tau_i}}{e^{-1/\tau_i} s_{ij}^2(t) + 4\rho^2 \sinh\frac{1}{\tau_i}}. \tag{69}$$

By substituting the diagonal matrix $\bar S(t)$ for $S(t)$, the mean update in Equation 49 simplifies to

$$\omega_{ij}(t+1) = e^{-1/\tau_i} \omega_{ij}(t) - \alpha_{ij}\, x_j(t)\left(\hat y(t) - y(t)\right) \tag{70}$$

where $x_j(t)(\hat y(t) - y(t))$ is the loss gradient for $\omega_{ij}$, and the learning rates are given by

$$\alpha_{ij} = \frac{e^{-1/\tau_i}\, s_{ij}^2(t)}{\sum_{i'j'} x_{j'}^2(t)\, s_{i'j'}^2(t)}.$$

Thus we have recovered an extension of the multiscale optimizer, with an additional mechanism that adapts the learning rates on each time step (via $x$ and $s$).
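The full variational learner for the regression task (the variance update in Equations 68 and 69 plus the simplified mean update) can be sketched as a small online loop. The data stream below is random noise rather than the paper's 1/f task, so this only illustrates the mechanics of the updates:

```python
import numpy as np

# Sketch of the variational Kalman filter for the regression task
# (Equations 68-69 plus the simplified mean update), run on a toy random
# stream. Hyperparameters (n = 5, m = 3, nu = 2, rho = 0.5) are illustrative.
n, m, rho = 5, 3, 0.5                         # timescales, features
tau = 2.0 ** np.arange(1, n + 1)
decay = np.exp(-1.0 / tau)[:, None]           # broadcast over features
sh = np.sinh(1.0 / tau)[:, None]
grow = np.exp(1.0 / tau)[:, None]

omega = np.zeros((n, m))                      # subweights omega_ij
s2 = np.full((n, m), 2 * rho**2)              # variances s^2_ij

rng = np.random.default_rng(0)
for _ in range(200):
    x = rng.normal(size=m)
    y = rng.normal()                          # stand-in target
    y_hat = (omega * x).sum()                 # y_hat = sum_ij x_j omega_ij

    # Decayed mean update with Kalman-style per-subweight learning rates
    alpha = decay * s2 / (x**2 * s2).sum()
    omega = decay * omega - alpha * x * (y_hat - y)

    # Variance update
    Omega = (4 * rho**2 * x**2 * s2 * sh / (decay * s2 + 4 * rho**2 * sh)).sum()
    num = (s2 * decay + 4 * rho**2 * sh) ** 2 * Omega
    den = (s2 + 4 * rho**2 * grow * sh) * Omega + x**2 * s2**2
    s2 = num / den

assert np.all(s2 > 0) and np.all(np.isfinite(omega))
```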

F.3 EXTENDED KALMAN FILTER

To derive a closed-form variational approximation for the general EKF, such as for the classification models in Sections 5.2 and 5.3, it turns out that we need to apply the variational approximation to the posterior in Equation 54, rather than to the iterative prior in Equation 56. Using Woodbury's identity, the posterior variance can be rewritten as

$$S(t) - S(t)J_{\bar h}^\top\left(J_{\bar h} S(t) J_{\bar h}^\top + R_{\hat y(t)}\right)^{-1} J_{\bar h} S(t) = \left(J_{\bar h}^\top R_{\hat y(t)}^{-1} J_{\bar h} + S^{-1}(t)\right)^{-1}. \tag{72}$$

The form on the RHS is convenient because it is in terms of precision, allowing us to read off the diagonal entries directly. That is, the variational approximation for the posterior variance is $\operatorname{diag}(\tilde s^2(t))$, with

$$\tilde s_{ij}^2(t) = \left(\left[J_{\bar h}^\top R_{\hat y(t)}^{-1} J_{\bar h}\right]_{ij,ij} + s_{ij}^{-2}(t)\right)^{-1}. \tag{73}$$

Here we have used the inductive assumption $S(t) \approx \operatorname{diag}(s^2(t))$. Applying the transition from posterior on step $t$ to prior on step $t+1$, we obtain the variance update for the variational model, replacing Equation 56:

$$s_{ij}^2(t+1) = D_{ij,ij}^2\, \tilde s_{ij}^2(t) + N_{ij,ij}. \tag{74}$$

Woodbury's identity also enables the mean update from Equation 55 to be rewritten, as

$$\omega(t+1) = D\left(J_{\bar h}^\top R_{\hat y(t)}^{-1} J_{\bar h} + \bar S^{-1}(t)\right)^{-1}\left(\bar S^{-1}(t)\,\omega(t) + J_{\bar h}^\top R_{\hat y(t)}^{-1}\left(T(y(t)) - \hat y(t) + J_{\bar h}\,\omega(t)\right)\right).$$

Substituting the gradient of the EKF's approximate likelihood (denoted $L$) from Equation 52,

$$\partial_{\omega(t)} L = J_{\bar h}^\top R_{\hat y(t)}^{-1}\left(\hat y(t) - T(y(t))\right),$$

yields

$$\omega(t+1) = D\left(\omega(t) - \left(J_{\bar h}^\top R_{\hat y(t)}^{-1} J_{\bar h} + \bar S^{-1}(t)\right)^{-1} \partial_{\omega(t)} L\right). \tag{77}$$

The preconditioner on the gradient here is the posterior variance (see Equation 72), which we have approximated as $\operatorname{diag}(\tilde s^2(t))$. Although not necessarily entailed by the variational approximation, we can consider applying the same approximation in updating the mean. This yields

$$\omega_{ij}(t+1) = e^{-1/\tau_i} \omega_{ij}(t) - e^{-1/\tau_i}\, \tilde s_{ij}^2(t)\, \partial_{\omega_{ij}(t)} L,$$

which once again extends the multiscale optimizer by adapting its learning rate to current uncertainty.
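The Woodbury rewrite of the posterior variance (Equation 72) is easy to verify numerically for random positive-definite S and R and an arbitrary Jacobian:

```python
import numpy as np

# Sketch: numerically check the Woodbury rewrite of the posterior variance
# (precision form), with random matrices of illustrative sizes.
rng = np.random.default_rng(0)
d, k = 6, 3                                    # latent dim, output dim
A = rng.normal(size=(d, d))
S = A @ A.T + np.eye(d)                        # positive-definite prior variance
J = rng.normal(size=(k, d))                    # Jacobian of the expanded model
B = rng.normal(size=(k, k))
R = B @ B.T + np.eye(k)                        # conditional variance of T(y)

lhs = S - S @ J.T @ np.linalg.inv(J @ S @ J.T + R) @ J @ S
rhs = np.linalg.inv(J.T @ np.linalg.inv(R) @ J + np.linalg.inv(S))
assert np.allclose(lhs, rhs)
```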

G IMPLEMENTATION DETAILS

Latent parameters for the synthetic tasks in Sections 5.1 and 5.2 were sampled using the generative model in Appendix B. That is, the data-generating process matched the generative assumptions of the Bayesian model in both of these cases. We used 20 timescales, geometrically spaced from $\tau_1 = 1$ to $\tau_{20} = 1000$, as illustrated in Figure 6A. Each component OU process was run for $10\tau_i$ burn-in steps to ensure stationarity. The regression task was run for 10k trials, and the linear classification task for 1000 trials. The 1/f power spectrum was confirmed with a log-log plot (see Figure 6B). In the regression task of Section 5.1, the first feature was a constant bias term, $x_1 \equiv 1$. The other 9 features were sampled independently as $\mathcal{N}(0, I)$ on each time step. For the linear classification task of Section 5.2, class logits were sampled from 1/f processes and then multiplied by 0.886 before entering into softmax to determine class probabilities. Feature-class means were fixed at 1 for the first feature (i.e., the bias term) and were sampled from mutually independent 1/f processes for features 2-10, multiplied by 0.224. These scaling factors were chosen so that perfect knowledge of either the prior probabilities or the feature-class means on every trial would yield ideal-observer performance of $L \approx 1$. Perfect knowledge of both would yield $L \approx 0.35$. These were merely guidelines for equating prior and likelihood information, as perfect knowledge of either source of information is not possible even with an optimal model of the dynamics. In both Sections 5.1 and 5.2, all Bayesian and variational models assumed 10 timescales, geometrically spaced from $\tau_1 = 2$ to $\tau_{10} = 800$.
This deliberate deviation from the data-generating process (which used the 20 timescales listed above) provided a mild test of robustness, specifically of the hypothesis that it is the aggregate 1/f character of the environment and of the model that matters, not the choice of component timescales used to approximate that character. For the MNIST classification task in Section 5.3, the class on each time step was sampled by softmax applied to class logits that varied across steps according to 1/f noise. The item on that time step was then sampled without replacement from all members of that class in the MNIST training set. The logit sequences were generated as follows. First, for each class, we sampled a standard Gaussian random vector of length equal to the total number of time steps (10k). Then we applied a discrete Fourier transform, multiplied the result by $f^{-1/2}$, and finally applied the inverse Fourier transform. Thus the resulting sequence had a 1/f power spectrum. For the iid environment, the logits were sampled as white noise (constant power spectrum), by sampling a standard Gaussian vector as above and using it unaltered.
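The logit-generation procedure can be sketched with NumPy's FFT routines; the slope check at the end confirms the (approximately) 1/f spectrum. The frequency band used for the fit is an illustrative choice:

```python
import numpy as np

# Sketch of the logit-sequence generation for the MNIST task: white Gaussian
# noise is shaped to a 1/f power spectrum via Fourier filtering.
rng = np.random.default_rng(0)
T = 10_000                                     # number of time steps

white = rng.normal(size=T)
spec = np.fft.rfft(white)
f = np.fft.rfftfreq(T)
spec[1:] = spec[1:] * f[1:] ** -0.5            # scale amplitudes by f^{-1/2}
spec[0] = 0.0                                  # zero the DC component
logits = np.fft.irfft(spec, n=T)

# Empirical check: log-log slope of the resulting power spectrum is near -1
power = np.abs(np.fft.rfft(logits)) ** 2
mask = (f > 1e-3) & (f < 1e-1)
slope = np.polyfit(np.log(f[mask]), np.log(power[mask]), 1)[0]
assert abs(slope + 1.0) < 0.3
```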



Figure 1: Toy illustration of fast weights. A single weight $w$ (blue) with constant input ($x \equiv 1$) predicts a target signal $T$ (black) with square loss $L = \frac{1}{2}(T - w)^2$. The weight is a sum of subweights $\omega_{\text{slow}}$ (yellow) and $\omega_{\text{fast}}$ (red). Initial learning is rapid, due to $\omega_{\text{fast}}$. Because of decay and the shared error signal, knowledge is gradually transferred to $\omega_{\text{slow}}$ while $\omega_{\text{fast}}$ returns to zero. When the task switches (trial 151), $\omega_{\text{fast}}$ enables rapid adaptation while long-term knowledge is preserved in $\omega_{\text{slow}}$. Thus the model recovers quickly on the second reversal (compare blue curve beginning on trials 1 vs 156). The general multiscale optimizer extends this idea to an array of faster and slower weights.

Figure 2: Translation between the model of Benna & Fusi (2016) and the multiscale optimizer works by decomposing the state of the former model into modes, or eigen-patterns of activation that decay independently, which correspond to subweights in the multiscale optimizer. A: All modes for a default Benna-Fusi model with eight variables ($n = 8$). B: An arbitrary initial state of the model. C: Unique eigen-decomposition of the state in Figure 2B. Implied values of the corresponding multiscale optimizer's subweights can be read off as the values of the curves at $k = 1$. D: Decay of the individual modes or subweights for 1000 steps (with no external input) at rates given by their eigenvalues. E: Reconstruction of the final state exactly matches the result of iterating the Benna-Fusi update (dotted arrow from Figure 2B). F: Decomposition of a unit impulse to $u_1$ (e.g., loss gradient, shown as grey bar) as a weighted sum of modes. Learning rates for the corresponding subweights, $\Delta\omega_i$, can be read off as the values of the curves at $k = 1$ (because $V_{1i} = 1$).

Figure 3: Regression task with 1/f dynamics. A: Batch optimization. B: Stochastic gradient descent. C: Model comparison.

Figure 4: Synthetic classification task with 1/f dynamics. Loss is negative log-likelihood. A: Windowed batch optimization. B: Stochastic gradient descent. C: Model comparison.

Figure 5: A: Sample frequencies in blocks of consecutive time steps for all 10 MNIST classes (indicated with a unique color per class). The 1/f sequence exhibits long-range autocorrelations, with nearly the same pattern over blocks of 10 or 100. B: Model performance over a sequence of 10000 examples (1/6 of MNIST training

Figure 6: A: Construction of 1/f noise (black trajectory at top) as a sum of OU processes with different time constants (colored trajectories). Vertical offsets are applied as a visual aid for discriminating the curves. B: The power spectrum of the aggregate trajectory in log-log coordinates. The red line has unit slope.

