PROBABILITY FLOW SOLUTION OF THE FOKKER-PLANCK EQUATION

Abstract

The method of choice for integrating the time-dependent Fokker-Planck equation in high dimension is to generate samples from the solution via integration of the associated stochastic differential equation. Here, we introduce an alternative scheme based on integrating an ordinary differential equation that describes the flow of probability. Acting as a transport map, this equation deterministically pushes samples from the initial density onto samples from the solution at any later time. Unlike integration of the stochastic dynamics, the method has the advantage of giving direct access to quantities that are challenging to estimate from trajectories alone, such as the probability current, the density itself, and its entropy. The probability flow equation depends on the gradient of the logarithm of the solution (its "score"), and so is a priori unknown. To resolve this dependence, we model the score with a deep neural network that is learned on-the-fly by propagating a set of samples according to the instantaneous probability current. We consider several high-dimensional examples from the physics of interacting particle systems to highlight the efficiency and precision of the approach; we find that the method accurately matches analytical solutions computed by hand and moments computed via Monte-Carlo.

1. INTRODUCTION

The time evolution of many dynamical processes occurring in the natural sciences, engineering, economics, and statistics is naturally described in the language of stochastic differential equations (SDEs) (Gardiner, 2009; Oksendal, 2003; Evans, 2012). Typically, one is interested in the probability density function (PDF) of these processes, which describes the probability that the system will occupy a given state at a given time. The density can be obtained as the solution to a Fokker-Planck equation (FPE), which can generically be written as (Risken, 1996; Bass, 2011)

∂_t ρ*_t(x) = -∇·(b_t(x)ρ*_t(x) - D_t(x)∇ρ*_t(x)), x ∈ Ω ⊆ R^d, (FPE)

where ρ*_t(x) ∈ R_{≥0} denotes the value of the density at time t, b_t(x) ∈ R^d is a vector field known as the drift, and D_t(x) ∈ R^{d×d} is a positive-semidefinite tensor known as the diffusion matrix. (FPE) must be solved for t ≥ 0 from some initial condition ρ*_{t=0}(x) = ρ_0(x), but in all but the simplest cases the solution is not available analytically and can only be approximated via numerical integration.

High-dimensionality. For many systems of interest, such as interacting particle systems in statistical physics (Chandler, 1987; Spohn, 2012), stochastic control systems (Kushner et al., 2001), and models in mathematical finance (Oksendal, 2003), the dimensionality d can be very large. This renders standard numerical methods for partial differential equations inapplicable: they become infeasible for d as small as five or six due to the exponential scaling of their computational complexity with d. The standard workaround is a Monte-Carlo approach, whereby the SDE associated with (FPE),

dx_t = b_t(x_t)dt + ∇·D_t(x_t)dt + √2 σ_t(x_t)dW_t, (1)

is evolved via numerical integration to obtain a large number n of trajectories (Kloeden & Platen, 1992). In (1), σ_t(x) satisfies σ_t(x)σ_t^T(x) = D_t(x) and W_t is a standard Brownian motion on R^d.
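The Monte-Carlo strategy just described can be sketched in a few lines. The following is a minimal illustrative implementation (ours, not from the paper) of an Euler-Maruyama integrator for (1), restricted for simplicity to isotropic constant diffusion D = σ²·Id, so that the divergence term ∇·D vanishes; the function name and the Ornstein-Uhlenbeck test drift are our own illustrative choices.

```python
import numpy as np

def euler_maruyama(b, sigma, x0, dt, n_steps, rng):
    """Integrate dx_t = b(t, x_t) dt + sqrt(2) * sigma * dW_t for n samples.

    sigma is a scalar (isotropic diffusion D = sigma**2 * Id, so the
    divergence term in (1) vanishes).
    b:  drift, maps (t, (n, d) array) -> (n, d) array
    x0: (n, d) array of initial samples drawn from rho_0
    """
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        dW = rng.normal(scale=np.sqrt(dt), size=x.shape)
        x = x + b(k * dt, x) * dt + np.sqrt(2.0) * sigma * dW
    return x

# Sanity check on an Ornstein-Uhlenbeck process dx = -x dt + sqrt(2 D) dW:
# its stationary variance is D.
rng = np.random.default_rng(0)
D = 0.25
x = euler_maruyama(lambda t, x: -x, np.sqrt(D), np.zeros((20000, 1)), 1e-2, 1000, rng)
```

For the OU test above, the sample variance of the output should be close to D = 0.25, up to O(∆t) discretization bias and O(n^{-1/2}) sampling error.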
Assuming that we can draw samples {x^i_0}_{i=1}^n from the initial PDF ρ_0, simulation of (1) enables the estimation of expectations via empirical averages,

∫_Ω ϕ(x)ρ*_t(x)dx ≈ (1/n) Σ_{i=1}^n ϕ(x^i_t), (2)

where ϕ : Ω → R is an observable of interest. While widely used, this method only provides samples from ρ*_t, and hence other quantities of interest, like the value of ρ*_t itself or the time-dependent differential entropy of the system H_t = -∫_Ω log ρ*_t(x) ρ*_t(x)dx, require sophisticated interpolation methods that typically do not scale well to high dimension.

A transport map approach. Another possibility, building on recent theoretical advances that connect transportation of measures to the Fokker-Planck equation (Jordan et al., 1998), is to recast (FPE) as the transport equation (Villani, 2009; Santambrogio, 2015)

∂_t ρ*_t(x) = -∇·(v*_t(x)ρ*_t(x)), (3)

where we have defined the velocity field

v*_t(x) = b_t(x) - D_t(x)∇log ρ*_t(x). (4)

This formulation reveals that ρ*_t can be viewed as the pushforward of ρ_0 under the flow map X*_{τ,t}(·) of the ordinary differential equation

(d/dt) X*_{τ,t}(x) = v*_t(X*_{τ,t}(x)), X*_{τ,τ}(x) = x, t, τ ≥ 0. (5)

Equation (5) is known as the probability flow equation, and its solution has the remarkable property that if x is a sample from ρ_0, then X*_{0,t}(x) will be a sample from ρ*_t. Viewing X*_{τ,t} : Ω → Ω as a transport map, ρ*_t = X*_{0,t}♯ρ_0 can be evaluated at any position in Ω via the change of variables formula (Villani, 2009; Santambrogio, 2015)

ρ*_t(x) = ρ_0(X*_{t,0}(x)) exp(-∫_0^t ∇·v*_τ(X*_{t,τ}(x))dτ), (6)

where X*_{t,0}(x) is obtained by solving (5) backward from some given x. Importantly, access to the PDF as provided by (6) immediately gives the ability to compute quantities such as the probability current or the entropy; by contrast, this capability is absent when directly simulating the SDE.

Learning the flow.
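To make (5) and (6) concrete, here is a small numerical illustration (our own, with hypothetical variable names) for a one-dimensional Ornstein-Uhlenbeck process dx = -x dt + √(2D) dW started from N(0, σ0²), for which ρ*_t = N(0, σ_t²) with σ_t² = D + (σ0² - D)e^{-2t}, so the score -x/σ_t² is known in closed form. We integrate the probability flow ODE for a set of samples while accumulating the divergence integral in (6), and compare the resulting log-density against the exact Gaussian log-density.

```python
import numpy as np

D, sigma0_sq, dt, T = 0.25, 1.0, 1e-3, 1.0
var = lambda t: D + (sigma0_sq - D) * np.exp(-2.0 * t)   # exact sigma_t^2

rng = np.random.default_rng(1)
x = rng.normal(scale=np.sqrt(sigma0_sq), size=1000)       # samples from rho_0
logp = -0.5 * x**2 / sigma0_sq - 0.5 * np.log(2 * np.pi * sigma0_sq)

t = 0.0
for _ in range(int(T / dt)):
    s = -x / var(t)                 # exact score grad log rho_t
    v = -x - D * s                  # v_t(x) = b_t(x) - D grad log rho_t
    div_v = -1.0 + D / var(t)       # divergence of the (linear) velocity
    x = x + dt * v                  # probability flow step, eq. (5)
    logp = logp - dt * div_v        # change-of-variables update, eq. (6)
    t += dt

# Compare to the exact log-density of rho_T = N(0, sigma_T^2) at the pushed samples.
exact = -0.5 * x**2 / var(T) - 0.5 * np.log(2 * np.pi * var(T))
err = np.max(np.abs(logp - exact))
```

The maximum discrepancy `err` is at the level of the Euler discretization error, illustrating that the flow map carries both samples and density values.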
The simplicity of the probability flow equation (5) is somewhat deceptive, because the velocity v*_t depends explicitly on the solution ρ*_t of the Fokker-Planck equation (FPE). Nevertheless, recent work in generative modeling via score-based diffusion (Song & Ermon, 2020a;b; Song & Kingma, 2021) has shown that it is possible to use deep neural networks to estimate v*_t, or equivalently the so-called score ∇log ρ*_t of the solution density. Here, we introduce a variant of score-based diffusion modeling in which the score is learned on-the-fly over samples generated by the probability flow equation itself. The method is self-contained and enables us to bypass simulation of the SDE entirely; moreover, we provide both empirical and theoretical evidence that the resulting self-consistent training procedure offers improved performance when compared to training on samples produced by simulation of the SDE.

1.1 CONTRIBUTIONS

Our contributions are both theoretical and computational:

• We provide a bound on the Kullback-Leibler divergence from the estimate ρ_t produced via an approximate velocity field v_t to the target ρ*_t. This bound motivates our approach, and shows that minimizing the discrepancy between the learned score and the score of the push-forward distribution systematically improves the accuracy of ρ_t.

• Based on this bound, we introduce two optimization problems that can be used to learn the velocity field (4) in the transport equation (3) so that its solution coincides with that of the Fokker-Planck equation (FPE). Due to its similarities with score-based diffusion approaches in generative modeling (SBDM), we call the resulting method score-based transport modeling (SBTM).

• We provide specific estimators for quantities that can be computed via SBTM but are not directly available from samples alone, like pointwise evaluation of ρ_t itself, the differential entropy, and the probability current.
• We test SBTM on several examples involving interacting particles that pairwise repel but are kept close by common attraction to a moving trap. In these systems, the FPE is high-dimensional due to the large number of particles, which varies from 5 to 50 in the examples below. Problems of this type frequently appear in the molecular dynamics of externally driven soft matter systems (Frenkel & Smit, 2001; Spohn, 2012). We show that our method can be used to accurately compute the entropy production rate of a system, a quantity of interest in the active matter community (Nardini et al., 2017), as it quantifies the out-of-equilibrium nature of the system's dynamics.

1.2 NOTATION AND ASSUMPTIONS

Throughout, we assume that the stochastic process (1) evolves over a domain Ω ⊆ R^d in which it remains at all times t ≥ 0. We assume that the drift vector b_t : Ω → R^d and the diffusion tensor D_t : Ω → R^{d×d} are twice-differentiable and bounded in both x and t, so that the solution to the SDE (1) is well-defined at all times t ≥ 0. The symmetric tensor D_t(x) = D_t^T(x) is assumed to be positive semi-definite for each (t, x), with Cholesky decomposition D_t(x) = σ_t(x)σ_t^T(x). We further assume that the initial PDF ρ_0 is three-times differentiable, positive everywhere on Ω, and such that H_0 = -∫_Ω log ρ_0(x) ρ_0(x)dx < ∞. This guarantees that ρ*_t enjoys the same properties at all times t > 0. Finally, we assume that log ρ*_t is K-smooth globally for (t, x) ∈ [0, ∞) × Ω, i.e., there exists K > 0 such that for all t ∈ [0, ∞) and x, y ∈ Ω,

|∇log ρ*_t(x) - ∇log ρ*_t(y)| ≤ K|x - y|.

This technical assumption is needed to guarantee global existence and uniqueness of the solution of the probability flow equation. Throughout, we use the shorthand notation ẏ_t = (d/dt)y_t interchangeably for a time-dependent quantity y_t.

2. RELATED WORK

Score matching. Our approach builds directly on the toolbox of score matching originally developed by Hyvärinen (Hyvärinen, 2005; Hyvarinen, 2007; Hyvärinen, 2007; 2008) and more recently extended in the context of diffusion-based generative modeling (Song & Ermon, 2020a;b; Song et al., 2021; De Bortoli et al., 2021; Dockhorn et al., 2022; Mittal et al., 2021). These approaches assume access to training samples from the target distribution (e.g., in the form of examples of natural images). Here, we bypass this need and use the probability flow equation to obtain the samples needed to learn an approximation of the score. Lu et al. (2022) recently showed that using the transport equation (TE) with a velocity field learned via SBDM can lead to inaccuracies in the likelihood unless higher-order score terms are well-approximated. Proposition 1 shows that the self-consistent approach used in SBTM avoids these issues and ensures a systematic approximation of the target ρ*_t.

Density estimation and Bayesian inference. Our method shares commonalities with transport map-based approaches (Marzouk et al., 2016) for density estimation and variational inference (Zhang et al., 2019; Blei et al., 2017), such as normalizing flows (Tabak & Vanden-Eijnden, 2010; Tabak & Turner, 2013; Rezende & Mohamed, 2016; Huang et al., 2021; Papamakarios et al., 2021; Kobyzev et al., 2021). Moreover, because expectations are approximated over a set of samples according to (2), the method also inherits elements of classical "particle-based" approaches for density estimation such as Markov chain Monte Carlo (Robert & Casella, 2004) and sequential Monte Carlo (Dai et al., 2020; Del Moral et al., 2006). Our approach is also reminiscent of a recent line of work in Bayesian inference that aims to combine the strengths of particle methods with those of variational approximations (Dai et al., 2016; Saeedi et al., 2017).
In particular, the method we propose bears some similarity with Stein variational gradient descent (SVGD) (Liu, 2017; Liu & Wang, 2018; 2019) (see also Lu et al. (2018); Li et al. (2020)), in that both methods approximate the target distribution via deterministic propagation of a set of samples. The key differences are that (i) our method learns the map used to propagate the samples, while the map in SVGD corresponds to optimization of the kernelized Stein discrepancy, and (ii) the methods have distinct goals, as we are interested in capturing the dynamical evolution of ρ*_t rather than sampling at equilibrium.

Approaches for solving the FPE. Most closely connected to our paper are the works by Maoutsa et al. (2020) and Shen et al. (2022), who similarly propose to bypass the SDE through use of the probability flow equation, building on earlier work by Degond & Mustieles (1990) and Russo (1990). The critical differences between Maoutsa et al. (2020) and our approach are that they perform estimation over a linear space or a reproducing kernel Hilbert space rather than over the significantly richer class of neural networks, and that they train using the original score matching loss of Hyvärinen (2005), while the use of neural networks requires the introduction of regularized variants. Because of this, Maoutsa et al. (2020) study systems of dimension less than or equal to five; in contrast, we study systems with dimensionality as high as 100. Concurrently with our work, Shen et al. (2022) proposed a variational problem similar to SBTM. A key difference is that SBTM is not limited to Fokker-Planck equations that can be viewed as a gradient flow in the Wasserstein metric over some energy (i.e., the drift term in the SDE (1) need not be the gradient of a potential), and that it allows for spatially-dependent and rank-deficient diffusion matrices; moreover, our theoretical results are similar but avoid the use of costly Sobolev norms.
Neural-network solutions to PDEs Our approach can also be viewed as an alternative to recent neural network-based methods for the solution of partial differential equations (see e.g. E & Yu (2017); Raissi et al. (2019) ; Han et al. (2018) ; Sirignano & Spiliopoulos (2018) ; Bruna et al. (2022) ). Unlike these existing approaches, our method is tailored to the solution of the Fokker-Planck equation and guarantees that the solution is a valid probability density. Our approach is fundamentally Lagrangian in nature, which has the advantage that it only involves learning quantities locally at the positions of a set of evolving samples; this is naturally conducive to efficient scaling for high-dimensional systems.

3.1. SCORE-BASED TRANSPORT MODELING

Let s_t : Ω → R^d denote an approximation to the score of the target, ∇log ρ*_t, and consider the solution ρ_t : Ω → R_{≥0} to the transport equation

∂_t ρ_t(x) = -∇·(v_t(x)ρ_t(x)) with v_t(x) = b_t(x) - D_t(x)s_t(x). (TE)

Our goal is to develop a variational principle that may be used to adjust s_t so that ρ_t tracks ρ*_t. Our approach is based on the following inequality, whose proof may be found in Appendix B.1:

Proposition 1 (Control of the KL divergence). Assume that the conditions listed in Sec. 1.2 hold. Let ρ_t denote the solution to the transport equation (TE), and let ρ*_t denote the solution to the Fokker-Planck equation (FPE). Assume that ρ_{t=0}(x) = ρ*_{t=0}(x) = ρ_0(x) for all x ∈ Ω. Then

(d/dt) D_KL(ρ_t | ρ*_t) ≤ (1/2) ∫_Ω |s_t(x) - ∇log ρ_t(x)|²_{D_t(x)} ρ_t(x)dx, (8)

where |·|²_{D_t(x)} = ⟨·, D_t(x)·⟩. In particular, (8) implies that for any T ∈ [0, ∞) we have explicit control on the KL divergence:

D_KL(ρ_T | ρ*_T) ≤ (1/2) ∫_0^T ∫_Ω |s_t(x) - ∇log ρ_t(x)|²_{D_t(x)} ρ_t(x)dxdt. (9)

Remarkably, the right-hand side of (9) only depends on the approximation ρ_t and does not involve ρ*_t: it states that the accuracy of ρ_t as an approximation of ρ*_t can be improved by enforcing agreement between s_t and ∇log ρ_t. This means that we can optimize (9) without making use of external data from ρ*_t, which offers a self-consistent objective function to learn the score s_t using (TE) alone. The primary difficulty with this approach is that ρ_t must be considered as a functional of s_t, since the velocity v_t used in (TE) depends on s_t. To render the resulting minimization of the right-hand side of (9) practical, we can exploit the fact that (TE) can be solved via the method of characteristics, as summarized in Appendix A. Specifically, if Ẋ_t(x) = v_t(X_t(x)) is the probability flow equation associated with the velocity v_t, then ρ_t = X_t♯ρ_0. This means that the expectation of any function ϕ(x) over ρ_t(x) can be expressed as the expectation of ϕ(X_t(x)) over ρ_0(x).
Observing that the score of the solution to (TE) along trajectories of the probability flow, ∇log ρ_t(X_t(x)), solves a closed equation leads to the following proposition.

Proposition 2 (Score-based transport modeling). Assume that the conditions listed in Sec. 1.2 hold. Define v_t(x) = b_t(x) - D_t(x)s_t(x) and consider

Ẋ_t(x) = v_t(X_t(x)), X_0(x) = x,
Ġ_t(x) = -[∇v_t(X_t(x))]^T G_t(x) - ∇∇·v_t(X_t(x)), G_0(x) = ∇log ρ_0(x). (10)

Then ρ_t = X_t♯ρ_0 solves (TE), the equality G_t(x) = ∇log ρ_t(X_t(x)) holds, and for any T ∈ [0, ∞),

D_KL(X_T♯ρ_0 | ρ*_T) ≤ (1/2) ∫_0^T ∫_Ω |s_t(X_t(x)) - G_t(x)|²_{D_t(X_t(x))} ρ_0(x)dxdt. (SBTM)

In particular, at any minimizer s*_t that makes the right-hand side of (SBTM) vanish, the associated flow map satisfies X*_t(x) ∼ ρ*_t for all t ∈ [0, T] whenever x ∼ ρ_0.

Proposition 2 is proven in Appendix B.3. The result also holds with a standard Euclidean norm replacing the diffusion-weighted norm, in which case the minimizer is unique and is given by s*_t(x) = ∇log ρ*_t(x). In the special case when the SDE is an Ornstein-Uhlenbeck process, the score and the equations for both X_t and G_t can be written explicitly; they are studied in Appendix C. In practice, the objective in (SBTM) can be estimated empirically by generating samples from ρ_0 and solving the equations for X_t(x) and G_t(x) with x ∼ ρ_0. The constrained minimization problem (SBTM) can then in principle be solved with gradient-based techniques via the adjoint method. The corresponding equations are written in Appendix B.3, but they involve fourth-order spatial derivatives that are computationally expensive to compute via automatic differentiation. Moreover, each gradient step requires solving a system of ordinary differential equations whose dimensionality is equal to the number of samples used to compute expectations times the dimension of (FPE). Instead, we now develop a sequential procedure that avoids these difficulties entirely.
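The dynamics (10) can be illustrated numerically. The following sketch (ours) uses a 1-D Ornstein-Uhlenbeck process with the exact score in place of a learned s_t, so the velocity is linear: ∇v_t is then constant in x and ∇∇·v_t = 0, and the G equation reduces to Ġ_t = -(∇v_t) G_t. We propagate X_t and G_t jointly and check the stated identity G_t(x) = ∇log ρ_t(X_t(x)).

```python
import numpy as np

D, sigma0_sq, dt, T = 0.25, 1.0, 1e-3, 1.0
var = lambda t: D + (sigma0_sq - D) * np.exp(-2.0 * t)   # exact sigma_t^2

rng = np.random.default_rng(2)
x = rng.normal(scale=np.sqrt(sigma0_sq), size=500)        # X_0(x) = x ~ rho_0
G = -x / sigma0_sq                                        # G_0 = grad log rho_0

t = 0.0
for _ in range(int(T / dt)):
    grad_v = D / var(t) - 1.0       # dv/dx for v(x) = (D / sigma_t^2 - 1) x
    x = x + dt * grad_v * x         # Xdot = v(X)
    G = G - dt * grad_v * G         # Gdot = -(grad v) G; second term vanishes
    t += dt

# Identity of Proposition 2: G_t(x) = grad log rho_t(X_t(x)) = -X_t / sigma_t^2.
err = np.max(np.abs(G - (-x / var(T))))
```

The residual `err` is at the level of the Euler discretization error, confirming that G_t tracks the score along the flow without ever forming ρ_t.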

3.2. SEQUENTIAL SCORE-BASED TRANSPORT MODELING

An alternative to the constrained minimization in Proposition 2 is to consider an approach whereby the score s_t is obtained independently at each time t to ensure that D_KL(ρ_t | ρ*_t) remains small. This suggests choosing s_t to minimize (d/dt) D_KL(ρ_t | ρ*_t), which admits a simple closed-form bound, as shown in Proposition 1. While this explicit form can be used directly, an application of Stein's identity recovers an implicit objective, analogous to Hyvärinen score-matching, that is equivalent to minimizing (d/dt) D_KL(ρ_t | ρ*_t) but obviates the calculation of G_t. Expanding the square in (8) and applying the identity

∫_Ω s_t(x)^T ∇log ρ_t(x) ρ_t(x)dx = -∫_Ω ∇·s_t(x) ρ_t(x)dx,

we may write

(d/dt) D_KL(ρ_t | ρ*_t) ≤ (1/2) ∫_Ω [ |s_t(X_t(x))|²_{D_t(X_t(x))} + 2∇·(D_t s_t)(X_t(x)) + |G_t(x)|² ] ρ_0(x)dx.

Because ∇log ρ_t(X_t(x)) = G_t(x) is independent of s_t, we may neglect the corresponding square term during the optimization. This leads to a simple and comparatively less expensive way to build the pushforward X*_t such that X*_t♯ρ_0 = ρ*_t sequentially in time, as stated in the following proposition.

Proposition 3 (Sequential SBTM). In the same setting as Proposition 2, let X_t(x) solve the first equation in (10) with v_t(x) = b_t(x) - D_t(x)s_t(x), and let s_t be obtained via

min_{s_t} ∫_Ω [ |s_t(X_t(x))|²_{D_t(X_t(x))} + 2∇·(D_t s_t)(X_t(x)) ] ρ_0(x)dx. (SSBTM)

Then each minimizer s*_t of (SSBTM) satisfies D_t(x)s*_t(x) = D_t(x)∇log ρ*_t(x), where ρ*_t is the solution to (FPE). Moreover, the map X*_t associated with s*_t is a transport map from ρ_0 to ρ*_t.

Proposition 3 is proven in Appendix B.4. Critically, (SSBTM) is no longer a constrained optimization problem: given the current value of X_t at any time t, we can obtain s_t via direct minimization of the objective in (SSBTM). Given s_t, we may compute the right-hand side of (10) and propagate X_t (and possibly G_t) forward in time.
The resulting procedure, which alternates between self-consistent score estimation and sample propagation, is presented in Algorithm 1. The output of the method produces a feasible solution for (SBTM), with an a-posteriori bound on the loss obtained via integration.

Algorithm 1 Sequential score-based transport modeling.
1: Input: an initial time t_0 ∈ R_{≥0}; a set of n samples {x^i}_{i=1}^n from ρ_{t_0}; a set of N_T timesteps {∆t_k}_{k=0}^{N_T - 1}.
2: Initialize sample locations X^i_{t_0} = x^i for i = 1, . . . , n.
3: for k = 0, . . . , N_T - 1 do
4:   Optimize: s_{t_k} = argmin_s (1/n) Σ_{i=1}^n [ |s(X^i_{t_k})|²_{D_{t_k}(X^i_{t_k})} + 2∇·(D_{t_k} s)(X^i_{t_k}) ].
5:   Propagate samples: X^i_{t_{k+1}} = X^i_{t_k} + ∆t_k [ b_{t_k}(X^i_{t_k}) - D_{t_k}(X^i_{t_k}) s_{t_k}(X^i_{t_k}) ].
6:   Set t_{k+1} = t_k + ∆t_k.
7: Output: a set of n samples {X^i_{t_k}}_{i=1}^n from ρ_{t_k} and the scores {s_{t_k}(X^i_{t_k})}_{i=1}^n for all {t_k}_{k=0}^{N_T}.

Practical considerations. To avoid computation of the divergence ∇·(D_t(X_t(x))s_t(X_t(x))), which is often costly for neural networks, we can use the denoising score matching loss function introduced by Vincent (2011), which we discuss in Appendix B.6. Empirically, we find that the use of either the denoising objective or explicit derivative regularization is necessary for stable training.

Why not use the SDE? An alternative to the sequential procedure outlined here would be to generate samples from the target ρ*_t via simulation of the associated SDE, and to approximate the score ∇log ρ*_t via minimization of the loss

∫_0^T ∫_Ω ( |s_t(x)|² + 2∇·s_t(x) ) ρ*_t(x)dxdt,

similar to SBDM. As shown in Appendix B.5, neither D_KL(ρ_t | ρ*_t) nor D_KL(ρ*_t | ρ_t) is controlled when using this procedure, where ρ_t = X_t♯ρ_0 is the density of the probability flow equation. Empirically, we find in the numerical experiments that this approach is significantly less numerically stable than sequential SBTM.
In particular, we could not estimate the entropy using the score learned from the SDE.

SBTM vs. sequential SBTM. Given the simplicity of the optimization problem (SSBTM), one may wonder whether (SBTM) is useful in practice, or whether it is simply a stepping stone towards (SSBTM). The primary difference is that (SBTM) offers global control on the discrepancy between s_t and ∇log ρ_t over t ∈ [0, T] that unavoidably arises in practice due to learning and time-discretization errors. By contrast, because (SSBTM) proceeds sequentially, these errors could accumulate over time in a way that is harder to control. In the numerical examples below, we took the timestep ∆t sufficiently small, and the number of samples n sufficiently large, that we did not observe any accumulation of error. Nevertheless, (SBTM) may allow for more accurate approximation, because its loss is exactly minimized at zero and high-order derivatives of s_t must be controlled through the calculation of Ġ_t.
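Algorithm 1 can be sketched end-to-end. In the illustration below (ours, not from the paper), we replace the neural network of step 4 by a linear score model s(x) = A x + c; for such a model the minimizer of the implicit objective |s|² + 2∇·s over the samples has the closed form s(x) = -Ĉ⁻¹(x - m̂), with m̂ and Ĉ the empirical mean and covariance, so no gradient descent is needed. We take isotropic diffusion D_t(x) = D·Id and test on an Ornstein-Uhlenbeck drift b(x) = -x, for which the exact variance relaxes as σ_t² = D + (σ0² - D)e^{-2t}.

```python
import numpy as np

def linear_score_fit(X):
    """Closed-form minimizer of (1/n) sum_i [ |s(X_i)|^2 + 2 div s(X_i) ]
    over linear models s(x) = A x + c: the Gaussian score estimate
    s(x) = -C^{-1}(x - m) with empirical mean m and covariance C."""
    m = X.mean(axis=0)
    C = np.atleast_2d(np.cov(X.T, bias=True)) + 1e-8 * np.eye(X.shape[1])
    A = -np.linalg.inv(C)
    return lambda x: (x - m) @ A.T

def sequential_sbtm(b, D, X, dt, n_steps):
    """Algorithm 1: alternate score fitting (step 4) and probability-flow
    propagation X <- X + dt * (b(X) - D * s(X)) (step 5), isotropic D * Id."""
    for _ in range(n_steps):
        s = linear_score_fit(X)
        X = X + dt * (b(X) - D * s(X))
    return X

# OU test: b(x) = -x, D = 0.25, rho_0 = N(0, Id) in d = 2.
rng = np.random.default_rng(3)
D, dt, T = 0.25, 1e-2, 2.0
X = sequential_sbtm(lambda x: -x, D, rng.normal(size=(4000, 2)), dt, int(T / dt))
sigma_sq = D + (1.0 - D) * np.exp(-2.0 * T)   # exact variance at time T
```

The empirical covariance of the deterministically propagated samples tracks the exact variance σ_T² ≈ 0.264, even though no noise is ever injected.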

4. NUMERICAL EXPERIMENTS

In the following, we study two high-dimensional examples from the physics of interacting particle systems, where the spatial variable of the Fokker-Planck equation (FPE) can be written as x = (x^(1), x^(2), . . . , x^(N))^T with each x^(i) ∈ R^d̄. Here, d̄ denotes the dimension of the ambient physical space, e.g. d̄ = 2, so that the dimensionality d = N d̄ of the Fokker-Planck equation will be high if the number of particles N is even moderate. The still figures shown below do not do full justice to the complexity of the particle dynamics, and we encourage the reader to view the movies available here. With a timestep ∆t = 10^-3, a horizon T = 10, and a fixed n N d̄ = 10^5, we find that the sequential SBTM procedure takes around two hours for each simulation on a single NVIDIA RTX8000 GPU.

4.1. HARMONICALLY INTERACTING PARTICLES IN A HARMONIC TRAP

Setup. Here we study a problem that admits a tractable analytical solution for direct comparison. We consider N two-dimensional particles (d̄ = 2) that repel according to a harmonic interaction but experience harmonic attraction towards a moving trap β_t ∈ R². The motion of the particles is governed by the stochastic dynamics

dX^(i)_t = (β_t - X^(i)_t)dt + α (X^(i)_t - (1/N) Σ_{j=1}^N X^(j)_t) dt + √(2D) dW^(i)_t, i = 1, . . . , N, (13)

where α ∈ (0, 1) is a fixed coefficient that sets the magnitude of the repulsion. The dynamics (13) is an Ornstein-Uhlenbeck process in the extended variable x ∈ R^{d̄N} with block components x^(i). Assuming a Gaussian initial condition, the solution to the Fokker-Planck equation associated with (13) is a Gaussian for all time, and hence can be characterized entirely by its mean m_t and covariance C_t. These can be obtained analytically (Appendices C and D), which facilitates a quantitative comparison to the learned model. The differential entropy H_t is given by (see Appendix D)

H_t = (1/2) d̄N (log(2π) + 1) + (1/2) log det C_t.

In the experiments, we take β_t = a(cos πωt, sin πωt)^T with a = 2, ω = 1, D = 0.25, α = 0.5, and N = 50, giving rise to a 100-dimensional Fokker-Planck equation.

[Figure 1: A system of N = 50 particles in a harmonic trap with a harmonic interaction. (A) A single sample trajectory. The mean of the trap β_t is shown with a red star, while past positions of the particles are indicated by a fading trajectory. The noise-free system (right) is too concentrated, and fails to capture the variance of the stochastic dynamics (center). The learned system (left) accurately captures the variance, and in addition generates physically interpretable trajectories for the particles. (B) Quantitative comparison to the analytical solution. The learned solution matches the entropy production rate, score, and covariance well. Movie can be found here.]
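Since the solution stays Gaussian, the entropy formula above is straightforward to evaluate numerically. A small helper (ours) using a stable log-determinant:

```python
import numpy as np

def gaussian_entropy(C):
    """Differential entropy of N(m, C) on R^d:
    H = (d / 2) * (log(2 pi) + 1) + (1 / 2) * log det C."""
    d = C.shape[0]
    sign, logdet = np.linalg.slogdet(C)   # stable log det for large d
    if sign <= 0:
        raise ValueError("covariance must be positive definite")
    return 0.5 * d * (np.log(2.0 * np.pi) + 1.0) + 0.5 * logdet
```

For C = Id in d dimensions this reduces to (d/2) log(2πe), and rescaling the covariance by a factor c shifts the entropy by (d/2) log c.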
The particles are initialized from an isotropic Gaussian with mean β_0 (the initial trap position) and variance σ²_0 = 0.25.

Network architecture. We take s_t(x) = -∇U_{θ_t}(x), where the potential U_{θ_t}(·) is given as a sum of one- and two-particle terms

U_{θ_t}(x^(1), . . . , x^(N)) = Σ_{i=1}^N U_{θ_t,1}(x^(i)) + (1/N) Σ_{i,j=1, i≠j}^N U_{θ_t,2}(x^(i), x^(j)). (15)

To ensure permutation symmetry amongst the particles, we require that U_{θ_t,2}(x, y) = U_{θ_t,2}(y, x) for each x, y ∈ R^d̄. Modeling at the level of the potential introduces an additional gradient into the loss function, but makes it simple to enforce permutation symmetry; moreover, by writing the potential as a sum of one- and two-particle terms, the dimensionality of the function estimation problem is reduced. As motivation for this choice of architecture, we show in Appendix D.1 that the class of scores representable by (15) contains the analytical score for the harmonic problem considered in this section. To obtain the parameters θ_{t_k + ∆t_k}, we perform a warm start and initialize from θ_{t_k}, which reduces the number of optimization steps that need to be performed at each iteration. All networks are taken to be multi-layer perceptrons with the swish activation function (Ramachandran et al., 2017); further details on the architectures used can be found in Appendix D.

Quantitative comparison. For a quantitative comparison between the learned model and the exact solution, we study the empirical covariance Σ over the samples and the entropy production rate dH_t/dt. Because an analytical solution is available for this system, we may also compute the target score ∇log ρ*_t(x) = -C_t^{-1}(x - m_t) and measure the goodness of fit via the relative Fisher divergence

∫_Ω |s_t(x) - ∇log ρ*_t(x)|² ρ(x)dx / ∫_Ω |∇log ρ*_t(x)|² ρ(x)dx. (16)

In (16), ρ can be taken to be equal to the current particle estimate of ρ_t (the training data), or estimated using samples from the stochastic differential equation (the SDE data).

Results.
The representation of the dynamics (13) in terms of the flow of probability leads to an intuitive deterministic motion that accurately captures the statistics of the underlying stochastic process. Snapshots of particle trajectories from the learned probability flow (5), the SDE (13), and the noise-free equation obtained by setting D = 0 in (13) are shown in Figure 1A. Results for the quantitative comparison are shown in Figure 1B. The learned model accurately predicts the entropy production rate of the system and minimizes the relative metric (16) to the order of 10^-2. The noise-free system incorrectly predicts a constant and negative entropy production rate, while the SDE cannot make a prediction for the entropy production rate. In addition, the learned model accurately predicts the high-dimensional covariance Σ of the system (curves lie directly on top of the analytical result; trace shown for simplicity). The SDE also captures the covariance, but exhibits more fluctuations in the estimate; the noise-free system incorrectly estimates all covariance components as converging to zero.
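The relative Fisher divergence (16) is straightforward to estimate over samples. A minimal estimator (ours), here checked against the analytically known score of a standard Gaussian:

```python
import numpy as np

def relative_fisher(s_learned, s_target, samples):
    """Monte-Carlo estimate of eq. (16): the ratio of E|s - s*|^2 to E|s*|^2
    over the given samples (rows are sample points)."""
    diff = s_learned(samples) - s_target(samples)
    num = np.sum(diff**2, axis=1).mean()
    den = np.sum(s_target(samples)**2, axis=1).mean()
    return num / den

# Check: for a standard Gaussian target, s*(x) = -x; a model with a 10%
# amplitude error has relative Fisher divergence exactly 0.01 on any sample set.
rng = np.random.default_rng(4)
samples = rng.normal(size=(1000, 3))
rf = relative_fisher(lambda x: -0.9 * x, lambda x: -x, samples)
```

Evaluating numerator and denominator on the same sample set, as in (16) with ρ taken to be the training data, makes the ratio insensitive to the sample size.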

4.2. SOFT SPHERES IN AN ANHARMONIC TRAP

Setup. Here, we consider a system of N = 5 particles in an anharmonic trap in dimension d̄ = 2 that exhibit soft-sphere repulsion. This system gives rise to a 10-dimensional (FPE), a dimensionality that is significantly too high for standard PDE solvers. The stochastic dynamics is given by

dX^(i)_t = 4B (β_t - X^(i)_t) |X^(i)_t - β_t|² dt + (A/(N r²)) Σ_{j=1}^N (X^(i)_t - X^(j)_t) exp(-|X^(i)_t - X^(j)_t|²/(2r²)) dt + √(2D) dW^(i)_t, i = 1, . . . , N, (17)

where β_t again represents a moving trap, A > 0 sets the strength of the repulsion between the spheres, r sets their size, and B > 0 sets the strength of the trap. We set β_t = a(cos πωt, sin πωt)^T or β_t = a(cos πωt, 0)^T with a = 2, ω = 1, D = 0.25, A = 10, and r = 0.5. We fix B = D/R² with R = √(γN) r and γ = 5.0. This ensures that the trap scales with the number of particles and that they have sufficient room in the trap to generate a complex dynamics. In the circular case, the system converges to a distribution ρ*_t = ρ* ∘ Q_t that can be described as a fixed distribution ρ* composed with a time-dependent rotation Q_t, and hence the entropy production rate should converge to zero. The linear case does not exhibit such convergence, and the entropy production rate should oscillate around zero as the particles are repeatedly pushed and pulled by the trap. We make use of the same network architecture as in Sec. 4.1.

Results. Similar to Section 4.1, an example trajectory from the learned system, the SDE (17), and the noise-free system obtained by setting D = 0 in (17) is shown in Figure 2A for the circular case. The learned particle trajectories exhibit an intuitive circular motion with increased disorder relative to the noise-free system that accurately captures the statistics of the stochastic dynamics. Numerical estimates of a single component of the covariance and of the entropy production rate are shown in Figure 2B/C, with all moments shown in Appendix D.2.
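The drift of (17) can be vectorized over particles. A sketch (ours) of the deterministic part, with the parameter values of this section and B = D/R² ≈ 0.04 computed under the trap scaling stated above (an assumption of this illustration):

```python
import numpy as np

def soft_sphere_drift(x, beta, A=10.0, r=0.5, B=0.04):
    """Deterministic drift of (17). x: (N, 2) particle positions, beta: (2,)
    trap position. B = D / R^2 = 0.04 for D = 0.25, gamma = 5, N = 5, r = 0.5."""
    N = x.shape[0]
    to_trap = beta - x                                         # (N, 2)
    trap = 4.0 * B * to_trap * np.sum(to_trap**2, axis=1, keepdims=True)
    dx = x[:, None, :] - x[None, :, :]                         # X^(i) - X^(j)
    sq = np.sum(dx**2, axis=-1, keepdims=True)
    rep = (A / (N * r**2)) * np.sum(dx * np.exp(-sq / (2.0 * r**2)), axis=1)
    return trap + rep                                          # i = j term is zero

# Two particles placed symmetrically about the trap feel opposite drifts.
d = soft_sphere_drift(np.array([[1.0, 0.0], [-1.0, 0.0]]), np.zeros(2))
```

Note that the i = j term of the pairwise sum vanishes identically, so no masking of the diagonal is needed, and a single particle sitting at the trap center feels zero drift.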
The learned and SDE systems accurately capture the covariance, while the noise-free system underestimates the covariance in both the linear and the circular case. The prediction of the entropy production rate from Algorithm 1 is reasonable in both cases, exhibiting convergence to zero and oscillation around zero as expected. In the inset, we show the prediction of the entropy production rate when learning on samples from the SDE; the prediction is initially offset, and later becomes divergent. We found that this behavior was generic when training on the SDE, but never observed it when training on self-consistent samples.

[Figure 2: The learned system agrees well with the SDE, while the noise-free system under-predicts the moments. (D/E) Prediction of the entropy production rate for a rotating trap and a linearly oscillating trap. Main figure depicts the prediction from SBTM, while the inset depicts the prediction when learning on SDE samples. SBTM captures the temporal evolution of the entropy production rate, while learning on the SDE is initially offset and later divergent. Movies of the circular and linear motion can be viewed here and here, respectively.]

5. OUTLOOK AND CONCLUSIONS

Building on the toolbox of score-based diffusion recently developed for generative modeling, we introduced a related approach, score-based transport modeling (SBTM), that provides an alternative to simulation of the associated SDE for solving the Fokker-Planck equation. While SBTM is more costly than direct integration of the SDE because it involves a learning component, it gives access to quantities that are not directly accessible from the samples obtained by integrating the SDE, such as pointwise evaluation of the PDF, the probability current, or the entropy. Our numerical examples indicate that SBTM is scalable to systems in high dimension where standard numerical techniques for partial differential equations are inapplicable. The method can be viewed as a deterministic Lagrangian integration method for the Fokker-Planck equation, and our results show that its trajectories are more easily interpretable than the corresponding trajectories of the SDE.

A.2 CALCULATION OF THE DIFFERENTIAL ENTROPY

We now consider computation of the differential entropy, and state a similar result.

Lemma A.2. Assume that ρ_0 : Ω → R_{≥0} is positive everywhere on Ω and C^3 in its argument. Let ρ_t : Ω → R_{≥0} denote the solution to the Fokker-Planck equation (FPE) (or, equivalently, to the transport equation (A.1) with s_t(x) = ∇ log ρ_t(x) in the definition of v_t(x)). Then the differential entropy H_t = -∫_Ω log ρ_t(x) ρ_t(x) dx can be expressed as

  H_t = -∫_Ω log ρ_t(X_{0,t}(x)) ρ_0(x) dx = H_0 + ∫_0^t ∫_Ω ∇·v_τ(X_{0,τ}(x)) ρ_0(x) dx dτ   (A.9)

or

  H_t = H_0 - ∫_0^t ∫_Ω s_τ(X_{0,τ}(x)) · v_τ(X_{0,τ}(x)) ρ_0(x) dx dτ.   (A.10)

Proof. We first derive (A.9). Applying (A.5) with ϕ = log ρ_t gives the first equality; the second then follows from (A.4). To derive (A.10), notice that from (A.1),

  d/dt H_t = ∫_Ω log ρ_t(x) ∇·(v_t(x)ρ_t(x)) dx = -∫_Ω ∇ log ρ_t(x) · v_t(x) ρ_t(x) dx = -∫_Ω s_t(x) · v_t(x) ρ_t(x) dx.   (A.11)

Above, we used integration by parts to obtain the second equality and s_t = ∇ log ρ_t to obtain the third. Using (A.5) with ϕ = s_t · v_t and integrating the result in time gives (A.10).
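The entropy identity (A.9) can be checked numerically on a case where everything is known in closed form. The following sketch (our own illustration, not from the paper) uses the 1D heat equation, b = 0 and constant D, whose solution stays Gaussian, ρ_t = N(0, c_t) with ċ_t = 2D; the probability flow velocity is then v_t(x) = -D ∂_x log ρ_t(x) = D x / c_t, and the exact entropy is H_t = ½ log(2πe c_t).

```python
import numpy as np

# Check of (A.9): H_t = H_0 + ∫₀ᵗ ∫ ∇·v_τ ρ_0 dx dτ, on the 1D heat
# equation, where ρ_t = N(0, c_t), ċ = 2D, and v_t(x) = D x / c_t.
rng = np.random.default_rng(0)
D, c0, dt, T = 1.0, 0.5, 1e-3, 1.0
n = 1000
x = rng.normal(0.0, np.sqrt(c0), size=n)  # samples from ρ_0

H = 0.5 * np.log(2 * np.pi * np.e * c0)   # exact initial entropy H_0
c = c0
for _ in range(int(T / dt)):
    v = D * x / c                          # probability flow velocity
    div_v = D / c * np.ones_like(x)        # ∇·v (here independent of x)
    H += dt * div_v.mean()                 # accumulate ∫ ∇·v ρ_0 dx dτ
    x += dt * v                            # push samples along the flow
    c += dt * 2 * D                        # exact variance evolution

H_exact = 0.5 * np.log(2 * np.pi * np.e * (c0 + 2 * D * T))
print(abs(H - H_exact))  # small time-discretization error
```

In a genuine SBTM run, ∇·v would be evaluated on the learned score; here it is available analytically, so the accumulated integral matches the exact entropy up to O(Δt).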

A.3 RESAMPLING OF ρ_t AT ANY TIME t

If the score s_t ≈ ∇ log ρ_t is known to sufficient accuracy, ρ_t can be resampled at any time t using the Langevin dynamics

  dX_τ = s_t(X_τ) dτ + √2 dW_τ.   (A.12)

In (A.12), τ is an artificial time used for sampling that is distinct from the physical time in (1). For s_t = ∇ log ρ_t, the equilibrium distribution of (A.12) is exactly ρ_t. In practice, s_t will be imperfect, with an error that grows away from the samples used to learn it; as a result, (A.12) should only be run near existing samples and for a fixed amount of time, to avoid introducing additional errors.
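A minimal sketch of the resampling idea in (A.12): Langevin dynamics driven by a frozen-time score (the √2 normalization of the noise makes ρ_t the equilibrium density). The target ρ_t = N(2, 1) and all parameter values below are illustrative, not from the paper.

```python
import numpy as np

# Langevin resampling sketch for (A.12): with the exact score of the
# illustrative target ρ_t = N(2, 1), i.e. s_t(x) = -(x - 2), iterating
#   X ← X + dτ s_t(X) + √(2 dτ) ξ,  ξ ~ N(0, 1),
# drives an arbitrary initialization toward samples of ρ_t.
rng = np.random.default_rng(1)
n, dt, n_steps = 2000, 1e-2, 500
x = np.zeros(n)                       # arbitrary initialization
for _ in range(n_steps):
    score = -(x - 2.0)                # s_t(x) = ∇ log ρ_t(x)
    x += dt * score + np.sqrt(2 * dt) * rng.normal(size=n)

print(x.mean(), x.var())  # ≈ 2 and ≈ 1 at equilibrium
```

With a learned, imperfect score, one would instead initialize at the current self-consistent samples and run only a few steps, as discussed above.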

B FURTHER DETAILS ON SCORE-BASED TRANSPORT MODELING

B.1 BOUNDING THE KL DIVERGENCE

Let us restate Proposition 1 for convenience:

Proposition 1 (Control of the KL divergence). Assume that the conditions listed in Sec. 1.2 hold. Let ρ_t denote the solution to the transport equation (TE), and let ρ*_t denote the solution to the Fokker-Planck equation (FPE). Assume that ρ_{t=0}(x) = ρ*_{t=0}(x) = ρ_0(x) for all x ∈ Ω. Then

  d/dt D_KL(ρ_t | ρ*_t) ≤ (1/2) ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx,

where |·|²_{D_t(x)} = ⟨·, D_t(x)·⟩.

Proof. By assumption, ρ_t solves (TE) and ρ*_t solves (FPE). Denote v_t(x) = b_t(x) - D_t(x)s_t(x) and v*_t(x) = b_t(x) - D_t(x)s*_t(x) with s*_t(x) = ∇ log ρ*_t(x). Then we have

  d/dt D_KL(ρ_t | ρ*_t) = d/dt ∫_Ω log(ρ_t(x)/ρ*_t(x)) ρ_t(x) dx
  = -∫_Ω (ρ_t(x)/ρ*_t(x)) ∂_t ρ*_t(x) dx + ∫_Ω log(ρ_t(x)/ρ*_t(x)) ∂_t ρ_t(x) dx
  = -∫_Ω v*_t(x) · ∇(ρ_t(x)/ρ*_t(x)) ρ*_t(x) dx + ∫_Ω v_t(x) · ∇ log(ρ_t(x)/ρ*_t(x)) ρ_t(x) dx
  = -∫_Ω (v*_t(x) - v_t(x)) · (∇ log ρ_t(x) - ∇ log ρ*_t(x)) ρ_t(x) dx
  = ∫_Ω (s*_t(x) - s_t(x)) · D_t(x)(∇ log ρ_t(x) - s*_t(x)) ρ_t(x) dx.

Above, we used integration by parts to obtain the third equality. Now, dropping function arguments for simplicity of notation, we have

  |∇ log ρ_t - s_t|²_{D_t} = |∇ log ρ_t - s*_t + s*_t - s_t|²_{D_t}
  = |∇ log ρ_t - s*_t|²_{D_t} + |s*_t - s_t|²_{D_t} + 2(∇ log ρ_t - s*_t) · D_t(s*_t - s_t)
  ≥ 2(∇ log ρ_t - s*_t) · D_t(s*_t - s_t).   (B.1)

Hence, we deduce that

  d/dt D_KL(ρ_t | ρ*_t) ≤ (1/2) ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx.   (B.2)

B.2 SBTM IN THE EULERIAN FRAME

The Eulerian equivalent of Proposition 2 can be stated as:

Proposition B.1 (SBTM in the Eulerian frame). Assume that the conditions listed in Sec. 1.2 hold. Fix T ∈ (0, ∞] and consider the optimization problem

  min over {s_t : t ∈ [0, T)} of ∫_0^T ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx dt
  subject to: ∂_t ρ_t(x) = -∇·(v_t(x)ρ_t(x)), x ∈ Ω,   (SBTM2)

with v_t(x) = b_t(x) - D_t(x)s_t(x).
Then every minimizer of (SBTM2) satisfies D_t(x)s*_t(x) = D_t(x)∇ log ρ*_t(x), where ρ*_t : Ω → R_{>0} solves (FPE). In words, this proposition states that solving the constrained optimization problem (SBTM2) is equivalent to solving the Fokker-Planck equation (FPE).

Proof. The constrained minimization problem (SBTM2) can be handled by considering the extended objective

  ∫_0^T ∫_Ω [ |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) + µ_t(x)(∂_t ρ_t(x) + ∇·(v_t(x)ρ_t(x))) ] dx dt   (B.3)

where v_t(x) = b_t(x) - D_t(x)s_t(x) and µ_t : R^d → R is a Lagrange multiplier. The Euler-Lagrange equations associated with (B.3) read

  ∂_t ρ_t(x) = -∇·(v_t(x)ρ_t(x)),
  ∂_t µ_t(x) = v_t(x)^T ∇µ_t(x) + |s_t(x)|²_{D_t(x)} - |∇ log ρ_t(x)|²_{D_t(x)} + 2∇·[D_t(x)(s_t(x) - ∇ log ρ_t(x))],
  0 = µ_T(x),
  0 = D_t(x)(s_t(x) - ∇ log ρ_t(x))ρ_t(x) - (1/2)D_t(x)∇µ_t(x)ρ_t(x).   (B.4)

Clearly, these equations are satisfied if s*_t(x) = ∇ log ρ*_t(x) for all x ∈ Ω, µ*_t(x) = 0 for all x, and ρ*_t solves (FPE). This solution is also a global minimizer, because it zeroes the value of the objective. Moreover, all global minimizers must satisfy D_t(x)s*_t(x) = D_t(x)∇ log ρ*_t(x) (ρ_t-almost everywhere), as this is the only way to zero the objective. It is also easy to see that there are no other local minimizers. To check this, we can use the fourth equation to write D_t(x)(s_t(x) - ∇ log ρ_t(x)) = (1/2)D_t(x)∇µ_t(x), so that

  |s_t(x)|²_{D_t(x)} - |∇ log ρ_t(x)|²_{D_t(x)} = (1/2)(s_t(x) + ∇ log ρ_t(x))^T D_t(x)∇µ_t(x).

This reduces the first three equations to

  ∂_t ρ_t(x) = -∇·(b_t(x)ρ_t(x) - D_t(x)∇ρ_t(x) - (1/2)ρ_t(x)D_t(x)∇µ_t(x)),
  ∂_t µ_t(x) = (b_t(x) - D_t(x)∇ log ρ_t(x) - (1/2)D_t(x)∇µ_t(x))^T ∇µ_t(x) + ∇·(D_t(x)∇µ_t(x)) + (1/2)(s_t(x) + ∇ log ρ_t(x))^T D_t(x)∇µ_t(x),
  µ_T(x) = 0.   (B.5)

Since the equation for µ_t is homogeneous in µ_t and µ_T = 0, we must have µ_t = 0 for all t ∈ [0, T), and the equation for ρ_t reduces to (FPE).

B.3 SBTM IN THE LAGRANGIAN FRAME

As stated, Proposition B.1 is not practical, because it is phrased in terms of the density ρ_t. The following result demonstrates that the transport map identity (6) can be used to re-express Proposition B.1 entirely in terms of known quantities.

Proposition 2 (Score-based transport modeling). Assume that the conditions listed in Sec. 1.2 hold. Define v_t(x) = b_t(x) - D_t(x)s_t(x) and consider

  Ẋ_t(x) = v_t(X_t(x)), X_0(x) = x,
  Ġ_t(x) = -[∇v_t(X_t(x))]^T G_t(x) - ∇∇·v_t(X_t(x)), G_0(x) = ∇ log ρ_0(x).   (10)

Then ρ_t = X_t♯ρ_0 solves (TE), the equality G_t(x) = ∇ log ρ_t(X_t(x)) holds, and for any T ∈ [0, ∞)

  D_KL(X_T♯ρ_0 | ρ*_T) ≤ (1/2) ∫_0^T ∫_Ω |s_t(X_t(x)) - G_t(x)|²_{D_t(X_t(x))} ρ_0(x) dx dt.

Moreover, if s*_t is a minimizer of the constrained optimization problem

  min_s ∫_0^T ∫_Ω |s_t(X_t(x)) - G_t(x)|²_{D_t(X_t(x))} ρ_0(x) dx dt subject to (10),   (SBTM)

then D_t(x)s*_t(x) = D_t(x)∇ log ρ*_t(x), where ρ*_t solves the Fokker-Planck equation (FPE). The map X*_t associated to any minimizer is a transport map from ρ_0 to ρ*_t, i.e. x ∼ ρ_0 implies that X*_t(x) ∼ ρ*_t for all t ∈ [0, T].

Proof. Let us first show that G_t(x) = ∇ log ρ_t(X_t(x)) satisfies (10) if ρ_t = X_t♯ρ_0, i.e. if ρ_t satisfies the transport equation (TE). Since (TE) implies that

  ∂_t log ρ_t(x) + v_t(x) · ∇ log ρ_t(x) = -∇·v_t(x),   (B.6)

taking the gradient gives

  ∂_t ∇ log ρ_t(x) + [∇v_t(x)]^T ∇ log ρ_t(x) + ∇∇ log ρ_t(x) v_t(x) = -∇∇·v_t(x).   (B.7)

Therefore G_t(x) = ∇ log ρ_t(X_t(x)) solves

  d/dt G_t(x) = ∂_t ∇ log ρ_t(X_t(x)) + ∇∇ log ρ_t(X_t(x)) (d/dt)X_t(x)
  = ∂_t ∇ log ρ_t(X_t(x)) + ∇∇ log ρ_t(X_t(x)) v_t(X_t(x))
  = -∇∇·v_t(X_t(x)) - [∇v_t(X_t(x))]^T ∇ log ρ_t(X_t(x)),   (B.8)

which recovers the equation for G_t(x) in (10).
Hence, the objective in (SBTM) can also be written as

  ∫_0^T ∫_Ω |s_t(X_t(x)) - ∇ log ρ_t(X_t(x))|²_{D_t(X_t(x))} ρ_0(x) dx dt = ∫_0^T ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx dt,   (B.9)

where the equality follows from (A.5) since ρ_t(x) satisfies (A.1). Hence, (SBTM) is equivalent to (SBTM2). The bound on D_KL(X_T♯ρ_0 | ρ*_T) follows from (9).

Adjoint equations. In terms of a practical implementation, the objective in (SBTM) can be evaluated by generating samples {x_i}_{i=1}^n from ρ_0 and solving the equations for X_t and G_t with the initial conditions X_0(x_i) = x_i and G_0(x_i) = ∇ log ρ_0(x_i). Note that evaluating this second initial condition only requires knowing ρ_0 up to a normalization factor. To evaluate the gradient of the objective, we can introduce equations adjoint to those for X_t and G_t. They read, respectively,

  d/dt θ_t(x) + [∇v_t(X_t(x))]^T θ_t(x) = η_t(x) · ∇∇v_t(X_t(x)) G_t(x) + η_t(x) · ∇∇∇·v_t(X_t(x)) + 2∇s_t(X_t(x))(s_t(X_t(x)) - G_t(x)), θ_T(x) = 0,
  d/dt η_t(x) - ∇v_t(X_t(x)) η_t(x) = 2(G_t(x) - s_t(X_t(x))), η_T(x) = 0.   (B.10)

In terms of these functions, the gradient of the objective is the gradient with respect to s_t(x) (or with respect to the parameters of this function when it is modeled by a neural network) of the extended objective

  L[s_t] = ∫_0^T ∫_Ω |s_t(X_t(x)) - G_t(x)|² ρ_0(x) dx dt
  + ∫_0^T ∫_Ω θ_t(x) · (Ẋ_t(x) - v_t(X_t(x))) ρ_0(x) dx dt
  + ∫_0^T ∫_Ω η_t(x) · (Ġ_t(x) + [∇v_t(X_t(x))]^T G_t(x) + ∇∇·v_t(X_t(x))) ρ_0(x) dx dt,   (B.11)

where v_t(x) = b_t(x) - D_t(x)s_t(x).

B.4 SEQUENTIAL SBTM

Let us restate Proposition 3 for convenience:

Proposition 3 (Sequential SBTM). In the same setting as Proposition 2, let X_t(x) solve the first equation in (10) with v_t(x) = b_t(x) - D_t(x)s_t(x), and let s_t be obtained via

  min_{s_t} ∫_Ω [ |s_t(X_t(x))|²_{D_t(X_t(x))} + 2[∇·(D_t s_t)](X_t(x)) ] ρ_0(x) dx.   (SSBTM)

Then each minimizer s*_t of (SSBTM) satisfies D_t(x)s*_t(x) = D_t(x)∇ log ρ*_t(x), where ρ*_t is the solution to (FPE). Moreover, the map X*_t associated to s*_t is a transport map from ρ_0 to ρ*_t.

Proof. If X_t♯ρ_0 = ρ_t, then by definition we have the identity

  ∫_Ω [ |s_t(X_t(x))|²_{D_t(X_t(x))} + 2[∇·(D_t s_t)](X_t(x)) ] ρ_0(x) dx = ∫_Ω [ |s_t(x)|²_{D_t(x)} + 2∇·(D_t(x)s_t(x)) ] ρ_t(x) dx.   (B.12)

This means that the optimization problem in (SSBTM) is equivalent to

  min_{s_t} ∫_Ω [ |s_t(x)|²_{D_t(x)} + 2∇·(D_t(x)s_t(x)) ] ρ_t(x) dx.

After an integration by parts, this objective equals ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx - ∫_Ω |∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx, where the second term is independent of s_t. Hence all minimizers s*_t of this optimization problem satisfy D_t(x)s*_t(x) = D_t(x)∇ log ρ_t(x), and by (TE),

  ∂_t ρ_t(x) = -∇·(b_t(x)ρ_t(x) - D_t(x)∇ρ_t(x)),   (B.13)

which recovers (FPE), so that ρ_t(x) = ρ*_t(x) solves (FPE).
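The score-matching structure of (SSBTM) can be seen in a toy computation (our own illustration, specialized to D_t = I and one dimension). For samples from ρ_t = N(0, c) and the linear ansatz s(x) = -a x, the empirical objective E[|s(x)|² + 2 s'(x)] equals a² E[x²] - 2a, which is minimized at a ≈ 1/c, i.e. at s ≈ ∇ log ρ_t:

```python
import numpy as np

# Minimal sketch of the (SSBTM) objective with D = I in 1D: over the
# family s(x) = -a x, the loss E[|s|² + 2 ∇·s] is minimized when
# a ≈ 1/c, recovering the true score of ρ_t = N(0, c).
rng = np.random.default_rng(2)
c = 2.0
x = rng.normal(0.0, np.sqrt(c), size=50_000)  # samples from ρ_t

def loss(a):
    s = -a * x            # candidate score on the samples
    div_s = -a            # ∇·s, here constant in x
    return np.mean(s**2 + 2 * div_s)

grid = np.linspace(0.1, 2.0, 1000)
a_star = grid[np.argmin([loss(a) for a in grid])]
print(a_star)  # ≈ 1/c = 0.5
```

In the actual method, the grid search is replaced by gradient descent over the parameters of a neural network, and the samples are the self-consistent particles X_t(x_i).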

B.5 LEARNING FROM THE SDE

In this section, we show that learning from the SDE alone (i.e., avoiding the use of self-consistent samples) does not provide a guarantee on the accuracy of ρ_t. We have already seen in (9) that it is sufficient to control ∫_0^T ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ_t(x) dx dt in order to control D_KL(ρ_T | ρ*_T). The proof of Proposition 1 shows that control on

  ∫_0^T ∫_Ω |s_t(x) - ∇ log ρ*_t(x)|²_{D_t(x)} ρ*_t(x) dx dt,   (B.14)

as would be provided by training on samples from the SDE, does not ensure control on D_KL(ρ_T | ρ*_T). The following proposition shows that control on (B.14) does not guarantee control on D_KL(ρ*_T | ρ_T) either. An analogous result appeared in Lu et al. (2022) in the context of SBDM for generative modeling; here, we provide a self-contained treatment to motivate the use of the sequential SBTM procedure discussed in the main text.

Proposition B.2. Let ρ_t : Ω → R_{>0} solve (TE), and let ρ*_t : Ω → R_{>0} solve (FPE). Then the following equality holds:

  D_KL(ρ*_T | ρ_T) = -∫_0^T ∫_Ω |s_t(x) - ∇ log ρ*_t(x)|²_{D_t(x)} ρ*_t(x) dx dt + ∫_0^T ∫_Ω (s_t(x) - ∇ log ρ_t(x))^T D_t(x)(s_t(x) - ∇ log ρ*_t(x)) ρ*_t(x) dx dt.   (B.15)

Proposition B.2 shows that minimizing the error between s_t and ∇ log ρ*_t on samples of ρ*_t leaves a remainder term, because in general ∇ log ρ_t ≠ s_t. The proof shows that we may obtain the simple upper bound

  D_KL(ρ*_T | ρ_T) ≤ (1/2) ∫_0^T ∫_Ω |s_t(x) - ∇ log ρ*_t(x)|²_{D_t(x)} ρ*_t(x) dx dt + (1/2) ∫_0^T ∫_Ω |s_t(x) - ∇ log ρ_t(x)|²_{D_t(x)} ρ*_t(x) dx dt.   (B.16)

However, controlling the above quantity requires enforcing agreement between s_t and ∇ log ρ_t in addition to agreement between s_t and ∇ log ρ*_t; this is precisely the idea of SBTM. Proof.
By symmetry, we may replace ρ_t by ρ*_t in the proof of Proposition 1 to find

  d/dt D_KL(ρ*_t | ρ_t) = ∫_Ω (∇ log ρ*_t(x) - ∇ log ρ_t(x))^T D_t(x)(s_t(x) - ∇ log ρ*_t(x)) ρ*_t(x) dx.

Adding and subtracting s_t(x) in the first factor of the inner product and expanding gives

  d/dt D_KL(ρ*_t | ρ_t) = -∫_Ω |s_t(x) - ∇ log ρ*_t(x)|²_{D_t(x)} ρ*_t(x) dx + ∫_Ω (s_t(x) - ∇ log ρ_t(x))^T D_t(x)(s_t(x) - ∇ log ρ*_t(x)) ρ*_t(x) dx.   (B.17)

Integrating from 0 to T (with ρ_{t=0} = ρ*_{t=0} = ρ_0) completes the proof.

B.6 DENOISING LOSS

The following standard trick can be used to avoid computing the divergence of s_t(x) explicitly.

Lemma B.3. Given ξ ∼ N(0, I), we have

  lim_{α↓0} α^{-1} E[s_t(x + αξ) · ξ] = ∇·s_t(x),
  lim_{α↓0} α^{-1} E[s_t(x + ασ_t(x)ξ) · σ_t(x)ξ] = tr(D_t(x)∇s_t(x)).   (B.18)

Proof. We have

  α^{-1} s_t(x + αξ) · ξ = α^{-1} s_t(x) · ξ + (∇s_t(x)ξ) · ξ + o(1).   (B.19)

The expectation of the first term on the right-hand side of this equation is zero; the expectation of the second is tr ∇s_t(x) = ∇·s_t(x). Hence, taking the expectation of (B.19) and evaluating the result in the limit α ↓ 0 gives the first equation in (B.18). The second equation in (B.18) can be proven similarly using σ_t(x)σ_t(x)^T = D_t(x).

Replacing ∇·s_t(x) in (SSBTM) with the finite-α estimator on the left-hand side of the first equation in (B.18) gives the loss

  L[s_t] = E_ξ[ ∫_Ω ( |s_t(X_t(x))|² + (2/α) s_t(X_t(x) + αξ) · ξ ) ρ_0(x) dx ].   (B.20)

Evaluating the square term at the perturbed data point as well recovers the denoising loss of Vincent (2011):

  L[s_t] = E_ξ[ ∫_Ω |s_t(X_t(x) + αξ) + ξ/α|² ρ_0(x) dx ].   (B.21)

We can improve the accuracy of the approximation with a "doubling trick" that applies two draws of the noise of opposite sign to reduce the variance. This amounts to replacing the expectations in (B.18) with

  (2α)^{-1} E[s_t(x + αξ) · ξ - s_t(x - αξ) · ξ],
  (2α)^{-1} E[s_t(x + ασ_t(x)ξ) · σ_t(x)ξ - s_t(x - ασ_t(x)ξ) · σ_t(x)ξ],   (B.22)

whose limits as α ↓ 0 are ∇·s_t(x) and tr(D_t(x)∇s_t(x)), respectively. In practice, we observe that this antithetic estimator always helps stabilize training. Moreover, we observe that use of the denoising loss also stabilizes training, so that it is preferable to full computation of ∇·s_t(x) even when the latter is feasible.
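The estimator in (B.18) with the doubling trick (B.22) is easy to check numerically. The vector field below is our own illustrative choice, picked so that the exact divergence is known in closed form:

```python
import numpy as np

# Sketch of the denoising divergence estimator with the antithetic
# "doubling trick" (B.22):
#   ∇·s(x) ≈ (2α)⁻¹ E[(s(x + αξ) - s(x - αξ)) · ξ],  ξ ~ N(0, I).
# Illustrative test field: s(x) = (x₀², x₀x₁), so ∇·s(x) = 3x₀.
rng = np.random.default_rng(3)

def s(x):
    return np.stack([x[..., 0] ** 2, x[..., 0] * x[..., 1]], axis=-1)

x = np.array([1.0, 2.0])          # evaluation point; exact ∇·s = 3
alpha, m = 1e-2, 100_000
xi = rng.normal(size=(m, 2))
est = np.mean(np.sum((s(x + alpha * xi) - s(x - alpha * xi)) * xi,
                     axis=-1)) / (2 * alpha)
print(est)  # ≈ 3
```

In training, a single draw of ξ per sample and optimization step suffices; the antithetic pair cancels the O(α) bias and much of the variance, which is what stabilizes the loss in practice.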

C GAUSSIAN CASE

Here, we consider the case of an Ornstein-Uhlenbeck (OU) process for which the score can be written analytically, thereby providing a benchmark for our approach. The example treated in Section 4.1, with details in Appendix D.1, is a special case of such an OU process with additional symmetry arising from permutations of the particles. The SDE reads

  dX_t = -Γ_t(X_t - b_t) dt + √2 σ_t dW_t   (C.1)

where X_t ∈ R^d, Γ_t ∈ R^{d×d} is a time-dependent positive-definite tensor (not necessarily symmetric), b_t ∈ R^d is a time-dependent vector, and σ_t ∈ R^{d×d} is a time-dependent tensor. The Fokker-Planck equation associated with (C.1) is

  ∂_t ρ*_t(x) = -∇·( -Γ_t(x - b_t)ρ*_t(x) - D_t∇ρ*_t(x) )   (C.2)

where D_t = σ_tσ_t^T. Assuming that the initial condition is Gaussian, ρ_0 = N(m_0, C_0) with C_0 = C_0^T ∈ R^{d×d} positive-definite, the solution is Gaussian at all times t ≥ 0: ρ*_t = N(m_t, C_t), with m_t and C_t = C_t^T solutions to

  ṁ_t = -Γ_t(m_t - b_t),
  Ċ_t = -Γ_tC_t - C_tΓ_t^T + 2D_t.   (C.3)

This implies in particular that

  ∇ log ρ*_t(x) = -C_t^{-1}(x - m_t),   (C.4)

so that the probability flow equation for X_t and the equation for G_t in (10) read

  Ẋ_t(x) = (D_tC_t^{-1} - Γ_t)X_t(x) + Γ_tb_t - D_tC_t^{-1}m_t,
  Ġ_t(x) = (Γ_t^T - C_t^{-1}D_t)G_t(x),   (C.5)

with initial conditions X_0(x) = x and G_0(x) = ∇ log ρ_0(x) = -C_0^{-1}(x - m_0). It is easy to see that for x ∼ ρ_0 = N(m_0, C_0) we have X_t(x) ∼ ρ*_t = N(m_t, C_t), since by the first equation in (C.5) the mean and covariance of X_t satisfy (C.3). Similarly, when x ∼ ρ_0 = N(m_0, C_0), we have G_0(x) ∼ N(0, C_0^{-1}), so that G_t(x) ∼ N(0, C_t^{-1}), because the second equation in (C.5) is linear and hence preserves Gaussianity.
Moreover, E_0[G_t(x)] = 0 and B_t = B_t^T = E_0[G_t(x)G_t^T(x)] satisfies

  d/dt B_t = (Γ_t^T - C_t^{-1}D_t)B_t + B_t(Γ_t - D_tC_t^{-1}).   (C.6)

The solution to this equation is B_t = C_t^{-1}, since substituting this ansatz into (C.6) gives the equation for C_t^{-1} that we can deduce from (C.3):

  d/dt C_t^{-1} = -C_t^{-1}Ċ_tC_t^{-1} = C_t^{-1}Γ_t + Γ_t^TC_t^{-1} - 2C_t^{-1}D_tC_t^{-1}.   (C.7)

Note that if Γ_t = Γ, b_t = b, and D_t = D are all time-independent, then lim_{t→∞} ρ_t = N(m_∞, C_∞) with m_∞ = b and C_∞ the solution to the Lyapunov matrix equation

  ΓC_∞ + C_∞Γ^T = 2D.   (C.8)

This means that at long times the coefficients on the right-hand sides of (C.5) also settle onto constant values. However, X_t and G_t do not necessarily stop evolving; one situation where they too tend to fixed values is when the OU process is in detailed balance, i.e. when Γ = DA for some A = A^T ∈ R^{d×d} positive-definite. In that case, the solution to (C.8) is C_∞ = A^{-1}, and it is easy to see that at long times the right-hand sides of (C.5) tend to zero.

Remark C.1. This last conclusion is actually more generic than the simple OU case. For any SDE in detailed balance, i.e. one that can be written as

  dX_t = -D(X_t)∇U(X_t) dt + ∇·D(X_t) dt + √2 σ_t(X_t) dW_t,   (C.9)

where U : R^d → R is a C² potential such that Z = ∫_{R^d} e^{-U(x)} dx < ∞, we have that lim_{t→∞} ρ_t(x) = Z^{-1}e^{-U(x)}, and the corresponding flows X_t and G_t eventually stop as t → ∞. In this case, ρ_t follows gradient descent in W_2 over the energy

  E[ρ] = ∫_{R^d} (U(x) + log ρ(x)) ρ(x) dx.   (C.10)

The unique PDF minimizing this energy is Z^{-1}e^{-U(x)}, and as t → ∞, X_t converges towards a transport map between the initial ρ_0 and Z^{-1}e^{-U(x)}.
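The OU benchmark can be exercised numerically without any learning. The sketch below (our own illustration, with scalar values Γ = 1, b = 0, D = 1) integrates the moment equation (C.3) and the probability flow in (C.5) side by side and checks that the flow transports samples of N(0, C_0) onto samples of N(0, C_t):

```python
import numpy as np

# One-dimensional OU benchmark for (C.1)-(C.5) with illustrative
# parameters Γ = 1, b = 0, D = 1: the probability flow
#   Ẋ_t = (D C_t⁻¹ - Γ) X_t
# pushed through Euler steps should keep the sample variance equal
# to the covariance C_t solving Ċ = -ΓC - CΓᵀ + 2D.
rng = np.random.default_rng(4)
Gamma, D, C0, dt, T = 1.0, 1.0, 4.0, 1e-3, 1.0
x = rng.normal(0.0, np.sqrt(C0), size=5000)   # samples from ρ_0

C = C0
for _ in range(int(T / dt)):
    x += dt * (D / C - Gamma) * x             # probability flow update
    C += dt * (-2 * Gamma * C + 2 * D)        # moment ODE (C.3)

print(C, x.var())  # both ≈ 1 + 3 e⁻² ≈ 1.406
```

The same construction in d dimensions (with matrix exponentials or an ODE solver replacing the hand-rolled Euler loop) is what underlies the analytical comparisons in Appendix D.1.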

D EXPERIMENTAL DETAILS AND ADDITIONAL EXAMPLES

All numerical experiments were performed in jax using the dm-haiku package to implement the networks and the optax package for optimization.

D.1 HARMONICALLY INTERACTING PARTICLES IN A HARMONIC TRAP

Network architecture. Both the single-particle energy U_{θ_t,1} : R^d → R and the two-particle interaction energy U_{θ_t,2} : R^d × R^d → R are parameterized as single hidden-layer neural networks with the swish activation function (Ramachandran et al., 2017) and n_hidden = 100 hidden neurons. The hidden-layer biases are initialized to zero, while the hidden-layer weights are initialized from a truncated normal distribution with variance 1/fan_in, following the guidelines recommended in (Ioffe & Szegedy, 2015).

Optimization. The Adam (Kingma & Ba, 2017) optimizer is used with an initial learning rate of η = 10^{-4} and otherwise default settings. At time t = 0, the analytical relative loss

  L[s_0] = ∫ |s_0(x) - ∇ log ρ_0(x)|² ρ_0(x) dx / ∫ |∇ log ρ_0(x)|² ρ_0(x) dx   (D.1)

is minimized to a value less than 10^{-4} using knowledge of the initial condition ρ_0 = N(β_0, σ_0² I) with σ_0 = 0.25. In (D.1), the expectation with respect to ρ_0 is approximated over an initial set of samples x_j = (x_j^(1), x_j^(2), …, x_j^(N))^T with j = 1, …, n drawn from ρ_0. In the experiments, we set n = 100. We set the physical timestep ∆t = 10^{-3} and take n_opt_steps = 25 steps of Adam until the norm of the gradient is below gtol = 0.1.

Analytical moments. First define the mean, second moment, and covariance according to m_t^(i) = E[X_t^(i)], M_t^(ij) = E[X_t^(i)(X_t^(j))^T], and C_t^(ij) = M_t^(ij) - m_t^(i)(m_t^(j))^T. It is straightforward to show that the mean and covariance obey the dynamics

  ṁ_t^(i) = -(m_t^(i) - β_t) + (α/N) Σ_{k=1}^N (m_t^(i) - m_t^(k)),   (D.2)
  Ċ_t^(ij) = -2(1-α)C_t^(ij) + 2DIδ_{ij} - (α/N) Σ_{k=1}^N (C_t^(kj) + C_t^(ik)).   (D.3)

Because the particles are indistinguishable so long as they are initialized from a distribution that is symmetric with respect to permutations of their labeling, the moments will satisfy the ansatz

  m_t^(i) = m(t), i = 1, …, N,   (D.4)
  C_t^(ij) = C_d(t)δ_{ij} + C_o(t)(1 - δ_{ij}), i, j = 1, …, N.   (D.5)

The dynamics for the vector m : R_{≥0} → R^d, as well as the matrices C_d : R_{≥0} → R^{d×d} and C_o : R_{≥0} → R^{d×d}, can then be obtained from (D.2) and (D.3) as

  ṁ = β_t - m,
  Ċ_d = 2(α-1)C_d - (2α/N)(C_d + (N-1)C_o) + 2DI,
  Ċ_o = 2(α-1)C_o - (2α/N)(C_d + (N-1)C_o).

For a given β : R_{≥0} → R^d, these equations can be solved analytically in Mathematica as a function of time, giving the mean m_t = m(t) ⊗ 1_N ∈ R^{Nd} and covariance C_t = (C_d(t) - C_o(t)) ⊗ I_{N×N} + C_o(t) ⊗ 1_N 1_N^T ∈ R^{Nd×Nd}. Because the solution is Gaussian for all t, we then obtain the analytical solution to the Fokker-Planck equation ρ*_t = N(m_t, C_t) and the corresponding analytical score -∇ log ρ*_t(x) = C_t^{-1}(x - m_t).

Potential structure. Here, we show that the potential for this example lies in the class of potentials described by (15). From (D.5), we have a characterization of the structure of the covariance matrix C_t for the analytical potential U_t(x) = (1/2)(x - m_t)^T C_t^{-1}(x - m_t). In particular, C_t is block circulant, and hence is block-diagonalized by the roots of unity (the block discrete Fourier transform). That is, we may take a "block eigenvector" of the form ω_k = (I_{d×d}, I_{d×d}ρ^k, I_{d×d}ρ^{2k}, …, I_{d×d}ρ^{(N-1)k})^T with ρ = exp(-2πi/N), for k = 0, …, N-1. By direct calculation, this block diagonalization leads to two distinct block eigenmatrices,

  C_t = V diag(C_d(t) + (N-1)C_o(t), C_d(t) - C_o(t), …, C_d(t) - C_o(t)) V^{-1},

where V ∈ C^{Nd×Nd} denotes the matrix with block columns ω_k and diag denotes the block-diagonal matrix. The inverse matrix C_t^{-1} then must similarly have only two distinct block eigenmatrices, given by (C_d(t) + (N-1)C_o(t))^{-1} and (C_d(t) - C_o(t))^{-1}. By inversion of the block Fourier transform, we then find that (C_t^{-1})^(ij) = C̃_d δ_{ij} + C̃_o(1 - δ_{ij}) for some matrices C̃_d, C̃_o.
Hence, by direct calculation,

  (x - m_t)^T C_t^{-1} (x - m_t) = Σ_{i,j=1}^N (x^(i) - m_t^(i))^T (C_t^{-1})^(ij) (x^(j) - m_t^(j))
  = Σ_{i,j=1}^N (x^(i) - m(t))^T [C̃_d δ_{ij} + C̃_o(1 - δ_{ij})] (x^(j) - m(t))
  = Σ_{i=1}^N (x^(i) - m(t))^T C̃_d (x^(i) - m(t)) + Σ_{i≠j} (x^(i) - m(t))^T C̃_o (x^(j) - m(t)).

Above, we may identify the first term in the last line as Σ_{i=1}^N U_1(x^(i)) and the second term in the last line as (1/N) Σ_{i≠j} U_2(x^(i), x^(j)). Moreover, U_2(·,·) is symmetric with respect to its arguments.

Analytical entropy. For this example, the entropy can be computed analytically and compared directly to the learned numerical estimate. By definition,

  H_t = -∫_{R^{Nd}} log ρ_t(x) ρ_t(x) dx
  = -∫_{R^{Nd}} [ -(Nd/2) log(2π) - (1/2) log det C_t - (1/2)(x - m_t)^T C_t^{-1}(x - m_t) ] ρ_t(x) dx
  = (Nd/2)(log(2π) + 1) + (1/2) log det C_t.

Additional figures. Images of the learned velocity field and potential in comparison to the corresponding analytical solutions can be found in Figures D.1 and D.2, respectively; further detail can be found in the corresponding captions. We stress that the two-dimensional images represent single-particle slices of the high-dimensional functions.
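The reduced moment equations implied by the ansatz (D.4)-(D.5), together with the entropy formula above, can be exercised directly. The sketch below uses illustrative scalar values (d = 1, N = 5, α = 0.5, D = 1, constant trap β_t = 1), integrates the ODEs for m, C_d, C_o to stationarity, and checks the block-eigenvalue identity log det C_t = log(C_d + (N-1)C_o) + (N-1) log(C_d - C_o):

```python
import numpy as np

# Sketch for the symmetric-ansatz moments and the entropy
# H_t = (Nd/2)(log 2π + 1) + (1/2) log det C_t, with illustrative
# values d = 1, N = 5, α = 0.5, D = 1, and constant β_t = 1.
N, d, alpha, D, beta = 5, 1, 0.5, 1.0, 1.0
m, Cd, Co, dt = 0.0, 1.0, 0.0, 1e-3
for _ in range(10_000):  # integrate the reduced ODEs to T = 10
    trace = (2 * alpha / N) * (Cd + (N - 1) * Co)
    m, Cd, Co = (m + dt * (beta - m),
                 Cd + dt * (2 * (alpha - 1) * Cd - trace + 2 * D),
                 Co + dt * (2 * (alpha - 1) * Co - trace))

# log det C_t directly and via the two distinct block eigenvalues
C = (Cd - Co) * np.eye(N) + Co * np.ones((N, N))
logdet_direct = np.linalg.slogdet(C)[1]
logdet_eig = np.log(Cd + (N - 1) * Co) + (N - 1) * np.log(Cd - Co)
H = 0.5 * N * d * (np.log(2 * np.pi) + 1) + 0.5 * logdet_direct
print(m, Cd, Co)                  # stationary values of the ansatz
print(logdet_direct, logdet_eig)  # agree to machine precision
```

Repeating this with the time-dependent β_t of the experiments reproduces the analytical m_t and C_t used for the comparisons in Figure 1.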

D.2 SOFT SPHERES IN AN ANHARMONIC TRAP

Network architecture. Both potential terms U_{θ_t,1} and U_{θ_t,2} are modeled as fully connected networks with four hidden layers and n_hidden = 32 neurons in each layer. The initialization is identical to Appendix D.1.

Optimization and initialization

The Adam optimizer is used with an initial learning rate of η = 5×10^{-3} and otherwise default settings. At time t = 0, the loss (D.1) is minimized to a value less than 10^{-4} over n samples x_{0,j} ∼ N(β_0, σ_0² I) with σ_0 = 0.5 and n = 1000, similar to Appendix D.1. After this initial optimization, 100 steps of the SDE (17) are taken in artificial time τ with the physical time fixed at t = 0 to ensure that no spheres are overlapping at initialization. Past this initial stage, the denoising loss is used with a noise scale σ = 0.025. The loss is minimized by taking n_opt_steps = 25 steps of Adam until the norm of the gradient is below gtol = 0.5. The physical timestep is set to ∆t = 10^{-3}.

Additional figures. A depiction of the one-particle potential, estimated as the negative logarithm of the one-particle PDF obtained via kernel density estimation, can be found in Figure D.3 (for further details, see the caption).

D.3 AN ACTIVE SWIMMER

Here, we study an "active swimmer" model that describes the motion of a particle in an anharmonic trap with a preference to travel in a noisy direction. The system is two-dimensional, and is given by the stochastic differential equations for the position x and velocity v

  dx = (-x³ + v) dt,
  dv = -γv dt + √(2γD) dW_t.   (D.6)

Despite its low dimensionality, (D.6) exhibits convergence to a non-equilibrium statistical steady state in which the probability current j_t(x) = v_t(x)ρ_t(x) is non-zero.

Setup. We set γ = 0.1 and D = 1.0. Because noise only enters the system through the velocity variable v in (D.6), the score can be taken to be one-dimensional; this is equivalent to learning the score only in the range of the rank-deficient diffusion matrix. We parameterize the score directly as s_t : R² → R using a neural network with three hidden layers and n_hidden = 32 neurons per hidden layer.
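For reference, the SDE baseline for (D.6) is a plain Euler-Maruyama integration. The sketch below (our own, with the stated γ = 0.1 and D = 1) simulates an ensemble of swimmers; since v is an OU process, its stationary variance should equal D:

```python
import numpy as np

# Euler-Maruyama sketch of the active swimmer SDE (D.6), γ = 0.1, D = 1:
#   dx = (-x³ + v) dt,   dv = -γ v dt + √(2γD) dW.
# The velocity is an OU process with stationary variance D = 1.
rng = np.random.default_rng(5)
gamma, D, dt, n = 0.1, 1.0, 1e-2, 2000
x = rng.normal(0.0, 1.0, size=n)
v = rng.normal(0.0, 1.0, size=n)     # start at the OU stationary law
for _ in range(10_000):              # integrate to T = 100
    x += dt * (-x**3 + v)
    v += -gamma * v * dt + np.sqrt(2 * gamma * D * dt) * rng.normal(size=n)

print(v.var())  # ≈ D = 1
```

These are the trajectories against which the learned probability flow is compared; the flow replaces the noise term with the score-dependent drift in (5).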

Optimization and initialization

The network initialization is identical to the previous two experiments. The physical timestep is set to ∆t = 10^{-3}. The Adam optimizer is used with an initial learning rate of η = 10^{-4}. At time t = 0, the loss (D.1) is minimized to a tolerance of 10^{-4} over n = 5000 samples drawn from an initial distribution N(0, σ_0² I) with σ_0 = 1. The denoising loss is used with a noise scale σ = 0.05, taking n_opt_steps = 25 steps of Adam until the norm of the gradient is below gtol = 0.5.

Results. Depictions of the sample trajectories {x_i(t), v_i(t)}_{i=1}^n in phase space are shown in Figure D.6. The trajectories demonstrate that the distribution of the learned samples qualitatively matches the distribution of the SDE samples, while the noise-free system becomes increasingly, and excessively, compressed with time. The learned velocity field captures a non-zero rotational steady-state current that qualitatively matches the current of the SDE, but produces more interpretable sample trajectories. A movie of the motion of the samples (x_i, v_i) in phase space can be seen here. The movie highlights convergence of the learned solution to a non-zero steady-state probability current that qualitatively matches that of the SDE; by contrast, the noise-free system becomes increasingly concentrated with time, failing to accurately capture the current. Columns denote solution type and rows denote snapshots in time (t = 0, 0.5, 1.5, 6.0, respectively). Similar to the samples presented in Figure D.6, the KDE reveals bimodality in the probability density due to the presence of the particle velocity field. The noise-free system becomes too concentrated and does not accurately capture the shape of the SDE and learned solutions, while the SDE and learned solutions are nearly identical.



Figure 1: A system of N = 50 particles in a harmonic trap with a harmonic interaction: (A) A single sample trajectory. The mean of the trap β_t is shown with a red star, while past positions of the particles are indicated by a fading trajectory. The noise-free system (right) is too concentrated, and fails to capture the variance of the stochastic dynamics (center). The learned system (left) accurately captures the variance, and in addition generates physically interpretable trajectories for the particles. (B) Quantitative comparison to the analytical solution. The learned solution matches the entropy production rate, score, and covariance well. Movie can be found here.

Figure 2: A system of N = 5 soft spheres in an anharmonic trap: (A) Example particle trajectories in the case of a rotating trap. The trap position is shown with a red star. (B/C) A single component of the covariance of the samples, in the case of a rotating trap (B) and a linearly oscillating trap (C). The learned system agrees well with the SDE, while the noise-free system under-predicts the moments. (D/E) Prediction of the entropy production rate for a rotating trap (D) and a linearly oscillating trap (E). The main figure depicts the prediction from SBTM, while the inset depicts the prediction when learning on SDE samples. SBTM captures the temporal evolution of the entropy production rate, while learning on the SDE is initially offset and later divergent. Movies of the circular and linear motion can be viewed here and here, respectively.



Figure D.1: A system of N = 50 harmonically interacting particles in a harmonic trap: slices of the high-dimensional velocity field. Cross sections of the velocity field for N = 50 harmonically interacting particles in a moving harmonic trap. Columns depict the learned, analytical, noise-free, and error between the learned and analytical velocity fields, respectively. Rows indicate different time points, corresponding to t = 1.25, 2.5, 3.75, and 5.0, respectively. Each velocity field is plotted as a function of a single particle's coordinate (denoted as x and y); all other particle coordinates are fixed to be at the location of a sample. Color depicts the magnitude of the velocity field while arrows indicate the direction. Learned, analytical, and noise-free share a colorbar for direct comparison; the error occurs on a different scale and is plotted with its own colorbar. White circles in the error plot indicate samples projected onto the xy plane; locations of low error correlate well with the presence of samples.

Figure D.7 depicts the learned velocity field v_t(x) = b_t(x) - Ds_t(x). The figure highlights the structure of the steady-state current, which contains an elliptical region with closed orbits. The elliptical region remains roughly fixed in size as time proceeds, while the orbits of the noise-free system in Figure D.8 become increasingly compressed. Kernel density estimation demonstrates that an estimated PDF for the samples of the learned solution qualitatively matches that of the SDE (Figure D.9).

Figure D.7: An active swimmer: learned velocity. The learned velocity field (right-hand side of (5)) for the active swimmer example. Color indicates the magnitude of the velocity field computed on a grid, while arrows indicate the direction of the velocity field on samples. Time progresses in the grid along columns from the top-left to the bottom-right image (t = 0.75k with k the image number, zero-indexed). The learned velocity field converges to closed streamlines that enforce a non-zero steady-state current.

A SOME BASIC FORMULAS

Here, we derive some results linking the solution of the transport equation (TE) with that of the probability flow equation (5).

A.1 PROBABILITY DENSITY AND PROBABILITY CURRENT

We begin with a lemma.

Lemma A.1. Let ρ_t : Ω → R_{≥0} satisfy the transport equation

  ∂_t ρ_t(x) = -∇·(v_t(x)ρ_t(x)).   (A.1)

Assume that v_t(x) is C² in both t and x for t ≥ 0 and globally Lipschitz in x. Then, given any t, t' ≥ 0, the solution of (A.1) satisfies

  ρ_t(X_{t',t}(x)) = ρ_{t'}(x) exp( -∫_{t'}^t ∇·v_τ(X_{t',τ}(x)) dτ ),   (A.2)

where X_{τ,t} is the probability flow solution to (5). In addition, given any test function ϕ : Ω → R, we have

  ∫_Ω ϕ(x) ρ_t(x) dx = ∫_Ω ϕ(X_{t',t}(x)) ρ_{t'}(x) dx.   (A.3)

In words, Lemma A.1 states that an evaluation of the PDF ρ_t at a given point x may be obtained by evolving the probability flow equation (5) backwards to some earlier time t' to find the point x' that evolves to x at time t, assuming that ρ_{t'}(x') is available. In particular, for t' = 0, we obtain

  ρ_t(X_{0,t}(x)) = ρ_0(x) exp( -∫_0^t ∇·v_τ(X_{0,τ}(x)) dτ )   (A.4)

and

  ∫_Ω ϕ(x) ρ_t(x) dx = ∫_Ω ϕ(X_{0,t}(x)) ρ_0(x) dx.   (A.5)

Since the probability current is by definition j_t(x) = v_t(x)ρ_t(x), using (A.4) to express ρ_t(x) also gives the following equation for the current:

  j_t(X_{0,t}(x)) = v_t(X_{0,t}(x)) ρ_0(x) exp( -∫_0^t ∇·v_τ(X_{0,τ}(x)) dτ ).

Proof. The assumed C² and globally Lipschitz conditions on v_t guarantee global existence (for t ≥ 0) and uniqueness of the solution to (5). Differentiating ρ_t(X_{t',t}(x)) with respect to t and using (5) and (A.1), we deduce

  d/dt ρ_t(X_{t',t}(x)) = -∇·v_t(X_{t',t}(x)) ρ_t(X_{t',t}(x)).

Integrating this linear equation, evaluating the result at x = X_{t,t'}(x), and using the group properties (i) X_{t',t}(X_{t,t'}(x)) = x and (ii) X_{t',τ}(X_{t,t'}(x)) = X_{t,τ}(x) gives (A.2). Equation (A.3) can be derived by using (A.2) to express ρ_t(x) in the integral on the left-hand side, changing the integration variable x → X_{t',t}(x), and noting that the Jacobian of this change of variables, exp( ∫_{t'}^t ∇·v_τ(X_{t',τ}(x)) dτ ), exactly cancels the exponential factor in (A.2). The result is the integral on the right-hand side of (A.3).

Lemma A.1 also holds locally in time for any v_t(x) that is C² in both t and x. In particular, it holds locally if we set s_t(x) = ∇ log ρ_t(x) and if we assume that ρ_0(x) is (i) positive everywhere on Ω and (ii) C³ in x. In this case, (A.1) is the Fokker-Planck equation (FPE), and (A.2) holds for the solution to that equation.
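Identity (A.4) can be verified numerically on a case with a known solution. The sketch below (our own illustration) uses the 1D heat equation, b = 0 and constant D, for which ρ_t = N(0, c_t) with ċ_t = 2D and v_t(x) = -D ∂_x log ρ_t(x) = D x / c_t; it integrates the flow and the divergence integral jointly and compares the two sides of (A.4) at a single point:

```python
import numpy as np

# Numerical check of (A.4) on the 1D heat flow: ρ_t = N(0, c_t) with
# ċ = 2D and v_t(x) = D x / c_t. We verify pointwise that
#   ρ_t(X_{0,t}(x)) = ρ_0(x) exp(-∫₀ᵗ ∇·v_τ(X_{0,τ}(x)) dτ).
def gauss(x, c):
    return np.exp(-x**2 / (2 * c)) / np.sqrt(2 * np.pi * c)

D, c0, dt, T = 1.0, 0.5, 1e-4, 1.0
x0 = 0.7                      # arbitrary evaluation point
x, c, I = x0, c0, 0.0         # I accumulates ∫ ∇·v dτ along the flow
for _ in range(int(T / dt)):
    x += dt * D * x / c       # flow map X_{0,t}(x0)
    I += dt * D / c           # ∇·v = D / c_t, independent of x here
    c += dt * 2 * D           # exact variance evolution

lhs = gauss(x, c)             # ρ_t evaluated on the transported point
rhs = gauss(x0, c0) * np.exp(-I)
print(lhs, rhs)               # agree up to O(dt)
```

The same bookkeeping, with the learned score in place of the analytical one, is what gives SBTM pointwise access to the PDF and the probability current along trajectories.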

