BUILDING NORMALIZING FLOWS WITH STOCHASTIC INTERPOLANTS

Abstract

A generative model based on a continuous-time normalizing flow between any pair of base and target probability densities is proposed. The velocity field of this flow is inferred from the probability current of a time-dependent density that interpolates between the base and the target in finite time. Unlike conventional normalizing flow inference methods based on the maximum likelihood principle, which require costly backpropagation through ODE solvers, our interpolant approach leads to a simple quadratic loss for the velocity itself, expressed in terms of expectations that are readily amenable to empirical estimation. The flow can be used to generate samples from either the base or target, and to estimate the likelihood at any time along the interpolant. In addition, the flow can be optimized to minimize the path length of the interpolant density, thereby paving the way for building optimal transport maps. In situations where the base is a Gaussian density, we also show that the velocity of our normalizing flow can be used to construct a diffusion model to sample the target as well as estimate its score. However, our approach shows that we can bypass this diffusion completely and work at the level of the probability flow with greater simplicity, opening an avenue for methods based solely on ordinary differential equations as an alternative to those based on stochastic differential equations. Benchmarking on density estimation tasks illustrates that the learned flow can match and surpass conventional continuous flows at a fraction of the cost, and compares well with diffusions on image generation on CIFAR-10 and ImageNet 32×32. The method scales ab-initio ODE flows to previously unreachable image resolutions, demonstrated up to 128×128.

1. INTRODUCTION

Contemporary generative models have primarily been designed around the construction of a map between two probability distributions that transforms samples from the first into samples from the second. While progress has been made from various angles with tools such as implicit maps (Goodfellow et al., 2014; Brock et al., 2019) and autoregressive maps (Menick & Kalchbrenner, 2019; Razavi et al., 2019; Lee et al., 2022), we focus on the case where the map has a clear associated probability flow. Advances in this domain, namely from flow and diffusion models, have arisen through the introduction of algorithms or inductive biases that make learning this map, and the Jacobian of the associated change of variables, more tractable. The challenge is to choose what structure to impose on the transport to best reach a complex target distribution from a simple one used as base, while maintaining computational efficiency. In the continuous-time perspective, this problem can be framed as the design of a time-dependent map X_t(x) with t ∈ [0, 1], which functions as the push-forward of the base distribution at time t = 0 onto some time-dependent distribution that reaches the target at time t = 1. Assuming that these distributions have densities supported on Ω ⊆ R^d, say ρ_0 for the base and ρ_1 for the target, this amounts to constructing X_t : Ω → Ω such that

if x ∼ ρ_0 then X_t(x) ∼ ρ_t for some density ρ_t such that ρ_{t=0} = ρ_0 and ρ_{t=1} = ρ_1.  (1)

[Figure 1: The density ρ_t(x) produced by the stochastic interpolant based on (5) between a standard Gaussian density and a Gaussian mixture density with three modes. Also shown in white are the flow lines of the map X_t(x) our method produces.]

One convenient way to represent this time-continuous map is to define it as the flow associated with the ordinary differential equation (ODE)

Ẋ_t(x) = v_t(X_t(x)), X_{t=0}(x) = x,  (2)

where the dot denotes the derivative with respect to t and v_t(x) is the velocity field governing the transport.
This is equivalent to saying that the probability density function ρ_t(x), defined as the push-forward of the base ρ_0(x) by the map X_t, satisfies the continuity equation (see e.g. Villani (2009); Santambrogio (2015) and Appendix A)

∂_t ρ_t + ∇·(v_t ρ_t) = 0 with ρ_{t=0} = ρ_0 and ρ_{t=1} = ρ_1,  (3)

and the inference problem becomes to estimate a velocity field such that (3) holds. Here we propose a solution to this problem based on introducing a time-differentiable interpolant I_t : Ω × Ω → Ω such that

I_{t=0}(x_0, x_1) = x_0 and I_{t=1}(x_0, x_1) = x_1.  (4)

A useful instance of such an interpolant that we will employ is

I_t(x_0, x_1) = cos(½πt) x_0 + sin(½πt) x_1,  (5)

though we stress that the framework we propose applies to any I_t(x_0, x_1) satisfying (4) under mild additional assumptions on ρ_0, ρ_1, and I_t specified below. Given this interpolant, we then construct the stochastic process x_t by sampling x_0 from ρ_0 and x_1 from ρ_1 independently, and passing them through I_t:

x_t = I_t(x_0, x_1), x_0 ∼ ρ_0, x_1 ∼ ρ_1 independent.  (6)

We refer to the process x_t as a stochastic interpolant. Under this paradigm, we make the following key observations as our main contributions in this work:

• The probability density ρ_t(x) of x_t connecting the two densities, henceforth referred to as the interpolant density, satisfies (3) with a velocity v_t(x) which is the unique minimizer of a simple quadratic objective. This result is the content of Proposition 1 below, and it can be leveraged to estimate v_t(x) in a parametric class (e.g. using deep neural networks) to construct a generative model, which we call InterFlow, through the solution of the probability flow equation (2).

• By specifying an interpolant density, the method therefore separates the task of minimizing the objective from that of discovering a path between the base and target densities.
This is in contrast with conventional maximum likelihood (MLE) training of flows, where one is forced to couple the choice of path in the space of measures to the maximization of the objective.

• We show that the Wasserstein-2 (W_2) distance between the target density ρ_1 and the density ρ̂_1 obtained by transporting ρ_0 using an approximate velocity v̂_t in (2) is controlled by our objective function. We also show that the value of the objective on v̂_t during training can be used to check convergence of this learned velocity field towards the exact v_t.

• We show that our approach can be generalized to shorten the path length of the interpolant density and optimize the transport by additionally maximizing our objective over the interpolant I_t(x_0, x_1) and/or adjustable parameters in the base density ρ_0.

• By choosing ρ_0 to be a Gaussian density and using (5) as interpolant, we show that the score of the interpolant density, ∇ log ρ_t, can be explicitly related to the velocity field v_t. This allows us to draw a connection between our approach and score-based diffusion models, providing theoretical groundwork for future exploration of this duality.

• We demonstrate the feasibility of the method on toy and high-dimensional tabular datasets, and show that the method matches or supersedes conventional ODE flows at lower cost, as it avoids the need to backpropagate through ODE solves. We demonstrate our approach on image generation for CIFAR-10 and ImageNet 32×32 and show that it scales well to larger sizes, e.g. on the 128×128 Oxford flowers dataset.

Table 1: Description of the qualities of continuous-time transport methods defined by a stochastic (S) or deterministic (D) process.

    FFJORD               D     ✓   ✗   ✓   ✓
    ScoreFlows           S/D   ✗   ✓   ✓   ✗
    Schrödinger Bridge   S     ✓   ✗   ✗   ✓
    InterFlow (Ours)     D     ✓   ✓   ✓   ✓

1.1. RELATED WORK

Early works on exploiting transport maps for generative modeling go back at least to Chen & Gopinath (2000), which focuses on normalizing a dataset to infer its likelihood.
This idea was brought closer to contemporary use cases through the work of Tabak & Vanden-Eijnden (2010) and Tabak & Turner (2013), which set out to map expressly between densities using simple transport functions inferred through maximum likelihood estimation (MLE). These transformations were learned in sequence via a greedy procedure. We detail below how this paradigm has evolved in the case where the map is represented by a neural network and optimized accordingly.

Discrete and continuous flows. Maps built from compositions of invertible networks can be optimized through maximum likelihood estimation at the cost of limiting the expressive power of the representation, as the Jacobian of the map must be kept simple to calculate the likelihood. Extending this to the continuous case allowed the Jacobian to be unstructured yet still estimable through trace estimation techniques (Chen et al., 2018; Grathwohl et al., 2019; Hutchinson, 1989). Yet, learning this map through MLE requires costly backpropagation through numerical integration. Regularizing the path can reduce the number of solver calls (Finlay et al., 2020; Onken et al., 2021), though this does not alleviate the main structural challenge of the optimization. Our work also uses a continuous map X_t but allows for direct estimation of the underlying velocity. While recent work has also considered simulation-free training by fitting a velocity field, these works present scalability issues (Rozen et al., 2021) and biased optimization (Ben-Hamu et al., 2022), and are limited to manifolds. Moreover, Rozen et al. (2021) relies on interpolating the probability measures directly, which can lead to unstable velocities.

Score-based flows. Adjacent research has made use of diffusion processes, commonly the Ornstein-Uhlenbeck (OU) process, to connect the target ρ_1 to the base ρ_0.
In this case the transport is governed by a stochastic differential equation (SDE) that is evolved for infinite time, and the challenge of learning a generative model can be framed as fitting the reverse time evolution of the SDE from Gaussian noise back to ρ_1 (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b). Doing so indirectly learns the velocity field by means of learning the score function ∇ log ρ_t(x), using the Fisher divergence instead of the MLE objective. While this approach has shown great promise for modeling high-dimensional distributions (Rombach et al., 2022; Hoogeboom et al., 2022), particularly in the case of text-to-image generation (Ramesh et al., 2022; Saharia et al., 2022), there is an absence of theoretical motivation for the SDE-flow framework and the complexity it induces. Namely, the SDE must evolve for infinite time to connect the distributions, the parameterization of the time steps remains heuristic (Xiao et al., 2022), and the necessity of the noise, as well as of the score, is not entirely apparent (Bansal et al., 2022; Lu et al., 2022). In particular, while the objective used in score-based diffusion models was shown to bound the Kullback-Leibler divergence (Song et al., 2021a), actual calculation of the likelihood requires one to work with the ODE probability flow associated with the SDE. This motivates further research into effective, ODE-driven approaches to learning the map. Our approach can be viewed as an alternative to score-based diffusion models in which the ODE velocity is learned through the interpolant x_t rather than an OU process, leading to greater simplicity and flexibility (as we can connect any two densities exactly over a finite time interval).

Bridge-based methods. Heng et al. (2021) propose to learn Schrödinger bridges, an entropy-regularized version of the optimal transportation plan connecting two densities in finite time, using the framework of score-based diffusion.
Similarly, Peluchetti (2022) investigates the use of bridge processes, i.e. SDEs whose positions are constrained at both the initial and final times, to perform exact density interpolation in finite time. A key difference between these approaches and ours is that they give diffusion-based models, whereas our method builds a probability flow ODE directly, using a quadratic loss for its velocity, which is simpler and shown here to be scalable.

Interpolants. Coincident works by Liu et al. (2022) and Lipman et al. (2022) derive an optimization analogous to ours, with a focus on straight interpolants, also contrasting it with score-based methods. Liu et al. (2022) describe an iterative way of rectifying the interpolant path, which can be shown to arrive at an optimal transport map when the procedure is repeated ad infinitum (Liu, 2022). We also propose a solution to the problem of optimal transport that involves optimizing our objective over the stochastic interpolant.

1.2. NOTATIONS AND ASSUMPTIONS

We assume that the base and the target distributions are both absolutely continuous with respect to the Lebesgue measure on R^d, with densities ρ_0 and ρ_1, respectively. We do not require these densities to be positive everywhere on R^d, but we assume that ρ_0(x) and ρ_1(x) are continuously differentiable in x. Regarding the interpolant I_t : R^d × R^d → R^d, we assume that it is surjective for all t ∈ [0, 1] and satisfies (4). We also assume that I_t(x_0, x_1) is continuously differentiable in (t, x_0, x_1), and that it is such that

E[|∂_t I_t(x_0, x_1)|²] < ∞.  (7)

A few additional technical assumptions on ρ_0, ρ_1, and I_t are listed in Appendix B. Given any function f_t(x_0, x_1), we denote by

E[f_t(x_0, x_1)] = ∫_0^1 ∫_{R^d×R^d} f_t(x_0, x_1) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 dt  (8)

its expectation over t, x_0, and x_1 drawn independently from the uniform density on [0, 1], ρ_0, and ρ_1, respectively. We use ∇ to denote the gradient operator.
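Expectations of the form (8) are straightforward to estimate empirically. As a minimal sketch (the Gaussian base and the shifted-Gaussian target are illustrative assumptions, not the paper's experiments), the following draws the triples (t, x_0, x_1), forms the trigonometric interpolant (5), checks the boundary conditions (4), and Monte Carlo estimates an expectation of the form (8):

```python
import numpy as np

rng = np.random.default_rng(0)

def interp(t, x0, x1):
    # Trigonometric interpolant I_t(x0, x1) from Eq. (5)
    return np.cos(0.5 * np.pi * t) * x0 + np.sin(0.5 * np.pi * t) * x1

n = 100_000
x0 = rng.standard_normal(n)          # x0 ~ rho_0 = N(0, 1)
x1 = 2.0 + rng.standard_normal(n)    # x1 ~ rho_1 = N(2, 1), illustrative target
t = rng.uniform(0.0, 1.0, size=n)    # t ~ U([0, 1]), as in the expectation (8)

xt = interp(t, x0, x1)               # samples of the stochastic interpolant (6)

# Boundary conditions (4): I_0 = x0 and I_1 = x1
assert np.allclose(interp(0.0, x0, x1), x0)
assert np.allclose(interp(1.0, x0, x1), x1)

# Empirical estimate of an expectation of the form (8), here f_t = |I_t|^2;
# analytically E[x_t^2] = 1 + 4 sin^2(pi t / 2), which averages to 3 over t.
est = np.mean(xt ** 2)
```

Since x_t is a Gaussian with unit variance and mean 2 sin(½πt) at each t in this toy case, the estimate should be close to 3.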

2. STOCHASTIC INTERPOLANTS AND ASSOCIATED FLOWS

Our first main theoretical result can be phrased as follows:

Proposition 1. The stochastic interpolant x_t defined in (6) with I_t(x_0, x_1) satisfying (4) has a probability density ρ_t(x) that satisfies the continuity equation (3) with a velocity v_t(x) which is the unique minimizer over v̂_t(x) of the objective

G(v̂) = E[ |v̂_t(I_t(x_0, x_1))|² − 2 ∂_t I_t(x_0, x_1) · v̂_t(I_t(x_0, x_1)) ].  (9)

In addition, the minimum value of this objective is given by

G(v) = −E[|v_t(I_t(x_0, x_1))|²] = −∫_0^1 ∫_{R^d} |v_t(x)|² ρ_t(x) dx dt > −∞.  (10)

Proposition 1 is proven in Appendix B under Assumption B.1. As this proof shows, the first statement of the proposition remains true if the expectation over t is performed using any probability density ω(t) > 0, which may prove useful in practice. We now describe some primary facts resulting from this proposition, itemized for clarity:

• The objective G(v̂) is given in terms of an expectation that is amenable to empirical estimation given samples t, x_0, and x_1 drawn from U([0, 1]), ρ_0, and ρ_1. Below, we will exploit this property to propose a numerical scheme to perform the minimization of G(v̂).

• While the minimizer of the objective G(v̂) is not available analytically in general, a notable exception is when ρ_0 and ρ_1 are Gaussian mixture densities and we use the trigonometric interpolant (5) or generalizations thereof, as discussed in Appendix C.

• The minimal value in (10) achieved by the objective implies that a necessary (albeit not sufficient) condition for v̂ = v is

G(v̂) + E[|v̂_t(I_t(x_0, x_1))|²] = 0.  (11)

In our numerical experiments we will monitor this quantity. This minimal value also suggests maximizing G(v) = min_v̂ G(v̂) with respect to additional control parameters (e.g. the interpolant) to shorten the W_2 length of the path {ρ_t(x) : t ∈ [0, 1]}. In Appendix D, we show that this procedure achieves optimal transport under minimal assumptions.
• The last bound in (10), which is proven in Lemma B.2, implies that the path length is always finite, even if it is not the shortest possible.

Let us now provide some intuitive derivation of the statements of Proposition 1:

Continuity equation. By definition of the stochastic interpolant x_t, we can express its density ρ_t(x) using the Dirac delta distribution as

ρ_t(x) = ∫_{R^d×R^d} δ(x − I_t(x_0, x_1)) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.  (12)

Since I_{t=0}(x_0, x_1) = x_0 and I_{t=1}(x_0, x_1) = x_1 by definition, we have ρ_{t=0} = ρ_0 and ρ_{t=1} = ρ_1, which means that ρ_t(x) satisfies the boundary conditions at t = 0, 1 in (3). Differentiating (12) in time using the chain rule gives

∂_t ρ_t(x) = −∫_{R^d×R^d} ∂_t I_t(x_0, x_1) · ∇δ(x − I_t(x_0, x_1)) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 ≡ −∇·j_t(x),  (13)

where we defined the probability current

j_t(x) = ∫_{R^d×R^d} ∂_t I_t(x_0, x_1) δ(x − I_t(x_0, x_1)) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.  (14)

Therefore, if we introduce the velocity v_t(x) via

v_t(x) = j_t(x)/ρ_t(x) if ρ_t(x) > 0, and v_t(x) = 0 otherwise,  (15)

we see that we can write (13) as the continuity equation in (3).

Variational formulation. Using the expressions (12) and (14) for ρ_t(x) and j_t(x) shows that we can write the objective (9) as

G(v̂) = ∫_0^1 ∫_{R^d} [ |v̂_t(x)|² ρ_t(x) − 2 v̂_t(x) · j_t(x) ] dx dt.  (16)

Since ρ_t(x) and j_t(x) have the same support, the minimizer of this quadratic objective is unique for all (t, x) where ρ_t(x) > 0 and is given by (15).

Minimum value of the objective. Let v_t(x) be given by (15) and consider the alternative objective

H(v̂) = ∫_0^1 ∫_{R^d} |v̂_t(x) − v_t(x)|² ρ_t(x) dx dt = ∫_0^1 ∫_{R^d} [ |v̂_t(x)|² ρ_t(x) − 2 v̂_t(x) · j_t(x) + |v_t(x)|² ρ_t(x) ] dx dt,  (17)

where we expanded the square and used the identity v_t(x) ρ_t(x) = j_t(x) to get the second equality.
The objective G(v̂) given in (16) can be written in terms of H(v̂) as

G(v̂) = H(v̂) − ∫_0^1 ∫_{R^d} |v_t(x)|² ρ_t(x) dx dt = H(v̂) − E[|v_t(I_t(x_0, x_1))|²].  (18)

The equality in (10) follows by evaluating (18) at v̂_t(x) = v_t(x) and using H(v) = 0.

Optimality gap. The argument above shows that we can use E[|v̂_t(I_t(x_0, x_1)) − ∂_t I_t(x_0, x_1)|²] as an alternative objective to G(v̂), since their first variations coincide. However, it should be stressed that this quadratic objective remains strictly positive at v̂ = v in general, so it offers no baseline measure of convergence. To see why, complete the square in G(v̂) to write (18) as

H(v̂) = E[|v̂_t(I_t) − ∂_t I_t|²] − E[|∂_t I_t|²] + E[|v_t(I_t)|²] ≥ −E[|∂_t I_t|²] + E[|v_t(I_t)|²],  (19)

where we used the shorthand notation I_t = I_t(x_0, x_1) and ∂_t I_t = ∂_t I_t(x_0, x_1). Evaluating this inequality at v̂ = v using H(v) = 0, we deduce

E[|∂_t I_t|²] ≥ E[|v_t(I_t)|²].  (20)

However, we stress that this inequality is not saturated, i.e. E[|v_t(I_t)|²] ≠ E[|∂_t I_t|²] in general (see Remark B.3). Hence

E[|v_t(I_t) − ∂_t I_t|²] = min_v̂ E[|v̂_t(I_t) − ∂_t I_t|²] = min_v̂ G(v̂) + E[|∂_t I_t|²] = −E[|v_t(I_t)|²] + E[|∂_t I_t|²] ≥ 0.  (21)

Optimizing the transport. It is natural to ask whether our stochastic interpolant construction can be amended or generalized to derive optimal maps. Here, we state a positive answer to this question by showing that maximizing the objective G(v) in (9) with respect to the interpolant yields a solution to the optimal transport problem in the framework of Benamou & Brenier (2000). This is proven in Appendix D, where we also discuss how to shorten the path length in density space by optimizing adjustable parameters in the base density ρ_0. Experiments are given in Appendix H.
Since our primary aim here is to construct a map T = X_{t=1} that pushes forward ρ_0 onto ρ_1, but not necessarily to identify the optimal one, we leave the full investigation of the consequences of these results for future work, but state the proposition explicitly here. In their seminal paper, Benamou & Brenier (2000) showed that finding the optimal map requires solving the minimization problem

min_{(v̂, ρ̂)} ∫_0^1 ∫_{R^d} |v̂_t(x)|² ρ̂_t(x) dx dt subject to: ∂_t ρ̂_t + ∇·(v̂_t ρ̂_t) = 0, ρ̂_{t=0} = ρ_0, ρ̂_{t=1} = ρ_1.  (22)

The minimizing coupling (ρ*_t, ϕ*_t) for the gradient field v*_t(x) = ∇ϕ*_t(x) is unique and satisfies

∂_t ρ*_t + ∇·(∇ϕ*_t ρ*_t) = 0, ρ*_{t=0} = ρ_0, ρ*_{t=1} = ρ_1, ∂_t ϕ*_t + ½|∇ϕ*_t|² = 0.  (23)

In the interpolant flow picture, ρ_t(x) is fixed by the choice of interpolant I_t(x_0, x_1), and in general ρ_t(x) ≠ ρ*_t(x). Because the value of the objective in (22) is equal to minus the minimum of G(v̂) given in (10), a natural suggestion to optimize the transport is to maximize this minimum over the interpolant. Under some assumptions on the Benamou-Brenier density ρ*_t(x) solving (23), this procedure works. We show this through the use of interpolable densities, as discussed in Mikulincer & Shenfeld (2022) and defined in D.1.

Proposition 2. Assume that (i) the optimal density function ρ*_t(x) minimizing (22) is interpolable and (ii) (23) has a classical solution. Consider the max-min problem

max_I min_v̂ G(v̂),  (24)

where G(v̂) is the objective in (9) and the maximum is taken over interpolants satisfying (4). Then a maximizer of (24) exists, and any maximizer I*_t(x_0, x_1) is such that the probability density function of x*_t = I*_t(x_0, x_1), with x_0 ∼ ρ_0 and x_1 ∼ ρ_1 independent, is the optimal ρ*_t(x), the minimizing velocity is v*_t(x) = ∇ϕ*_t(x), and the pair (ρ*_t(x), ϕ*_t(x)) satisfies (23). The proof of Proposition 2 is given in Appendix D, along with further discussion.
Proposition 2 relies on Lemma D.3, which reformulates (22) in a way that shows this problem is equivalent to the max-min problem in (24) for interpolable densities.
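To make Proposition 1 concrete, the following sketch estimates G(v̂) from samples in a toy Gaussian-to-Gaussian case with a linear interpolant, where the minimizing velocity (15) is available in closed form (the specific densities and interpolant are illustrative assumptions, not the paper's experiments). The exact velocity attains the negative minimum value predicted by (10), which evaluates analytically to −(2 − π/2) ≈ −0.43 here, while the zero velocity gives G = 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
x0 = rng.standard_normal(n)   # rho_0 = N(0, 1)
x1 = rng.standard_normal(n)   # rho_1 = N(0, 1): toy case with known velocity
t = rng.uniform(size=n)

# Linear interpolant I_t = (1 - t) x0 + t x1 and its time derivative
xt = (1.0 - t) * x0 + t * x1
dIdt = x1 - x0

def G(vhat):
    # Empirical version of the quadratic objective (9)
    v = vhat(t, xt)
    return np.mean(v ** 2 - 2.0 * dIdt * v)

# For this Gaussian-to-Gaussian case the minimizer (15) is known exactly:
# v_t(x) = x (2t - 1) / ((1 - t)^2 + t^2)
v_exact = lambda t, x: x * (2.0 * t - 1.0) / ((1.0 - t) ** 2 + t ** 2)
v_zero = lambda t, x: np.zeros_like(x)

G_exact, G_zero = G(v_exact), G(v_zero)
# By (10), G at the minimizer equals -E|v_t(x_t)|^2 < 0 = G(0).
```

Monitoring the diagnostic (11), G(v̂) + E|v̂_t(x_t)|², vanishes (up to Monte Carlo noise) only at the exact velocity, which is the convergence check used in the experiments.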

2.1. WASSERSTEIN BOUNDS

The following result shows that the objective in (17) controls the Wasserstein distance between the target density ρ_1 and the density ρ̂_1 obtained as the push-forward of the base density ρ_0 by the map X̂_{t=1} associated with the velocity v̂_t:

Proposition 3. Let ρ_t(x) be the exact interpolant density defined in (12) and, given a velocity field v̂_t(x), let ρ̂_t(x) be the solution of the initial value problem

∂_t ρ̂_t + ∇·(v̂_t ρ̂_t) = 0, ρ̂_{t=0} = ρ_0.  (25)

Assume that v̂_t(x) is continuously differentiable in (t, x) and Lipschitz in x, uniformly in t ∈ [0, 1], with Lipschitz constant K. Then the square of the W_2 distance between ρ_1 and ρ̂_1 is bounded by

W_2²(ρ_1, ρ̂_1) ≤ e^{1+2K} H(v̂),  (26)

where H(v̂) is the objective function defined in (17). The proof of Proposition 3 is given in Appendix E; it leverages the following bound on the square of the W_2 distance:

W_2²(ρ_1, ρ̂_1) ≤ ∫_{R^d} |X_{t=1}(x) − X̂_{t=1}(x)|² ρ_0(x) dx,  (27)

where X_t is the flow map solving (2) with the exact v_t(x) defined in (15), and X̂_t is the flow map obtained by solving (2) with v_t(x) replaced by v̂_t(x).

2.2. LINK WITH SCORE-BASED GENERATIVE MODELS

The following result shows that if ρ_0 is a Gaussian density, the velocity v_t(x) can be related to the score of the density ρ_t(x):

Proposition 4. Assume that the base density ρ_0(x) is the standard Gaussian density N(0, Id) and that the interpolant I_t(x_0, x_1) is given by (5). Then the score ∇ log ρ_t(x) is related to the velocity v_t(x) by

∇ log ρ_t(x) = −x + (2/π) tan(½πt) v_t(x) if t ∈ [0, 1),
∇ log ρ_t(x) = −x − (4/π²) ∂_t v_t(x)|_{t=1} if t = 1.  (28)

The proof of this proposition is given in Appendix F. The first formula, for t ∈ [0, 1), is based on a direct calculation using Gaussian integration by parts; the second formula, at t = 1, is obtained by taking the limit of the first using v_{t=1}(x) = 0 from (B.20) and l'Hôpital's rule. It shows that we can in principle resample ρ_t at any t ∈ [0, 1] using the stochastic differential equation in artificial time τ whose drift is the score ŝ_t(x) obtained by evaluating (28) on the estimated v̂_t(x):

dx_τ = ŝ_t(x_τ) dτ + √2 dW_τ.  (29)

Similarly, the score ŝ_t(x) could in principle be used in score-based diffusion models, as explained in Appendix F. We stress, however, that while our velocity is well-behaved for all times in [0, 1], as shown in (B.20), the drift and diffusion coefficients in the associated SDE are singular at t = 0, 1.
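The score-velocity relation of Proposition 4 can be sanity-checked in closed form when the target is also Gaussian, since then ρ_t, its score, and v_t are all explicit. The sketch below uses an assumed 1D target N(0, σ²) with σ = 2 (an illustrative case, not from the paper); the closed-form computation also pins down the sign of the tangent term:

```python
import numpy as np

# With rho_0 = N(0, 1), rho_1 = N(0, sigma^2), and the trigonometric
# interpolant (5), rho_t = N(0, s_t^2) with
# s_t^2 = cos^2(pi t/2) + sigma^2 sin^2(pi t/2), so both the score
# grad log rho_t(x) = -x / s_t^2 and the velocity
# v_t(x) = E[dI_t/dt | I_t = x] are known exactly.
sigma = 2.0
t = np.linspace(0.05, 0.95, 19)
x = 0.7  # any evaluation point

a, b = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
da, db = -0.5 * np.pi * b, 0.5 * np.pi * a
s2 = a ** 2 + sigma ** 2 * b ** 2

score = -x / s2                                # exact grad log rho_t(x)
v = x * (a * da + sigma ** 2 * b * db) / s2    # exact velocity (conditional mean)

# Relation (28) for t in (0, 1):
# grad log rho_t(x) = -x + (2/pi) tan(pi t / 2) v_t(x)
rhs = -x + (2.0 / np.pi) * np.tan(0.5 * np.pi * t) * v
```

The two arrays agree to machine precision across the time grid, confirming the relation in this solvable case.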

3. PRACTICAL IMPLEMENTATION AND NUMERICAL EXPERIMENTS

The objective detailed in Section 2 is amenable to efficient empirical estimation, see (I.1), which we utilize to experimentally validate the method. Moreover, it is appealing to consider a neural network parameterization of the velocity field. In this case, the parameters of the model v̂ can be optimized through stochastic gradient descent (SGD) or its variants, like Adam (Kingma & Ba, 2015). Following the recent literature on density estimation, we benchmark the method on visualizable yet complicated densities that display multimodality, as well as on the higher-dimensional tabular data initially provided in Papamakarios et al. (2017) and tested in other works such as Grathwohl et al. (2019). The 2D test cases demonstrate the ability to flow between empirical densities with no known analytic form. In all cases, numerical integration for sampling is done with the Dormand-Prince explicit Runge-Kutta method of order 4(5) (Dormand & Prince, 1980). In Sections 3.1-3.4 the interpolant used for experimentation was that of (5), as it is the one used to draw connections to the technique of score-based diffusions in Proposition 4. In Section H we use the interpolant (B.2) and optimize a_t and b_t to investigate the possibility and impact of shortening the path length.
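The training loop can be sketched end to end under simplifying assumptions: a 1D Gaussian base and a shifted Gaussian target (for which the exact velocity is π cos(½πt), independent of x), and a two-parameter linear-in-features model trained by full-batch gradient descent on the empirical objective (9), standing in for the neural network trained with Adam used in the actual experiments:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000
x0 = rng.standard_normal(n)        # base rho_0 = N(0, 1)
x1 = 2.0 + rng.standard_normal(n)  # illustrative target rho_1 = N(2, 1)
t = rng.uniform(size=n)

c, s = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
xt = c * x0 + s * x1                      # stochastic interpolant (6)
dxt = 0.5 * np.pi * (-s * x0 + c * x1)    # time derivative of the interpolant

# Tiny model v_hat(t) = theta . [cos(pi t/2), sin(pi t/2)]; for this Gaussian
# pair the exact velocity pi * cos(pi t/2) lies in the span of the features.
Phi = np.stack([c, s], axis=1)
theta = np.zeros(2)
lr = 0.5
for _ in range(500):
    # Gradient of the empirical objective mean(|v|^2 - 2 dI/dt * v)
    v = Phi @ theta
    theta -= lr * 2.0 * (Phi.T @ (v - dxt)) / n
```

After training, theta should be close to (π, 0), i.e. v̂_t recovers π cos(½πt) up to Monte Carlo noise; in general v̂ is a neural network in (t, x), and minibatches replace the full-batch gradient.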

3.1. 2D DENSITY ESTIMATION

An intuitive first test of the validity of the method is to sample a target density whose analytic form is known or whose density can be visualized for comparison. To this end, we follow the practice of choosing a few complicated 2-dimensional toy datasets, namely those from Grathwohl et al. (2019), which were selected to differentiate the flexibility of continuous flows from that of discrete-time flows, which cannot fully separate the modes. We consider anisotropic curved densities, a mixture of 8 separated Gaussians, and a checkerboard density. Optimization of the velocity field of the interpolant flow is performed on G(v̂) for 10k epochs. We plot a kernel density estimate over 80k samples from both the flow and the true distribution in Figure 2. The interpolant flow captures all the modes of the target density without artificial stretching or smearing, evincing a smooth map.

3.2. DATASET INTERPOLATION

As described in Section 2, the velocity field associated with the flow can be inferred for arbitrary densities ρ_0, ρ_1. This deviates from the score-based diffusion perspective, in which one distribution must be taken to be Gaussian for the training paradigm to be tractable. In Figure 2, we illustrate this capacity by learning the velocity field connecting the anisotropic swirls distribution to that of the checkerboard. The interpolant formulation allows us to draw samples from ρ_t at any time t ∈ [0, 1], which we exploit to check that the velocity field is empirically correct at all times on the interval, rather than just at the end points. This aspect of interpolants is also noted in Choi et al. (2022), but for the purpose of density ratio estimation. The above observation highlights an intrinsic difference between the proposed method and MLE training of flows, where the intermediate densities traversed by the map that minimizes the objective are not known a priori. We stress that query access to ρ_0 or ρ_1 is not needed to use our interpolation procedure, since it only requires samples from these densities.

3.3. TABULAR DATA FOR HIGHER DIMENSIONAL TESTING

A set of tabular datasets introduced by Papamakarios et al. (2017) has served as a consistent test bed for demonstrating flow-based sampling and its associated density estimation capabilities. We continue that practice here to benchmark the method against models that provide an exact likelihood, separating and comparing to exemplary discrete and continuous flows: MADE (Germain et al., 2015), Real NVP (Dinh et al., 2017), Convex Potential Flows (CPF) (Huang et al., 2021), Neural Spline Flows (NSF) (Durkan et al., 2019), Free-form continuous flows (FFJORD) (Grathwohl et al., 2019), and OT-Flow (Finlay et al., 2020). Our primary point of comparison is to other continuous-time models, so we group these separately in the benchmark. We train the interpolant flow model on each target dataset listed in Table 2, choosing the reference distribution ρ_0 of the interpolant to be a Gaussian density with mean zero and covariance I_d, where d is the data dimension. The architectures and hyperparameters are given in Appendix I; we highlight some of the main characteristics of the models here. In each case, sampling of the time t was reweighted according to a beta distribution, with parameters α, β provided in the same appendix. Results from the tabular experiments are displayed in Table 2, in which the negative log-likelihood averaged over a held-out test set is reported. The interpolant flow achieves better or equivalent held-out likelihoods than all other ODE-based models on all datasets except BSDS300, on which FFJORD outperforms the interpolant by ∼0.6%. We note improvements of upwards of 30% compared to baselines. Note that these likelihoods are achieved without directly optimizing the likelihood.
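Proposition 1 allows the time expectation to be taken under any density ω(t) > 0, which is what the beta reweighting above exploits: the minimizer is unchanged. If one additionally wants an unbiased estimate of the original uniform-t expectation (8), importance weights 1/ω(t) restore it. A toy sketch of the latter (the Gaussian pair and the Beta(1/2, 1/2) time density are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
x0 = rng.standard_normal(n)        # base N(0, 1)
x1 = 2.0 + rng.standard_normal(n)  # illustrative Gaussian target N(2, 1)

def dI(t, x0, x1):
    # Time derivative of the trigonometric interpolant (5)
    return 0.5 * np.pi * (-np.sin(0.5 * np.pi * t) * x0
                          + np.cos(0.5 * np.pi * t) * x1)

# Uniform time sampling, as in the expectation (8)
t_u = rng.uniform(size=n)
est_uniform = np.mean(dI(t_u, x0, x1) ** 2)

# Beta(1/2, 1/2) time sampling; the importance weight pi * sqrt(t (1 - t))
# is 1 / pdf, keeping the estimate of the same uniform-t expectation unbiased.
t_b = rng.beta(0.5, 0.5, size=n)
w = np.pi * np.sqrt(t_b * (1.0 - t_b))
est_beta = np.mean(w * dI(t_b, x0, x1) ** 2)
```

Both estimators target E|∂_t I_t|², which is 3π²/4 in this toy case, and should agree up to Monte Carlo noise.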

3.4. UNCONDITIONAL IMAGE GENERATION

To compare with recent advances in continuous-time generative models such as DDPM (Ho et al., 2020), Score SDE (Song et al., 2021b), and ScoreFlow (Song et al., 2021a), we provide a demonstration of the interpolant flow method on learning to unconditionally generate images trained on the CIFAR-10 (Krizhevsky et al., 2009) and ImageNet 32×32 datasets (Deng et al., 2009; Van Den Oord et al., 2016), following suit with ScoreFlow and Variational Diffusion Models (VDM) (Kingma et al., 2021). We train an interpolant flow built from the U-Net architecture of DDPM (Ho et al., 2020) on a single NVIDIA A100 GPU, which was previously impossible under maximum likelihood training of continuous-time flows. Experimental details can be found in Appendix I. Note that we used a beta-distribution reweighting of the time sampling, as in the tabular experiments. Table 2 provides a comparison of the negative log-likelihoods (NLL), measured in bits per dim (BPD), and the Frechet Inception Distance (FID) of our method against past flows and state-of-the-art diffusions. We focus our comparison on models which emit a likelihood, as this is necessary to compare NLL. We compare to the flows FFJORD and Glow (Grathwohl et al., 2019).

We introduced a continuous-time flow method that can be efficiently trained. The approach has a number of intriguing and appealing characteristics. The training circumvents any backpropagation through ODE solves, and yields a stable and interpretable quadratic objective function. This objective has an easily accessible diagnostic which can verify whether a proposed minimizer of the loss is a valid one, and it controls the Wasserstein-2 distance between the model and the target. One salient feature of the proposed method is that choosing an interpolant I_t(x_0, x_1) decouples the optimization problem from that of also choosing a transport path.
This separation is also exploited by score-based diffusion models, but our approach offers better explicit control over both. In particular, we can interpolate between any two densities in finite time and directly obtain the probability flow needed to calculate the likelihood. Moreover, we showed in Section 2 and Appendices D and G that the interpolant can be optimized to achieve optimal transport, a feature which can reduce the cost of solving the ODE to draw samples. In future work, we will investigate more thoroughly realizations of this procedure by learning the interpolant I_t in a wider class of functions, in addition to minimizing G(v̂). The intrinsic connection to score-based diffusion presented in Proposition 4 may be fruitful ground for understanding the benefits and tradeoffs of SDE vs. ODE approaches to generative modeling. Exploring this relation is already underway (Lu et al., 2022; Boffi & Vanden-Eijnden, 2022), and can hopefully provide theoretical insight into designing more effective models.

Integrating this equation in t from t = t′ to t gives

ρ_t(X_{t′,t}(x)) = ρ_{t′}(x) exp( −∫_{t′}^{t} ∇·v_s(X_{t′,s}(x)) ds ).  (A.8)

Evaluating this expression at x = X_{t,t′}(x) and using the group properties (i) X_{t′,t}(X_{t,t′}(x)) = x and (ii) X_{t′,s}(X_{t,t′}(x)) = X_{t,s}(x) gives (A.2). Equation (A.4) can be derived by using (A.2) to express ρ_t(x) in the integral on the left-hand side, changing the integration variable x → X_{t′,t}(x), and noting that the factor exp( −∫_{t′}^{t} ∇·v_s(X_{t,s}(x)) ds ) is precisely the Jacobian of this change of variables. The result is the integral on the right-hand side of (A.4).
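The likelihood transport formula (A.8) can be exercised numerically by integrating the ODE (2) together with d(log ρ)/dt = −∇·v along trajectories. A 1D sketch with an assumed toy velocity v_t(x) = x (not from the paper), whose time-1 flow pushes N(0, 1) forward onto N(0, e²):

```python
import numpy as np

def log_normal(x, var):
    # Log density of N(0, var)
    return -0.5 * (x ** 2 / var + np.log(2.0 * np.pi * var))

x = np.linspace(-2.0, 2.0, 9)
logp = log_normal(x, 1.0)           # log rho_0 at the initial points

dt = 1e-3
for _ in range(1000):               # forward Euler on [0, 1]
    logp -= 1.0 * dt                # d(log rho)/dt = -div v = -1 for v(x) = x
    x += x * dt                     # dX/dt = v_t(X) = X

# Compare with the exact pushforward density N(0, e^2) at the final points
exact = log_normal(x, np.exp(2.0))
```

The Euler-integrated log density matches the exact pushforward to within the discretization error; in practice a higher-order solver such as Dormand-Prince replaces the Euler steps.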

B PROOF OF PROPOSITION 1

We will work under the following assumption:

Assumption B.1. The densities ρ_0(x) and ρ_1(x) are continuously differentiable in x; I_t(x_0, x_1) is continuously differentiable in (t, x_0, x_1) and satisfies (4) and (7); and for all t ∈ [0, 1] we have
∫_{ℝ^d} | ∫_{ℝ^d×ℝ^d} e^{ik·I_t(x_0,x_1)} ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 | dk < ∞,
∫_{ℝ^d} | ∫_{ℝ^d×ℝ^d} ∂_t I_t(x_0,x_1) e^{ik·I_t(x_0,x_1)} ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 | dk < ∞.    (B.1)

This structural assumption guarantees that the stochastic interpolant x_t has a probability density and a probability current. As shown in Appendix C, it is satisfied e.g. if ρ_0 and ρ_1 are Gaussian mixture densities and
I_t(x_0, x_1) = a_t x_0 + b_t x_1,    (B.2)
where a_t and b_t are C^1 functions of t ∈ [0, 1] satisfying
ȧ_t ≤ 0,  ḃ_t ≥ 0,  a_0 = 1,  a_1 = 0,  b_0 = 0,  b_1 = 1,  a_t > 0 on t ∈ [0, 1),  b_t > 0 on t ∈ (0, 1].    (B.3)
The interpolant (4) is in this class for the choice
a_t = cos(½πt),  b_t = sin(½πt).    (B.4)
Our proof of Proposition 1 will rely on the following result, which quantifies the probability density and the probability current of the stochastic interpolant x_t defined in (6):

Lemma B.1.
If Assumption B.1 holds, then the stochastic interpolant x_t defined in (6) has a probability density function ρ_t(x) given by
ρ_t(x) = (2π)^{-d} ∫_{ℝ^d×ℝ^d×ℝ^d} e^{-ik·(x - I_t(x_0,x_1))} ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 dk    (B.5)
and it satisfies the continuity equation
∂_t ρ_t(x) + ∇·j_t(x) = 0,  ρ_{t=0}(x) = ρ_0(x),  ρ_{t=1}(x) = ρ_1(x)    (B.6)
with the probability current j_t(x) given by
j_t(x) = (2π)^{-d} ∫_{ℝ^d×ℝ^d×ℝ^d} ∂_t I_t(x_0,x_1) e^{-ik·(x - I_t(x_0,x_1))} ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 dk.    (B.7)
In addition, the action of ρ_t(x) and j_t(x) against any test function φ : ℝ^d → ℝ can be expressed as
∫_{ℝ^d} φ(x) ρ_t(x) dx = ∫_{ℝ^d×ℝ^d} φ(I_t(x_0,x_1)) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1    (B.8)
∫_{ℝ^d} φ(x) j_t(x) dx = ∫_{ℝ^d×ℝ^d} ∂_t I_t(x_0,x_1) φ(I_t(x_0,x_1)) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.    (B.9)

Note that (B.8) and (B.9) can be formally rewritten as (12) and (14) using the Dirac delta distribution.

Proof. By definition of x_t in (6), the characteristic function of this random variable is
E[exp(ik·x_t)] = ∫_{ℝ^d×ℝ^d} e^{ik·I_t(x_0,x_1)} ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.    (B.10)
Under Assumption B.1, the Fourier inversion theorem implies that x_t has a density ρ_t(x) given by (B.5). Taking the time derivative of this density gives
∂_t ρ_t(x) = (2π)^{-d} ∫_{ℝ^d×ℝ^d×ℝ^d} ik·∂_t I_t(x_0,x_1) e^{-ik·(x - I_t(x_0,x_1))} ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 dk = -∇·j_t(x)    (B.11)
with j_t(x) given by (B.7). □

Lemma B.1 shows that ρ_t(x) satisfies the continuity equation (3) with the velocity field v_t(x) defined in (15). It also shows that the objective function G(v) in (9) is well-defined, which implies the first part of Proposition 1. For the second part, we will need:

Lemma B.2. If Assumption B.1 holds, then
∫_0^1 ∫_{ℝ^d} |v_t(x)|² ρ_t(x) dx dt = E[ |v_t(I_t(x_0,x_1))|² ] ≤ E[ |∂_t I_t(x_0,x_1)|² ] < ∞.    (B.12)

Proof.
For K < ∞, define
φ^K_t(x) = 1 if |v_t(x)| ≤ K, and φ^K_t(x) = 0 otherwise.    (B.13)
Then, using the pointwise identity v_t(x) ρ_t(x) = j_t(x) as well as (B.8) and (B.9), we can write
0 = ∫_0^1 ∫_{ℝ^d} φ^K_t(x) ( 2|v_t(x)|² ρ_t(x) - 2 v_t(x)·j_t(x) ) dx dt
  = 2 E[ φ^K_t(I_t) |v_t(I_t)|² ] - 2 E[ φ^K_t(I_t) ∂_t I_t · v_t(I_t) ]
  = E[ φ^K_t(I_t) |v_t(I_t)|² ] + E[ φ^K_t(I_t) |v_t(I_t) - ∂_t I_t|² ] - E[ φ^K_t(I_t) |∂_t I_t|² ]
  ≥ E[ φ^K_t(I_t) |v_t(I_t)|² ] - E[ φ^K_t(I_t) |∂_t I_t|² ],    (B.14)
where we use the shorthand I_t = I_t(x_0, x_1) and ∂_t I_t = ∂_t I_t(x_0, x_1). Therefore
0 ≤ E[ φ^K_t(I_t) |v_t(I_t)|² ] ≤ E[ φ^K_t(I_t) |∂_t I_t|² ].    (B.15)
Since lim_{K→∞} E[ φ^K_t(I_t) |∂_t I_t|² ] = E[ |∂_t I_t|² ] < ∞ by Assumption B.1, taking K → ∞ and using monotone convergence gives (B.12). □

The inequality in (B.12) is strict in general. Consider for example d = 1 with ρ_0 = N(x|0, 1), ρ_1 = N(x|m, 1) and the trigonometric interpolant (B.4): then ρ_t = N(x | m sin(½πt), 1) and the velocity is independent of x, with |v_t(x)|² = ¼π²m² cos²(½πt), so that
∫_ℝ |v_t(x)|² ρ_t(x) dx = ¼π²m² cos²(½πt).    (B.16)
At the same time
∫_{ℝ×ℝ} |∂_t I_t(x_0,x_1)|² ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 = ¼π² ∫_{ℝ×ℝ} ( -sin(½πt) x_0 + cos(½πt) x_1 )² ρ_0(x_0) ρ_1(x_1) dx_0 dx_1 = ¼π² ( sin²(½πt) + cos²(½πt)(1 + m²) ) = ¼π² ( 1 + m² cos²(½πt) )    (B.17)
and so E[ |v_t(I_t(x_0,x_1))|² ] = E[ |∂_t I_t(x_0,x_1)|² ] - ¼π² < E[ |∂_t I_t(x_0,x_1)|² ].

The interpolant density ρ_t(x) and the current j_t(x) are given explicitly in Appendix C in the case where ρ_0 and ρ_1 are both Gaussian mixture densities and we use the linear interpolant (B.2). Notice that we can evaluate v_{t=0}(x) and v_{t=1}(x) more explicitly. For example, with the linear interpolant (B.2) we have
j_{t=0}(x) = ȧ_0 x ρ_0(x) + ḃ_0 ρ_0(x) ∫_{ℝ^d} x_1 ρ_1(x_1) dx_1,  j_{t=1}(x) = ḃ_1 x ρ_1(x) + ȧ_1 ρ_1(x) ∫_{ℝ^d} x_0 ρ_0(x_0) dx_0.    (B.18)
From (15), this implies
v_0(x) = ȧ_0 x + ḃ_0 ∫_{ℝ^d} x_1 ρ_1(x_1) dx_1,  v_1(x) = ḃ_1 x + ȧ_1 ∫_{ℝ^d} x_0 ρ_0(x_0) dx_0.    (B.19)
For the trigonometric interpolant that uses (B.4), these reduce to
v_{t=0}(x) = ½π ∫_{ℝ^d} x_1 ρ_1(x_1) dx_1,  v_{t=1}(x) = -½π ∫_{ℝ^d} x_0 ρ_0(x_0) dx_0.
(B.20)

Finally, we note that the result of Proposition 1 remains valid if we work with velocities that are gradient fields. We state this result as:

Proposition B.4. The statements of Proposition 1 hold if G(v) is minimized over velocities that are gradient fields, in which case the minimizer is of the form v_t(x) = ∇φ_t(x) for some potential φ_t : ℝ^d → ℝ uniquely defined up to a constant.

Minimizing G(v) over gradient fields v̂_t(x) = ∇φ̂_t(x) guarantees that the minimizer does not contain any component ṽ_t(x) such that ∇·(ṽ_t(x) ρ_t(x)) = 0. Such a component of the velocity has no effect on the evolution of ρ_t(x) but affects the map X_t, the solution of (2). We stress, however, that even if we minimize G(v) over velocities that are not gradient fields, the minimizer v_t(x) produces a map X_t via (2) that satisfies the pushforward condition in (1).

Proof. Define the potential φ_t : ℝ^d → ℝ as the solution to
∇·(ρ_t ∇φ_t) = ∇·j_t    (B.21)
with ρ_t(x) and j_t(x) given by (12) and (14), respectively. This is a Poisson equation for φ_t(x) which has a unique (up to a constant) solution by the Fredholm alternative, since ρ_t(x) and j_t(x) have the same support and ∫_{ℝ^d} ∇·j_t(x) dx = 0. In terms of φ_t(x), (13) can therefore be written as
∂_t ρ_t + ∇·(ρ_t ∇φ_t) = 0.    (B.22)
The velocity ∇φ_t(x) is also the unique minimizer of the objective (9) over gradient fields since, if we set v̂_t(x) = ∇φ̂_t(x) and optimize G(∇φ̂) over φ̂, the Euler–Lagrange equation for the minimizer is precisely the Poisson equation (B.21). The lower bound on the objective evaluated at ∇φ_t(x) still holds, since the argument above involving (17) and (18) can be repeated by replacing the identity v_t(x) ρ_t(x) = j_t(x) with ∫_{ℝ^d} ∇φ̂_t(x)·∇φ_t(x) ρ_t(x) dx = ∫_{ℝ^d} ∇φ̂_t(x)·j_t(x) dx, which is (B.21) written in weak form. □
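The strict gap of exactly ¼π² in the 1-D Gaussian example of (B.16)–(B.17) is easy to verify by Monte Carlo. A short NumPy check (illustrative variable names), sampling t, x_0, x_1 and comparing E[|∂_t I_t|²] with E[|v_t(I_t)|²]:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 2.0, 2_000_000

# Trigonometric interpolant I_t = cos(pi t / 2) x0 + sin(pi t / 2) x1
t = rng.uniform(size=N)
x0 = rng.standard_normal(N)        # rho_0 = N(0, 1)
x1 = m + rng.standard_normal(N)    # rho_1 = N(m, 1)
dI = 0.5 * np.pi * (-np.sin(0.5 * np.pi * t) * x0 + np.cos(0.5 * np.pi * t) * x1)

# For this pair rho_t = N(m sin(pi t / 2), 1) and the velocity is independent of x:
v = 0.5 * np.pi * m * np.cos(0.5 * np.pi * t)

gap = np.mean(dI ** 2) - np.mean(v ** 2)   # should be close to pi^2 / 4
```

Note that the gap is independent of m, consistent with subtracting (B.16) from (B.17) and averaging over t.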
Published as a conference paper at ICLR 2023

Interestingly, with v_t(x) defined in (15) and φ_t(x) defined as the solution to (B.21), we have
∫_{ℝ^d} |v_t(x)|² ρ_t(x) dx ≥ ∫_{ℝ^d} |∇φ_t(x)|² ρ_t(x) dx,    (B.23)
consistent with the fact that the gradient field ∇φ_t(x) is the velocity that minimizes ∫_{ℝ^d} |v̂_t(x)|² ρ_t(x) dx over all v̂_t(x) such that ∂_t ρ_t + ∇·(v̂_t ρ_t) = 0 with ρ_t(x) given by (12). We stress, however, that even if we work with gradient fields, the inequality (21) is not saturated in general.

Remark B.5. If we assume that v̂_t(x) = ∇φ̂_t(x), we can write the objective in (16) as
G(∇φ̂) = ∫_0^1 ∫_{ℝ^d} ( |∇φ̂_t(x)|² ρ_t(x) - 2 ∇φ̂_t(x)·j_t(x) ) dx dt
       = ∫_0^1 ∫_{ℝ^d} ( |∇φ̂_t(x)|² ρ_t(x) + 2 φ̂_t(x) ∇·j_t(x) ) dx dt
       = ∫_0^1 ∫_{ℝ^d} ( |∇φ̂_t(x)|² ρ_t(x) - 2 φ̂_t(x) ∂_t ρ_t(x) ) dx dt
       = ∫_0^1 ∫_{ℝ^d} ( |∇φ̂_t(x)|² ρ_t(x) + 2 ∂_t φ̂_t(x) ρ_t(x) ) dx dt + 2 ∫_{ℝ^d} ( φ̂_{t=0}(x) ρ_0(x) - φ̂_{t=1}(x) ρ_1(x) ) dx,    (B.24)
where we integrated by parts in x to get the second equality, used the continuity equation (13) to get the third, and integrated by parts in t to get the fourth, using ρ_{t=0} = ρ_0 and ρ_{t=1} = ρ_1. Since the objective in the last expression is an expectation with respect to ρ_t, we can evaluate it as
G(∇φ̂) = E[ |∇φ̂_t(I_t)|² + 2 ∂_t φ̂_t(I_t) ] + 2 E_0[φ̂_{t=0}] - 2 E_1[φ̂_{t=1}],    (B.25)
where E_0 and E_1 denote expectations with respect to ρ_0 and ρ_1, respectively. Therefore, we could use (B.25) as an alternative objective to obtain v_t(x) = ∇φ_t(x) by minimization. This objective is, up to a sign and a factor of 2, the KILBO objective introduced in Neklyudov et al. (2022). Notice that using the original objective in (9) avoids the computation of derivatives in x and t, and allows one to work with v̂_t(x) directly.

C THE CASE OF GAUSSIAN MIXTURE DENSITIES

Here we consider the case where ρ_0 and ρ_1 are both Gaussian mixture densities. We denote by
N(x|m, C) = (2π)^{-d/2} [det C]^{-1/2} exp( -½ (x - m)^T C^{-1} (x - m) ) = (2π)^{-d} ∫_{ℝ^d} e^{ik·(x-m) - ½ k^T C k} dk    (C.1)
the Gaussian probability density with mean vector m ∈ ℝ^d and positive-definite symmetric covariance matrix C = C^T ∈ ℝ^{d×d}. We assume that
ρ_0(x) = Σ_{i=1}^{N_0} p^0_i N(x|m^0_i, C^0_i),  ρ_1(x) = Σ_{i=1}^{N_1} p^1_i N(x|m^1_i, C^1_i)    (C.2)
where N_0, N_1 ∈ ℕ, p^0_i > 0 with Σ_{i=1}^{N_0} p^0_i = 1, m^0_i ∈ ℝ^d, C^0_i = (C^0_i)^T ∈ ℝ^{d×d} positive-definite, and similarly for p^1_i, m^1_i, and C^1_i. We assume that the interpolant is of the form (B.2) and we denote
m^{ij}_t = a_t m^0_i + b_t m^1_j,  C^{ij}_t = a²_t C^0_i + b²_t C^1_j,  i = 1, …, N_0,  j = 1, …, N_1.    (C.3)
Note that if all the covariance matrices are the same, C^0_i = C^1_j = C, then with the trigonometric interpolant in (5) we have C^{ij}_t = C, which justifies this choice of interpolant. We have:

Proposition C.1. The interpolant density ρ_t(x) obtained by connecting the probability densities in (C.2) using the linear interpolant (B.2) is given by
ρ_t(x) = Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j N(x|m^{ij}_t, C^{ij}_t)    (C.4)
and it satisfies the continuity equation ∂_t ρ_t(x) + ∇·j_t(x) = 0 with the current
j_t(x) = Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ( ṁ^{ij}_t + ½ Ċ^{ij}_t (C^{ij}_t)^{-1} (x - m^{ij}_t) ) N(x|m^{ij}_t, C^{ij}_t).    (C.5)

This proposition implies that
v_t(x) = [ Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ( ṁ^{ij}_t + ½ Ċ^{ij}_t (C^{ij}_t)^{-1} (x - m^{ij}_t) ) N(x|m^{ij}_t, C^{ij}_t) ] / [ Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j N(x|m^{ij}_t, C^{ij}_t) ].    (C.6)
This velocity field grows at most linearly in x, and when the modes of the mixture are well separated, in each mode it approximately reduces to
ṁ^{ij}_t + ½ Ċ^{ij}_t (C^{ij}_t)^{-1} (x - m^{ij}_t).    (C.7)

Proof. Using the Fourier representation in (C.1) and proceeding as in the proof of Lemma B.1, we deduce that ρ_t(x) is given by
ρ_t(x) = (2π)^{-d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ∫_{ℝ^d} e^{ik·(x - m^{ij}_t) - ½ k^T C^{ij}_t k} dk.
(C.8)
Performing the integral over k gives (C.4). Taking the time derivative of this density gives
∂_t ρ_t(x) = -(2π)^{-d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ∫_{ℝ^d} ( ik·ṁ^{ij}_t + ½ k^T Ċ^{ij}_t k ) e^{ik·(x - m^{ij}_t) - ½ k^T C^{ij}_t k} dk
           = -(2π)^{-d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ∫_{ℝ^d} ik·( ṁ^{ij}_t - ½ i Ċ^{ij}_t k ) e^{ik·(x - m^{ij}_t) - ½ k^T C^{ij}_t k} dk = -∇·j_t(x)    (C.9)
with
j_t(x) = (2π)^{-d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ∫_{ℝ^d} ( ṁ^{ij}_t - ½ i Ċ^{ij}_t k ) e^{ik·(x - m^{ij}_t) - ½ k^T C^{ij}_t k} dk
       = (2π)^{-d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ṁ^{ij}_t ∫_{ℝ^d} e^{ik·(x - m^{ij}_t) - ½ k^T C^{ij}_t k} dk - ½ (2π)^{-d} Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j Ċ^{ij}_t ∇ ∫_{ℝ^d} e^{ik·(x - m^{ij}_t) - ½ k^T C^{ij}_t k} dk
       = Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ( ṁ^{ij}_t - ½ Ċ^{ij}_t ∇ ) N(x|m^{ij}_t, C^{ij}_t)
       = Σ_{i=1}^{N_0} Σ_{j=1}^{N_1} p^0_i p^1_j ( ṁ^{ij}_t + ½ Ċ^{ij}_t (C^{ij}_t)^{-1} (x - m^{ij}_t) ) N(x|m^{ij}_t, C^{ij}_t).    (C.10) □
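Proposition C.1 can be checked numerically: in one dimension, (C.4) and (C.5) are available in closed form and the continuity equation can be verified by finite differences. A small sketch (illustrative two-mode mixtures, trigonometric coefficients a_t = cos(½πt), b_t = sin(½πt)):

```python
import numpy as np

def N1(x, m, c):
    # 1-d Gaussian density N(x | m, c)
    return np.exp(-0.5 * (x - m) ** 2 / c) / np.sqrt(2 * np.pi * c)

# Illustrative mixture parameters (weights, means, variances)
p0, m0, c0 = np.array([0.3, 0.7]), np.array([-1.0, 2.0]), np.array([0.5, 1.5])
p1, m1, c1 = np.array([0.6, 0.4]), np.array([0.0, 3.0]), np.array([1.0, 0.8])

def coeffs(t):
    a, b = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
    return a, b, -0.5 * np.pi * b, 0.5 * np.pi * a  # a, b, adot, bdot

def rho(x, t):
    # Interpolant density (C.4)
    a, b, _, _ = coeffs(t)
    out = 0.0
    for i in range(2):
        for j in range(2):
            m = a * m0[i] + b * m1[j]
            c = a * a * c0[i] + b * b * c1[j]
            out += p0[i] * p1[j] * N1(x, m, c)
    return out

def current(x, t):
    # Probability current (C.5)
    a, b, ad, bd = coeffs(t)
    out = 0.0
    for i in range(2):
        for j in range(2):
            m = a * m0[i] + b * m1[j]
            c = a * a * c0[i] + b * b * c1[j]
            md = ad * m0[i] + bd * m1[j]
            cd = 2 * a * ad * c0[i] + 2 * b * bd * c1[j]
            out += p0[i] * p1[j] * (md + 0.5 * cd / c * (x - m)) * N1(x, m, c)
    return out

# Central finite-difference check of d_t rho + d_x j = 0 at an arbitrary point
xp, tp, h = 0.8, 0.4, 1e-4
dt_rho = (rho(xp, tp + h) - rho(xp, tp - h)) / (2 * h)
dx_j = (current(xp + h, tp) - current(xp - h, tp)) / (2 * h)
```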

D OPTIMIZING TRANSPORT THROUGH STOCHASTIC INTERPOLANTS

Using the velocity v_t(x) in (15) that minimizes the objective (9) in the ODE (2) gives an exact transport map T = X_{t=1} from ρ_0 to ρ_1. However, this map is not optimal in general, in the sense that it does not minimize
∫_{ℝ^d} |T(x) - x|² ρ_0(x) dx    (D.1)
over all T such that T♯ρ_0 = ρ_1. It is easy to understand why: in their seminal paper, Benamou & Brenier (2000) showed that finding the optimal map requires solving the minimization problem
min_{(v̂, ρ̂)} ∫_0^1 ∫_{ℝ^d} |v̂_t(x)|² ρ̂_t(x) dx dt  subject to:  ∂_t ρ̂_t + ∇·(v̂_t ρ̂_t) = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1.    (D.2)
As also shown in Benamou & Brenier (2000), the velocity minimizing (D.2) is a gradient field, v*_t(x) = ∇φ*_t(x), and the minimizing couple (ρ*_t, φ*_t) is unique and satisfies
∂_t ρ*_t + ∇·(∇φ*_t ρ*_t) = 0,  ρ*_{t=0} = ρ_0,  ρ*_{t=1} = ρ_1,
∂_t φ*_t + ½ |∇φ*_t|² = 0.    (D.3)
In contrast, in our construction the interpolant density ρ_t(x) is fixed by the choice of interpolant I_t(x_0, x_1), and ρ_t(x) ≠ ρ*_t(x) in general. As a result, the value of ∫_0^1 ∫_{ℝ^d} |v_t(x)|² ρ_t(x) dx dt = E[|v_t(I_t)|²] for the velocity v_t(x) minimizing (9) is not the minimum in (D.2). Minimizing (9) over gradient fields reduces the value of the objective in (D.2), but it does not yield an optimal map either: indeed, the gradient velocity ∇φ_t(x) with the potential φ_t(x) solution to (B.21) only minimizes the objective in (D.2) over all v̂_t(x) with ρ̂_t(x) = ρ_t(x) fixed, as explained after (B.23). It is natural to ask whether our stochastic interpolant construction can be amended or generalized to derive optimal maps. This question is discussed next from two complementary perspectives: via optimization of the interpolant I_t(x_0, x_1), and/or via optimization of the base density ρ_0, assuming that we have some leeway to choose this density.

D.1 OPTIMAL TRANSPORT WITH OPTIMAL INTERPOLANTS

Since (10) indicates that the minimum of G(v) is lower bounded by minus the value of the objective in (D.2), one way to optimize the transport is to maximize this minimum over the interpolant. Under some assumptions on the Benamou–Brenier density ρ*_t(x) solution of (D.3), this procedure works, as we now show. Let us begin with a definition:

Definition D.1 (Interpolable density). We say that a one-parameter family of probability densities ρ_t(x) with t ∈ [0, 1] is interpolable (in short: ρ_t(x) is interpolable) if there exists a one-parameter family of invertible maps T_t : ℝ^d → ℝ^d with t ∈ [0, 1], continuously differentiable in time and space, such that ρ_t is the pushforward by T_t of the Gaussian density with mean zero and identity covariance, i.e. T_t ♯ N(0, Id) = ρ_t for all t ∈ [0, 1].

Interpolable densities form a wide class, as discussed e.g. in Mikulincer & Shenfeld (2022). They are also the densities that can be learned by the score-based diffusion modeling discussed in Section 2.2. They are useful for our purpose because of the following result, which shows that any interpolable density can be represented as an interpolant density:

Proposition D.1. Let ρ_t(x) be an interpolable density in the sense of Definition D.1. Then
I_t(x_0, x_1) = T_t( T^{-1}_0(x_0) cos(½πt) + T^{-1}_1(x_1) sin(½πt) )    (D.4)
satisfies (4) and is such that the stochastic interpolant defined in (6) satisfies x_t ∼ ρ_t.

We stress that the interpolant in (D.4) is in general not the only one giving the interpolable density ρ_t(x), and the actual value of the map T_t plays no role in the results below.

Proof. First notice that
I_{t=0}(x_0, x_1) = T_0(T^{-1}_0(x_0)) = x_0,  I_{t=1}(x_0, x_1) = T_1(T^{-1}_1(x_1)) = x_1,    (D.5)
so that the boundary conditions in (4) are satisfied.
Then observe that, by definition of T_t,
x_0 ∼ ρ_0 ⇒ T^{-1}_0(x_0) ∼ N(0, Id),  x_1 ∼ ρ_1 ⇒ T^{-1}_1(x_1) ∼ N(0, Id).    (D.6)
This implies that, if x_0 ∼ ρ_0, x_1 ∼ ρ_1, and they are independent, then
T^{-1}_0(x_0) cos(½πt) + T^{-1}_1(x_1) sin(½πt)    (D.7)
is a Gaussian random variable with mean zero and covariance
Id cos²(½πt) + Id sin²(½πt) = Id,    (D.8)
i.e. it is a sample from N(0, Id). By definition of T_t, this implies that
x_t = I_t(x_0, x_1) = T_t( T^{-1}_0(x_0) cos(½πt) + T^{-1}_1(x_1) sin(½πt) )    (D.9)
is a sample from ρ_t if x_0 ∼ ρ_0, x_1 ∼ ρ_1, and they are independent. □

Proposition D.1 implies:

Proposition D.2. Assume that (i) the optimal density ρ*_t(x) minimizing (D.2) is interpolable and (ii) (D.3) has a classical solution. Consider the max-min problem
max_{Î} min_{v̂} G(v̂)    (D.10)
where G(v̂) is the objective in (9) and the maximum is taken over interpolants satisfying (4). Then a maximizer of (D.10) exists, and any maximizer I*_t(x_0, x_1) is such that the probability density of x*_t = I*_t(x_0, x_1), with x_0 ∼ ρ_0 and x_1 ∼ ρ_1 independent, is the optimal ρ*_t(x), the minimizing velocity is v*_t(x) = ∇φ*_t(x), and the pair (ρ*_t(x), φ*_t(x)) satisfies (D.3).

The proof of Proposition D.2 relies on the following simple reformulation of (D.2):

Lemma D.3. The Benamou–Brenier minimization problem in (D.2) is equivalent to the min-max problem
max_{ρ̂, ȷ̂} min_{v̂} ∫_0^1 ∫_{ℝ^d} ( ½ |v̂_t(x)|² ρ̂_t(x) - v̂_t(x)·ȷ̂_t(x) ) dx dt = min_{v̂} max_{ρ̂, ȷ̂} ∫_0^1 ∫_{ℝ^d} ( ½ |v̂_t(x)|² ρ̂_t(x) - v̂_t(x)·ȷ̂_t(x) ) dx dt
subject to:  ∂_t ρ̂_t + ∇·ȷ̂_t = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1.    (D.11)
In particular, under conditions on ρ_0 and ρ_1 such that (D.2) has a minimizer, the optimizer (ρ*_t, v*_t, j*_t) is unique and satisfies v*_t(x) = ∇φ*_t(x), j*_t(x) = ∇φ*_t(x) ρ*_t(x) with (ρ*_t, φ*_t) solution to (D.3).

Proof.
Since (D.11) is convex in v̂ and concave in (ρ̂, ȷ̂), the min-max and the max-min are equivalent by von Neumann's minimax theorem. Considering the problem where we minimize over v̂_t(x) first, the minimizer must satisfy
v̂_t(x) ρ̂_t(x) = ȷ̂_t(x).    (D.12)
Since ρ̂_t(x) ≥ 0 and ȷ̂_t(x) have the same support by the constraint in (D.11), the solution to this equation is unique on this support. Using (D.12) in (D.11), we can therefore rewrite the max-min problem as
max_{ρ̂} ( -½ ∫_0^1 ∫_{ℝ^d} |v̂_t(x)|² ρ̂_t(x) dx dt )  subject to:  ∂_t ρ̂_t + ∇·(v̂_t ρ̂_t) = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1.    (D.13)
This problem is equivalent to the Benamou–Brenier minimization problem in (D.2). To write the Euler–Lagrange equations for the optimizers of the min-max problem (D.11), let us introduce the extended objective
∫_0^1 ∫_{ℝ^d} ( ½ |v̂_t(x)|² ρ̂_t(x) - v̂_t(x)·ȷ̂_t(x) ) dx dt - ∫_0^1 ∫_{ℝ^d} φ̂_t(x) ( ∂_t ρ̂_t + ∇·ȷ̂_t(x) ) dx dt + ∫_{ℝ^d} ( φ̂_1(x) ρ_1 - φ̂_0(x) ρ_0 ) dx    (D.14)
where φ̂_t : ℝ^d → ℝ is a Lagrange multiplier to be determined. The Euler–Lagrange equations can be obtained by taking the first variation of the objective (D.14) over φ̂, ρ̂, ȷ̂, and v̂, respectively. They read
0 = ∂_t ρ*_t + ∇·j*_t,  ρ*_{t=0} = ρ_0,  ρ*_{t=1} = ρ_1,
0 = ∂_t φ*_t + ½ |v*_t|²,
0 = -v*_t + ∇φ*_t,
0 = v*_t ρ*_t - j*_t.    (D.15)
These equations imply that v*_t(x) = ∇φ*_t(x), j*_t(x) = v*_t(x) ρ*_t(x) = ∇φ*_t(x) ρ*_t(x), with (ρ*_t, φ*_t) solution to (D.3), as stated in the lemma. Since the optimization problem in (D.11) is convex in v̂ and concave in (ρ̂, ȷ̂), its optimizer is unique and solves these equations. □

Proof of Proposition D.2. We can reformulate the max-min problem (D.10) as
max_{ρ̂, ȷ̂} min_{v̂} ∫_0^1 ∫_{ℝ^d} ( ½ |v̂_t(x)|² ρ̂_t(x) - v̂_t(x)·ȷ̂_t(x) ) dx dt    (D.16)
where the maximization is taken over probability density functions ρ̂_t(x) and probability currents ȷ̂_t(x) as in (B.5) and (B.7) with I_t replaced by Î_t, i.e.
formally given in terms of the Dirac delta distribution as
ρ̂_t(x) = ∫_{ℝ^d×ℝ^d} δ( x - Î_t(x_0, x_1) ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1,
ȷ̂_t(x) = ∫_{ℝ^d×ℝ^d} ∂_t Î_t(x_0, x_1) δ( x - Î_t(x_0, x_1) ) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.    (D.17)
Since this pair automatically satisfies
∂_t ρ̂_t + ∇·ȷ̂_t = 0,  ρ̂_{t=0} = ρ_0,  ρ̂_{t=1} = ρ_1,    (D.18)
the max-min problem (D.16) is similar to the one considered in Lemma D.3, except that the maximization is taken over probability density functions and associated currents that can be written as in (D.17). By Proposition D.1, this class is large enough to represent ρ*_t(x), since we have assumed that ρ*_t(x) is an interpolable density, and the statement of the proposition follows. □

Since our primary aim here is to construct a map T = X_{t=1} that pushes forward ρ_0 onto ρ_1, but not necessarily to identify the optimal one, we can perform the maximization over Î_t(x_0, x_1) in a restricted class (though of course the corresponding map is no longer optimal in that case). We investigate this option in numerical examples in Appendix H, using interpolants of the type (B.2) and maximizing over the functions a_t and b_t, subject to a_0 = b_1 = 1, a_1 = b_0 = 0. In Section G we also discuss generalizations of the interpolant that can render the optimization of the transport easier to perform. We leave the full investigation of the consequences of Proposition D.2 for future work.

Remark D.4 (Optimal interpolant for Gaussian densities). Note that if ρ_0 and ρ_1 are both Gaussian densities with respective means m_0 ∈ ℝ^d and m_1 ∈ ℝ^d and the same covariance C ∈ ℝ^{d×d}, an interpolant of the type (D.4) is
I_t(x_0, x_1) = cos(½πt)(x_0 - m_0) + sin(½πt)(x_1 - m_1) + (1 - t) m_0 + t m_1,    (D.19)
and a calculation similar to the one presented in Appendix C shows that the associated velocity field is
v_t(x) = m_1 - m_0.    (D.20)
This is the velocity giving the optimal transport map X_t(x) = x + (m_1 - m_0) t.

Remark D.5 (Rectifying the map).
In Liu et al. (2022); Liu (2022), where an approach similar to ours using the linear interpolant x_t = x_0 (1 - t) + x_1 t is introduced, an alternative procedure is proposed to optimize the transport. Specifically, it is suggested to rectify the learned map T = X_{t=1} by repeating the procedure using the new interpolant x′_t = x_0 (1 - t) + T(x_0) t with x_0 ∼ ρ_0. As shown in Liu (2022), iterating on this rectification step yields successive maps that come progressively closer to optimality. The main drawback of this approach is that it requires each of these maps (including the first one) to be learned exactly, i.e. we must have T♯ρ_0 = ρ_1 to use the interpolant x′_t above. If the maps are not exact, which is unavoidable in practice, the procedure introduces a bias whose amplitude grows with the iterations, leading to instability (as noted in Liu et al. (2022); Liu (2022)).

D.2 OPTIMIZING THE BASE DENSITY

While our construction allows one to connect any pair of densities ρ_0 and ρ_1, the typical situation of interest is when ρ_1 is a complex target density and we wish to construct a generative model for it by transporting samples from a simple base density ρ_0. In this case it is natural to adjust parameters in ρ_0 to optimize the transport via maximization of min_{v̂} G(v̂) over these parameters; indeed, changing ρ_0 also affects the stochastic interpolant x_t defined in (6), and hence both the interpolant density ρ_t(x) and the velocity v_t(x). For example, we can take ρ_0 to be a Gaussian with mean m ∈ ℝ^d and covariance C ∈ ℝ^{d×d}, and maximize min_{v̂} G(v̂) over m and C. This construction is tested in the numerical examples treated in Appendix H. We also discuss how to generalize it in Section G. Optimizing ρ_0 only makes practical sense if we do so in a restricted class, like that of Gaussian densities just discussed. Still, we may wonder whether optimizing ρ_0 over all densities would automatically give ρ_0 = ρ_1 and v_t = 0. If the interpolant is fixed, the answer is no in general. Indeed, even if we set ρ_0 = ρ_1, the interpolant density will still evolve, i.e. ρ_t ≠ ρ_0 except at t = 0, 1, in general. This indicates that optimizing the interpolant in concert with ρ_0 is necessary if we want to optimize the transport.
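For 1-D Gaussians with unit variance and the fixed trigonometric interpolant, this base-density optimization can be carried out explicitly. A small sketch (illustrative values) with base N(μ, 1) and target N(m, 1): the velocity is x-independent, v_t = μ ȧ_t + m ḃ_t, so min_v G(v) = -∫₀¹ |v_t|² dt, and scanning over μ locates the optimal base mean numerically. Note that the maximizer is 2m/π rather than m, illustrating the point above that with the interpolant fixed, the optimal base is not ρ_1 itself:

```python
import numpy as np

m = 2.0  # illustrative target mean; base is N(mu, 1), target N(m, 1)
ts = np.linspace(0.0, 1.0, 5001)

def neg_kinetic(mu):
    # min_v G(v) = -int_0^1 |v_t|^2 dt here, with v_t = mu * adot_t + m * bdot_t
    v = 0.5 * np.pi * (-mu * np.sin(0.5 * np.pi * ts) + m * np.cos(0.5 * np.pi * ts))
    return -np.mean(v ** 2)   # uniform-grid quadrature of the time integral

mus = np.linspace(0.0, 3.0, 3001)
vals = np.array([neg_kinetic(mu) for mu in mus])
mu_best = mus[np.argmax(vals)]   # analytic maximizer is 2 m / pi
```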

E PROOF OF PROPOSITION 3

To use the bound in (27), let us consider the evolution of
Q_t = ∫_{ℝ^d} |X_t(x) - X̂_t(x)|² ρ_0(x) dx.    (E.1)
Using Ẋ_t(x) = v_t(X_t(x)) and X̂̇_t(x) = v̂_t(X̂_t(x)), we deduce
Q̇_t = 2 ∫_{ℝ^d} (X_t(x) - X̂_t(x)) · (v_t(X_t(x)) - v̂_t(X̂_t(x))) ρ_0(x) dx
    = 2 ∫_{ℝ^d} (X_t(x) - X̂_t(x)) · (v_t(X_t(x)) - v̂_t(X_t(x))) ρ_0(x) dx + 2 ∫_{ℝ^d} (X_t(x) - X̂_t(x)) · (v̂_t(X_t(x)) - v̂_t(X̂_t(x))) ρ_0(x) dx.    (E.2)
Now use
2 (X_t - X̂_t) · (v_t(X_t) - v̂_t(X_t)) ≤ |X_t - X̂_t|² + |v_t(X_t) - v̂_t(X_t)|²    (E.3)
and
2 (X_t - X̂_t) · (v̂_t(X_t) - v̂_t(X̂_t)) ≤ 2 K̂ |X_t - X̂_t|²    (E.4)
to obtain
Q̇_t ≤ (1 + 2K̂) Q_t + ∫_{ℝ^d} |v_t(X_t(x)) - v̂_t(X_t(x))|² ρ_0(x) dx.    (E.5)
Therefore, by Gronwall's inequality and since Q_0 = 0, we deduce
Q_1 ≤ e^{1+2K̂} ∫_0^1 ∫_{ℝ^d} |v_t(X_t(x)) - v̂_t(X_t(x))|² ρ_0(x) dx dt = e^{1+2K̂} H(v̂).    (E.6)
Since W²_2(ρ_1, ρ̂_1) ≤ Q_1 by (27), we are done. □

Note that the proposition suggests regularizing G(v) using e.g.
G_λ(v) = G(v) + λ ∫_0^1 ∫_{ℝ^d} ‖∇v_t(x)‖² ρ_t(x) dx dt = G(v) + λ E[ ‖∇v_t(I_t(x_0, x_1))‖² ]    (E.7)
with some small λ > 0. In the numerical results presented in the paper, no such regularization was included.

F PROOF OF PROPOSITION 4 AND LINK WITH SCORE-BASED DIFFUSION MODELS

Assume that the interpolant is of the type (B.2), so that ∂_t I_t(x_0, x_1) = ȧ_t x_0 + ḃ_t x_1. For t ∈ (0, 1), let us write expression (14) for the probability current as
j_t(x) = ∫_{ℝ^d×ℝ^d} ( ȧ_t x_0 + ḃ_t x_1 ) δ(x - a_t x_0 - b_t x_1) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1
       = ∫_{ℝ^d×ℝ^d} ( (ḃ_t/b_t)(a_t x_0 + b_t x_1) + ( ȧ_t - (ḃ_t/b_t) a_t ) x_0 ) δ(x - a_t x_0 - b_t x_1) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1
       = (ḃ_t/b_t) x ρ_t(x) + ( ȧ_t - (ḃ_t/b_t) a_t ) ∫_{ℝ^d×ℝ^d} x_0 δ(x - a_t x_0 - b_t x_1) ρ_0(x_0) ρ_1(x_1) dx_0 dx_1.    (F.1)
If ρ_0(x_0) = (2π)^{-d/2} e^{-½|x_0|²}, we have the identity x_0 ρ_0(x_0) = -∇_{x_0} ρ_0(x_0). Inserting this identity in the last integral in (F.1) and integrating by parts using
∇_{x_0} δ(x - a_t x_0 - b_t x_1) = -a_t ∇_x δ(x - a_t x_0 - b_t x_1)    (F.2)
gives
j_t(x) = (ḃ_t/b_t) x ρ_t(x) - a_t ( ȧ_t - (ḃ_t/b_t) a_t ) ∇ρ_t(x).    (F.3)
This means that
v_t(x) = (ḃ_t/b_t) x - a_t ( ȧ_t - (ḃ_t/b_t) a_t ) ∇ log ρ_t(x).    (F.4)
Solving this expression for ∇ log ρ_t(x) and specializing it to the trigonometric interpolant with a_t, b_t given in (B.4) gives the first equation in (28). The second one can be obtained by taking the limit of this first equation using v_{t=1}(x) = 0 from (B.20) and l'Hôpital's rule. □

Note that (F.3) shows that, when the interpolant is of the type (B.2) and ρ_0(x_0) = (2π)^{-d/2} e^{-½|x_0|²}, the continuity equation (3) can also be written as the diffusion equation
∂_t ρ_t(x) + (ḃ_t/b_t) ∇·(x ρ_t(x)) = a_t ( ȧ_t - (ḃ_t/b_t) a_t ) Δρ_t(x).    (F.5)
Since we assume that ȧ_t ≤ 0 and ḃ_t ≥ 0 (see (B.3)), the diffusion coefficient in this equation is negative:
a_t ( ȧ_t - (ḃ_t/b_t) a_t ) ≤ 0.    (F.6)
This means that (F.5) is well-posed backward in time, i.e. it corresponds to a backward diffusion from ρ_{t=1} = ρ_1 to ρ_{t=0} = ρ_0 = (2π)^{-d/2} e^{-½|x|²}. Therefore, reversing this backward diffusion, similarly to what is done in score-based diffusion models, gives an SDE that transforms samples from ρ_0 into samples from ρ_1.
Interestingly, these forward and backward diffusion processes arise on the finite time interval t ∈ [0, 1]; notice, however, that both the drift ḃ_t/b_t and the diffusion coefficient are singular at t = 0, where b_t vanishes. This is unlike the velocity v_t(x), which is finite at t = 0, 1 and is given by (B.19).
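Relation (F.4) can be sanity-checked in one dimension, where everything is available in closed form. A sketch with ρ_0 = N(0, 1), ρ_1 = N(m, 1) and the trigonometric interpolant (illustrative names); here ρ_t = N(m sin(½πt), 1), whose score is -(x - m sin(½πt)):

```python
import numpy as np

m = 1.5  # illustrative target mean

def score_from_velocity(x, t):
    """Invert (F.4), v = (bdot/b) x - a (adot - bdot a / b) * score, for the score."""
    a, b = np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)
    adot, bdot = -0.5 * np.pi * b, 0.5 * np.pi * a
    v = m * bdot  # exact velocity for rho_0 = N(0,1), rho_1 = N(m,1): v_t(x) = m * bdot_t
    return (v - (bdot / b) * x) / (-a * (adot - bdot * a / b))

xp, tp = 0.3, 0.6
s_true = -(xp - m * np.sin(0.5 * np.pi * tp))  # analytic score of rho_t
s_est = score_from_velocity(xp, tp)
```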

G GENERALIZED INTERPOLANTS

Our construction can easily be generalized in various ways, e.g. by making the interpolant depend on additional latent variables to be averaged over. This enlarges the class of interpolant densities we can construct, which may prove useful to get simpler (or more optimal) velocity fields in the continuity equation (3). Let us consider one specific generalization of this type:

Factorized interpolants. Suppose that we decompose both ρ_0 and ρ_1 as
ρ_0(x) = Σ_{k=1}^K p_k ρ^k_0(x),  ρ_1(x) = Σ_{k=1}^K p_k ρ^k_1(x),    (G.1)
where K ∈ ℕ, ρ^k_0 and ρ^k_1 are normalized PDFs for each k = 1, …, K, and p_k > 0 with Σ_{k=1}^K p_k = 1. We can then introduce K interpolants I^k_t(x_0, x_1) with k = 1, …, K, each satisfying (4), and define the stochastic interpolant
x_t = I^k_t(x_0, x_1),  k ∼ p_k,  x_0 ∼ ρ^k_0,  x_1 ∼ ρ^k_1.    (G.2)
This corresponds to splitting the samples from ρ_0 and ρ_1 into K (soft) clusters, and only interpolating between samples in cluster k of ρ_0 and samples in cluster k of ρ_1. This clustering can either be done beforehand, based on some prior information we may have about ρ_0 and ρ_1, or be learned (more on this point below). It is easy to see that the PDF ρ_t of x_t is formally given by
ρ_t(x) = Σ_{k=1}^K p_k ∫_{ℝ^d×ℝ^d} δ(x - I^k_t(x_0, x_1)) ρ^k_0(x_0) ρ^k_1(x_1) dx_0 dx_1,    (G.3)
and that this density satisfies the continuity equation (B.11) with the current
j_t(x) = Σ_{k=1}^K p_k ∫_{ℝ^d×ℝ^d} ∂_t I^k_t(x_0, x_1) δ(x - I^k_t(x_0, x_1)) ρ^k_0(x_0) ρ^k_1(x_1) dx_0 dx_1.    (G.4)
Therefore this equation can be written as (3) with a velocity v_t(x) which is the unique minimizer of a generalization of the objective (9). We state this as:

Proposition G.1. The stochastic interpolant x_t defined in (G.2) with I^k_t(x_0, x_1) satisfying (4) for each k = 1, …
, K has a probability density ρ_t(x) that satisfies the continuity equation (3) with a velocity v_t(x) which is the unique minimizer over v̂_t(x) of the objective
G_K(v̂) = Σ_{k=1}^K p_k ∫_0^1 ∫_{ℝ^d×ℝ^d} ( |v̂_t(I^k_t)|² - 2 ∂_t I^k_t · v̂_t(I^k_t) ) ρ^k_0(x_0) ρ^k_1(x_1) dx_0 dx_1 dt    (G.5)
where we used the shorthand notations I^k_t = I^k_t(x_0, x_1) and ∂_t I^k_t = ∂_t I^k_t(x_0, x_1). In addition, the minimum value of this objective is given by
G_K(v) = -∫_0^1 ∫_{ℝ^d} |v_t(x)|² ρ_t(x) dx dt > -∞    (G.6)
and both statements remain true if G_K(v̂) is minimized over velocities that are gradient fields, in which case the minimizer is of the form v_t(x) = ∇φ_t(x) for some potential φ_t : ℝ^d → ℝ uniquely defined up to a constant.

We omit the proof of this proposition since it is similar to that of Proposition 1. The advantage of this construction is that it gives us the option to make the transport more optimal by maximizing min_{v̂} G_K(v̂) over the I^k_t and/or the partitioning used to define ρ^k_0, ρ^k_1, and p_k. For example, if we know that the target density ρ_1 has K clusters with relative mass p_k, we can define ρ_0 as a Gaussian mixture with K modes, set the weight of mode k to p_k, and maximize min_{v̂} G_K(v̂) over the mean m_k ∈ ℝ^d and the covariance C_k ∈ ℝ^{d×d} of each mode k = 1, …, K in the mixture density ρ_0.

H EXPERIMENTS FOR OPTIMAL TRANSPORT, PARAMETERIZING I_t, PARAMETERIZING ρ_0

As discussed in Section 2, the minimizer of the objective in equation (10) can be maximized with respect to the interpolant I_t as a route toward optimal transport. We motivate this by choosing a parametric class for the interpolant I_t and demonstrating that the max-min optimization in equation (D.10) can give rise to velocity fields that are easier to learn, resulting in better likelihood estimation. The 2D checkerboard example is an appealing test case because the transport is nontrivial.
In this case, we train the same flow as in Section 3.1, with and without optimizing the interpolant. We choose a simple parametric class for the interpolant given by
Î_t(x_0, x_1) = â_t x_0 + b̂_t x_1    (H.1)
where the parameters {α_m, β_m}_{m=1}^M define learnable â_t, b̂_t via the Fourier series expansions
â_t = cos( ½πt + (1/M) Σ_{m=1}^M α_m sin(mπt) ),  b̂_t = sin( ½πt + (1/M) Σ_{m=1}^M β_m sin(mπt) ).    (H.2)
This is but one possible parameterization. Another, for example, could be a rational quadratic spline, so that the endpoints of â_t, b̂_t can be properly constrained as they are above in the Fourier expansion. In Figure H.1, we plot the log likelihood, the learned interpolants â_t, b̂_t compared to their initializations, and the path length as it evolves over training epochs. With the learned interpolants, the path length is reduced, and the resultant velocity field, under the same number of training epochs, endows the model with better likelihood estimation, as shown in the left plot of the figure. For the path optimality experiments, M = 7 Fourier coefficients were used to parameterize the interpolant. This suggests that minimizing the transport cost can create models which are, practically, easier to learn. In addition to parameterizing the interpolant, one can also parameterize the base density ρ_0 in some simple parametric class. We show that including this in the max-min optimization given in equation (D.10) can further reduce the transport cost and improve the final log likelihood. The results are given in Figure H.2. We train an interpolant just as described above, but also allow the base density ρ_0 to be parameterized as a Gaussian with mean μ and covariance Σ. The inclusion of the learnable base density results in a significantly reduced path length, thereby bringing the model closer to optimal transport.
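The endpoint constraints a_0 = b_1 = 1, a_1 = b_0 = 0 hold for (H.2) automatically, because sin(mπt) vanishes at t = 0 and t = 1 for every integer m, regardless of the values of the coefficients. A quick check (random illustrative coefficients; perturbation assumed inside the trigonometric functions as written above):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 7
alpha, beta = rng.normal(size=M), rng.normal(size=M)
ms = np.arange(1, M + 1)

def a_hat(t):
    # a_hat(t) = cos(pi t / 2 + (1/M) sum_m alpha_m sin(m pi t))
    return np.cos(0.5 * np.pi * t + np.sum(alpha * np.sin(ms * np.pi * t)) / M)

def b_hat(t):
    # b_hat(t) = sin(pi t / 2 + (1/M) sum_m beta_m sin(m pi t))
    return np.sin(0.5 * np.pi * t + np.sum(beta * np.sin(ms * np.pi * t)) / M)
```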

I IMPLEMENTATION DETAILS FOR NUMERICAL EXPERIMENTS

Let {x_0^i}_{i=1}^N be N samples from the base density ρ_0, {x_1^j}_{j=1}^n be n samples from the target density ρ_1, and {t_k}_{k=1}^K be K samples from the uniform density on [0, 1]. Then an empirical estimate of the objective function in (9) is given by
$$G_{N,n,K}(\hat v) = \frac{1}{KnN}\sum_{k=1}^K\sum_{j=1}^n\sum_{i=1}^N \Big( \big|\hat v_{t_k}(I_{t_k}(x_0^i, x_1^j))\big|^2 - 2\,\partial_t I_{t_k}(x_0^i, x_1^j)\cdot \hat v_{t_k}(I_{t_k}(x_0^i, x_1^j)) \Big). \tag{I.1}$$
This calculation is parallelizable.

Table 3: Hyperparameters and architecture for tabular datasets.

The architectural information and hyperparameters of the models behind the likelihoods reported in Table 2 are presented in Table 3. ReLU (Nair & Hinton, 2010) activations were used throughout, except for the BSDS300 dataset, where ELU (Clevert et al., 2016) was used. Table formatting is based on (Durkan et al., 2019). In addition, the uniform sampling of time values in the empirical estimate (I.1) was reweighted using a Beta distribution, under the heuristic that the flow should be well trained near the target. This is in line with the statements under Proposition 1 that any weight w(t) preserves the same minimizer. The details for the image datasets are provided in Table 4. We built our models on the U-Net implementation provided in the public diffusion code by lucidrains (https://github.com/lucidrains/denoising-diffusion-pytorch), for which we are grateful. We use the sinusoidal time embedding but otherwise the default implementation, apart from changing the U-Net dimension multipliers, which are provided in the table. As in the tabular case, we reweight the time sampling to follow a Beta distribution. All models were trained on a single A100 GPU.
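A direct, loop-based transcription of the estimator (I.1) might look as follows; this is an illustrative sketch, with the velocity model, interpolant, and its time derivative passed in as callables (in practice the triple sum is vectorized over a minibatch, and the uniform times t_k can be replaced by Beta-distributed draws as described above):

```python
import numpy as np

def empirical_objective(v, I, dIdt, x0_samples, x1_samples, times):
    """Monte Carlo estimate of G_{N,n,K} in Eq. (I.1).

    v(t, x): velocity field; I(t, x0, x1): interpolant; dIdt(t, x0, x1):
    its time derivative. All (i, j, k) pairings enter the triple sum,
    which is embarrassingly parallel."""
    total = 0.0
    for tk in times:
        for xi in x0_samples:
            for xj in x1_samples:
                It = I(tk, xi, xj)
                vt = v(tk, It)
                total += vt @ vt - 2.0 * (dIdt(tk, xi, xj) @ vt)
    return total / (len(times) * len(x0_samples) * len(x1_samples))
```

For instance, with point masses at x_0 = a and x_1 = b and a linear interpolant, the constant velocity v = b − a attains the minimum value −|b − a|².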

I.1 DETAILS ON COMPUTATIONAL EFFICIENCY AND DEMONSTRATION OF CONVERGENCE GUARANTEE

Below, we show that the results achieved in Section 3 are driven by a model that can train significantly more efficiently than the maximum-likelihood approach to ODE flows. Following that, we provide an illustration of the convergence requirements on the objective defined in (11).

We briefly describe the experimental setup for testing the computational efficiency of our model as compared to the FFJORD maximum-likelihood learning method. We use the same network architectures for the interpolant flow and the FFJORD implementation, taking the architectures used in that paper: for the Gaussian, this is a 3-layer neural network with hidden widths of 64 units; for the 43-dimensional MiniBooNE target, it is a 3-layer neural network with hidden widths of 860 units. The right side of Figure I.1 shows that, under equivalent conditions, the interpolant flow can converge faster in the number of training steps, in addition to being cheaper per step. The extent of this benefit depends on both the hyperparameters and the dataset, so a general statement about convergence speeds is difficult to make. For this comparison, we averaged over 5 trials for each model and dataset, with the variance shown shaded around the curves.

As described in Section 1, the minimum of G(v) is bounded by the square of the path length of the map X_t(x). The shifted value of the objective G(v̂) in (11) can be tracked to ensure that the model velocity v̂_t(x) meets this requirement: the shifted objective must equal 0 when v̂_t(x) is taken to be the minimizer of G(v̂), so we can look for this signature during training of the interpolant flow. Figure I.2 displays this behavior for an interpolant flow trained on the POWER dataset: the shifted loss converges to the target value of 0 and remains there throughout training. This suggests that the dynamics of the stochastic optimization of G(v̂) are dual to the squared path length of the map.
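For intuition, completing the square in the quadratic objective (our notation, consistent with (I.1), as a sketch rather than a quotation of (11)) shows why a shifted loss has a known target value:

$$G(\hat v) = \mathbb{E}\int_0^1 \big|\hat v_t(I_t) - \partial_t I_t\big|^2\,dt \;-\; \mathbb{E}\int_0^1 |\partial_t I_t|^2\,dt,$$

so the shifted quantity $G(\hat v) + \mathbb{E}\int_0^1 |\partial_t I_t|^2\,dt$ is nonnegative, and by the Cauchy–Schwarz inequality $\mathbb{E}\int_0^1 |\partial_t I_t|^2\,dt \ge \mathbb{E}\big[\big(\int_0^1 |\partial_t I_t|\,dt\big)^2\big]$, the expected squared path length of the interpolant.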



Kingma & Dhariwal, 2018), as well as to recent advances in score-based diffusion: DDPM, DDPM++, VDM, Score SDE, ScoreFlow, and Soft Truncation (Ho et al., 2020; Nichol & Dhariwal, 2021; Kingma et al., 2021; Song et al., 2021b;a; Kim et al., 2022). We present results without data augmentation. Our models emit likelihoods, measured in bits per dim, that are competitive with diffusions on both datasets, with NLLs of 2.99 and 3.45. FID measures are close to those from diffusions, though slightly behind the best results. We note, however, that this is a first example of this type of model, and it has not been optimized with the training tricks that appear in many of the recent works on diffusions, such as exponential moving averages, truncations, learning-rate warm-ups, and the like. To demonstrate efficiency on larger domains, we train on the Oxford flowers dataset (Nilsback & Zisserman, 2006), which consists of images of resolution 128×128. We show example generated images in Figure 3.

4 DISCUSSION, CHALLENGES, AND FUTURE WORK



Figure 1: The density ρ_t(x) produced by the stochastic interpolant based on (5) between a standard Gaussian density and a Gaussian mixture density with three modes. Also shown in white are the flow lines of the map X_t(x) our method produces.

Discrete and continuous time flows. The first successes of normalizing flows with neural network parametrizations followed the work of Tabak & Turner (2013), with a finite set of steps along the map. By imposing structure on the transformation so that it remains an efficiently invertible diffeomorphism, the models of Rezende & Mohamed (2015); Dinh et al. (2017); Huang et al. (2018); Durkan et al. (

Figure 2: Left: 2-D density estimation. Right: Learning a flow map between densities when neither are analytically known.

Figure 3: Left: InterFlow samples after training on the 128×128 flowers dataset. Right: Samples from flows trained on ImageNet 32×32 (top) and CIFAR-10 (bottom).

Figure H.1: Comparison of characteristics and performance of the learned vs. nonparametric interpolant on the checkerboard density estimation task. Left: Comparison of the evolution of the log-likelihood. Middle: The initial versus learned interpolant coefficients â_t, b̂_t. Right: The path length of the learned vs. nonparametric interpolant.

Figure H.2: Comparison of characteristics and performance of the learned vs. nonparametric interpolant on the checkerboard density estimation task, while also optimizing the base density ρ_0. The parametric ρ_0 is given as a bivariate Gaussian N(μ, Σ). Left: Comparison of the evolution of the log-likelihood. Center left: The initial versus learned interpolant coefficients â_t, b̂_t. Center right: The learned covariance Σ in red compared to the original identity covariance matrix. Right: The path length of the learned vs. nonparametric interpolant.

Figure I.1: Left: Training speed for our method vs. MLE ODE flows, with a 400× speedup on MiniBooNE. Right: InterFlow shows more efficient likelihood ascent.

Figure I.1 shows a comparison of both the cost per training epoch and the convergence of the log-likelihood across epochs. We take the architecture of the vector field defined in the FFJORD paper for the 2-dimensional Gaussian and for MiniBooNE, and use it to define the vector field for the interpolant flow. The left side of Figure I.1 shows that the cost per iteration is constant for the interpolant flow, while it grows for MLE-based approaches as the ODE becomes more complicated to solve. The speedup grows with dimension, reaching 400× on MiniBooNE.

Figure I.2: Demonstration of the convergence diagnostic on POWER dataset. This is necessary but not sufficient for convergence. See Section 2 for definition of G.

Figure I.3: Uncurated samples from Oxford Flowers 128x128 model.

Figure I.4: Uncurated samples from ImageNet 32×32 model.

Figure I.5: Uncurated samples from CIFAR-10 model.

Left: Negative log likelihoods (NLL) computed on test data unseen during training (lower is better). Values of MADE, Real NVP, and Glow are quoted from the FFJORD paper. Values of OT-Flow, CPF, and NSP are quoted from their respective publications. Right: NLL and FID scores on unconditional image generation tasks for recent advanced models that emit a likelihood.

The flow is parameterized by a simple feed-forward neural network with ReLU (Nair & Hinton, 2010) activation functions. The network for each model has 3 layers, each of width equal to 256 hidden units.

2 and this quantity is finite by assumption, we deduce that lim_{K→∞} E[φ_t^K(I_t)|v_t(I_t)|²] exists and is bounded by E[|∂_t I_t|²]. Lemma B.2 implies that the objective H(v) in (17) is well-defined, and the second part of the statement of Proposition 1 follows from the argument given after (17). For the third part of the proposition we can then proceed as explained in the main text, starting from the Poisson equation (B.21).

Remark B.3. Let us show by a simple example that the inequality E[|v_t(I_t(x_0, x_1))|²] ≤ E[|∂_t I_t(x_0, x_1)|²] is not saturated in general. Assume that ρ_0(x) is a Gaussian density of mean zero and variance one, and ρ

Table 4: Hyperparameters and architecture for image datasets.

ACKNOWLEDGMENTS

We thank Gérard Ben Arous, Nick Boffi, Kyle Cranmer, Michael Lindsey, Jonathan Niles-Weed, and Esteban Tabak for helpful discussions about transport. MSA is supported by the National Science Foundation under the award PHY-2141336. MSA is grateful for the hospitality of the Center for Computational Quantum Physics at the Flatiron Institute. The Flatiron Institute is a division of the Simons Foundation. EVE is supported by the National Science Foundation under awards DMR-1420073, DMS-2012510, and DMS-2134216, by the Simons Collaboration on Wave Turbulence, Grant No. 617006, and by a Vannevar Bush Faculty Fellowship.

A BACKGROUND ON TRANSPORT MAPS AND THE CONTINUITY EQUATION

The following result is standard and can be found, e.g., in (Villani, 2009; Santambrogio, 2015).

Proposition A.1. Let ρ_t(x) satisfy the continuity equation
$$\partial_t \rho_t(x) + \nabla\cdot\big(v_t(x)\,\rho_t(x)\big) = 0. \tag{A.1}$$
Assume that v_t(x) is C¹ in both t and x for t ≥ 0 and globally Lipschitz in x. Then, given any t, t′ ≥ 0, the solution of (A.1) satisfies
$$\rho_t(X_{t',t}(x)) = \rho_{t'}(x)\,\exp\Big(-\int_{t'}^{t} \nabla\cdot v_s(X_{t',s}(x))\,ds\Big)$$
where X_{s,t} is the probability flow solution to
$$\frac{d}{dt}X_{s,t}(x) = v_t(X_{s,t}(x)), \qquad X_{s,s}(x) = x.$$
In addition, given any test function ϕ : Ω → R, we have
$$\int_\Omega \phi(x)\,\rho_t(x)\,dx = \int_\Omega \phi(X_{t',t}(x))\,\rho_{t'}(x)\,dx.$$
In words, Proposition A.1 states that an evaluation of the PDF ρ_t at a given point x may be obtained by evolving the probability flow equation (2) backwards to some earlier time t′ to find the point x′ that evolves to x at time t, assuming that ρ_{t′}(x′) is available. In particular, for t′ = 0, we obtain
$$\rho_t(X_{0,t}(x)) = \rho_0(x)\,\exp\Big(-\int_0^{t} \nabla\cdot v_s(X_{0,s}(x))\,ds\Big)$$
and
$$\log \rho_t(X_{0,t}(x)) = \log \rho_0(x) - \int_0^{t} \nabla\cdot v_s(X_{0,s}(x))\,ds.$$
Proof. The assumed C¹ and globally Lipschitz conditions on v_t guarantee global existence (on t ≥ 0) and uniqueness of the solution to (2). Differentiating ρ_t(X_{t′,t}(x)) with respect to t and using (2) and (A.1), we deduce
$$\frac{d}{dt}\rho_t(X_{t',t}(x)) = \partial_t \rho_t(X_{t',t}(x)) + v_t(X_{t',t}(x))\cdot\nabla\rho_t(X_{t',t}(x)) = -\nabla\cdot v_t(X_{t',t}(x))\,\rho_t(X_{t',t}(x)) \tag{A.7}$$
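The t′ = 0 identity suggests the standard numerical recipe for likelihood evaluation: integrate the probability flow (2) jointly with the divergence of the velocity. A minimal forward-Euler sketch, assuming ∇·v_t is available as a callable in closed form (in practice one would use a higher-order solver and, in high dimension, a stochastic divergence estimator):

```python
import numpy as np

def flow_logdensity(x0, v, div_v, logp0, n_steps=1000, t1=1.0):
    """Jointly integrate the probability-flow ODE (2) and the log-density,
        x' = v_t(x),   d(log p)/dt = -div v_t(x),
    from t = 0 to t = t1 with forward Euler. Returns X_{0,t1}(x0) and
    log rho_{t1} evaluated along the trajectory."""
    dt = t1 / n_steps
    x = np.array(x0, dtype=float)
    logp = logp0(x)
    for k in range(n_steps):
        t = k * dt
        logp -= div_v(t, x) * dt   # accumulate -div v along the path
        x = x + v(t, x) * dt       # advance the probability flow
    return x, logp
```

For a linear field v_t(x) = c x, the flow is X_{0,t}(x) = e^{ct} x and the log-density correction is −c·d·t, which the sketch reproduces up to Euler discretization error.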

