FLOW MATCHING FOR GENERATIVE MODELING

Abstract

We introduce a new paradigm for generative modeling built on Continuous Normalizing Flows (CNFs), allowing us to train CNFs at unprecedented scale. Specifically, we present the notion of Flow Matching (FM), a simulation-free approach for training CNFs based on regressing vector fields of fixed conditional probability paths. Flow Matching is compatible with a general family of Gaussian probability paths for transforming between noise and data samples, which subsumes existing diffusion paths as specific instances. Interestingly, we find that employing FM with diffusion paths results in a more robust and stable alternative for training diffusion models. Furthermore, Flow Matching opens the door to training CNFs with other, non-diffusion probability paths. An instance of particular interest is using Optimal Transport (OT) displacement interpolation to define the conditional probability paths. These paths are more efficient than diffusion paths, provide faster training and sampling, and result in better generalization. Training CNFs using Flow Matching on ImageNet leads to consistently better performance than alternative diffusion-based methods in terms of both likelihood and sample quality, and allows fast and reliable sample generation using off-the-shelf numerical ODE solvers.

1. INTRODUCTION

Deep generative models are a class of deep learning algorithms aimed at estimating and sampling from an unknown data distribution. The recent influx of amazing advances in generative modeling, e.g., for image generation (Ramesh et al., 2022; Rombach et al., 2022), is mostly facilitated by the scalable and relatively stable training of diffusion-based models (Ho et al., 2020; Song et al., 2020b). However, the restriction to simple diffusion processes leads to a rather confined space of sampling probability paths, resulting in very long training times and the need to adopt specialized methods (e.g., Song et al. (2020a); Zhang & Chen (2022)) for efficient sampling. In this work we consider the general and deterministic framework of Continuous Normalizing Flows (CNFs; Chen et al. (2018)). CNFs are capable of modeling arbitrary probability paths and are in particular known to encompass the probability paths modeled by diffusion processes (Song et al., 2021).

Figure 1: Unconditional ImageNet-128 samples of a CNF trained using Flow Matching with Optimal Transport probability paths.

However, aside from diffusion, which can be trained efficiently via, e.g., denoising score matching (Vincent, 2011), no scalable CNF training algorithms are known. Indeed, maximum likelihood training (e.g., Grathwohl et al. (2018)) requires expensive numerical ODE simulations, while existing simulation-free methods involve either intractable integrals (Rozen et al., 2021) or biased gradients (Ben-Hamu et al., 2022). The goal of this work is to propose Flow Matching (FM), an efficient simulation-free approach to training CNF models, allowing the adoption of general probability paths to supervise CNF training. Importantly, FM breaks the barriers for scalable CNF training beyond diffusion, and sidesteps the need to reason about diffusion processes in order to work directly with probability paths.
In particular, we propose the Flow Matching objective (Section 3), a simple and intuitive training objective to regress onto a target vector field that generates a desired probability path. We first show that we can construct such target vector fields through per-example (i.e., conditional) formulations. Then, inspired by denoising score matching, we show that a per-example training objective, termed Conditional Flow Matching (CFM), provides equivalent gradients and does not require explicit knowledge of the intractable target vector field. Furthermore, we discuss a general family of per-example probability paths (Section 4) that can be used for Flow Matching, which subsumes existing diffusion paths as special instances. Even on diffusion paths, we find that using FM provides more robust and stable training, and achieves superior performance compared to score matching. Furthermore, this family of probability paths also includes a particularly interesting case: the vector field that corresponds to an Optimal Transport (OT) displacement interpolant (McCann, 1997). We find that conditional OT paths are simpler than diffusion paths, forming straight line trajectories whereas diffusion paths result in curved paths. These properties seem to empirically translate to faster training, faster generation, and better performance.

We empirically validate Flow Matching and the construction via Optimal Transport paths on ImageNet, a large and highly diverse image dataset. We find that we can easily train models to achieve favorable performance in both likelihood estimation and sample quality amongst competing diffusion-based methods. Furthermore, we find that our models produce better trade-offs between computational cost and sample quality compared to prior methods. Figure 1 depicts selected unconditional ImageNet 128×128 samples from our model.

2. PRELIMINARIES: CONTINUOUS NORMALIZING FLOWS

Let R^d denote the data space with data points x = (x^1, . . . , x^d) ∈ R^d. Two important objects we use in this paper are: the probability density path p : [0, 1] × R^d → R_{>0}, which is a time-dependent probability density function, i.e., ∫ p_t(x) dx = 1, and a time-dependent vector field, v : [0, 1] × R^d → R^d. A vector field v_t can be used to construct a time-dependent diffeomorphic map, called a flow, ϕ : [0, 1] × R^d → R^d, defined via the ordinary differential equation (ODE):

d/dt ϕ_t(x) = v_t(ϕ_t(x)),    (1)
ϕ_0(x) = x.    (2)

Previously, Chen et al. (2018) suggested modeling the vector field v_t with a neural network, v_t(x; θ), where θ ∈ R^p are its learnable parameters, which in turn leads to a deep parametric model of the flow ϕ_t, called a Continuous Normalizing Flow (CNF). A CNF is used to reshape a simple prior density p_0 (e.g., pure noise) to a more complicated one, p_1, via the push-forward equation

p_t = [ϕ_t]_* p_0,    (3)

where the push-forward (or change of variables) operator _* is defined by

[ϕ_t]_* p_0(x) = p_0(ϕ_t^{-1}(x)) det [∂ϕ_t^{-1}/∂x (x)].    (4)

A vector field v_t is said to generate a probability density path p_t if its flow ϕ_t satisfies equation 3. One practical way to test whether a vector field generates a probability path is the continuity equation, which is a key component in our proofs; see Appendix A. We recap more information on CNFs, in particular how to compute the probability p_1(x) at an arbitrary point x ∈ R^d, in Appendix C.
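As an illustration (our sketch, not from the paper), the flow defined by equations 1 and 2 can be approximated with a simple fixed-step Euler integrator; the linear vector field and step count below are illustrative choices:

```python
import numpy as np

def integrate_flow(v, x0, n_steps=1000):
    """Euler-integrate the flow ODE d/dt phi_t(x) = v_t(phi_t(x)),
    phi_0(x) = x0, from t = 0 to t = 1."""
    dt = 1.0 / n_steps
    x = np.array(x0, dtype=float)
    for i in range(n_steps):
        x = x + dt * v(i * dt, x)
    return x

# A linear contraction v_t(x) = -x has the closed-form flow
# phi_t(x) = e^{-t} x, so phi_1(x) should be close to e^{-1} x.
phi1 = integrate_flow(lambda t, x: -x, np.array([1.0, 2.0]))
```

In practice an adaptive solver (such as the dopri5 solver used in the experiments) replaces the fixed Euler steps, but the interface is the same: a learned v_t(x; θ) is queried along the trajectory.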

3. FLOW MATCHING

Let x_1 denote a random variable distributed according to some unknown data distribution q(x_1). We assume we only have access to data samples from q(x_1) but have no access to the density function itself. Furthermore, we let p_t be a probability path such that p_0 = p is a simple distribution, e.g., the standard normal distribution p(x) = N(x|0, I), and let p_1 be approximately equal in distribution to q. We will later discuss how to construct such a path. The Flow Matching objective is then designed to match this target probability path, which will allow us to flow from p_0 to p_1.

Given a target probability density path p_t(x) and a corresponding vector field u_t(x), which generates p_t(x), we define the Flow Matching (FM) objective as

L_FM(θ) = E_{t, p_t(x)} ∥v_t(x) - u_t(x)∥²,    (5)

where θ denotes the learnable parameters of the CNF vector field v_t (as defined in Section 2), t ∼ U[0, 1] (the uniform distribution), and x ∼ p_t(x). Simply put, the FM loss regresses the vector field u_t with a neural network v_t. Upon reaching zero loss, the learned CNF model will generate p_t(x).

Flow Matching is a simple and attractive objective, but naively, on its own, it is intractable to use in practice since we have no prior knowledge of an appropriate p_t and u_t. There are many choices of probability paths that can satisfy p_1(x) ≈ q(x), and, more importantly, we generally do not have access to a closed-form u_t that generates the desired p_t. In this section, we show that we can construct both p_t and u_t using probability paths and vector fields that are only defined per sample, and that an appropriate method of aggregation provides the desired p_t and u_t. Furthermore, this construction allows us to create a much more tractable objective for Flow Matching.

3.1. CONSTRUCTING p t , u t FROM CONDITIONAL PROBABILITY PATHS AND VECTOR FIELDS

A simple way to construct a target probability path is via a mixture of simpler probability paths: given a particular data sample x_1, we denote by p_t(x|x_1) a conditional probability path such that it satisfies p_0(x|x_1) = p(x) at time t = 0, and we design p_1(x|x_1) at t = 1 to be a distribution concentrated around x = x_1, e.g., p_1(x|x_1) = N(x|x_1, σ² I), a normal distribution with mean x_1 and a sufficiently small standard deviation σ > 0. Marginalizing the conditional probability paths over q(x_1) gives rise to the marginal probability path

p_t(x) = ∫ p_t(x|x_1) q(x_1) dx_1,    (6)

where, in particular at time t = 1, the marginal probability p_1 is a mixture distribution that closely approximates the data distribution q:

p_1(x) = ∫ p_1(x|x_1) q(x_1) dx_1 ≈ q(x).    (7)

Interestingly, we can also define a marginal vector field by "marginalizing" over the conditional vector fields in the following sense (we assume p_t(x) > 0 for all t and x):

u_t(x) = ∫ u_t(x|x_1) (p_t(x|x_1) q(x_1) / p_t(x)) dx_1,    (8)

where u_t(·|x_1) : R^d → R^d is a conditional vector field that generates p_t(·|x_1). It may not seem apparent, but this way of aggregating the conditional vector fields actually results in the correct vector field for modeling the marginal probability path.

Our first key observation is this: the marginal vector field (equation 8) generates the marginal probability path (equation 6). This provides a surprising connection between the conditional VFs (those that generate conditional probability paths) and the marginal VF (the one that generates the marginal probability path). This connection allows us to break down the unknown and intractable marginal VF into simpler conditional VFs, which are much simpler to define as they only depend on a single data sample. We formalize this in the following theorem. Theorem 1.
Given vector fields u t (x|x 1 ) that generate conditional probability paths p t (x|x 1 ), for any distribution q(x 1 ), the marginal vector field u t in equation 8 generates the marginal probability path p t in equation 6, i.e., u t and p t satisfy the continuity equation (equation 25). The full proofs for our theorems are all provided in Appendix B. Theorem 1 can also be derived from the Diffusion Mixture Representation Theorem in Peluchetti (2021) that provides a formula for the marginal drift and diffusion coefficients in diffusion SDEs.
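Theorem 1 can be sanity-checked numerically. The following small 1D example (our illustration, with made-up schedules) takes q supported on two points with Gaussian conditional paths, forms the marginal path of equation 6 and the marginal field of equation 8, and verifies that they satisfy the continuity equation up to finite-difference error:

```python
import numpy as np

# q puts mass 1/2 on x1 = -1 and x1 = +1; each conditional path is
# N(t * x1, (1 - 0.5 t)^2), with conditional field
# u_t(x|x1) = (sigma'/sigma)(x - mu) + mu'  (illustrative schedules).
X1 = np.array([-1.0, 1.0])

def p_cond(t, x, x1):
    s = 1.0 - 0.5 * t
    return np.exp(-0.5 * ((x - t * x1) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def u_cond(t, x, x1):
    s = 1.0 - 0.5 * t
    return (-0.5 / s) * (x - t * x1) + x1

def p_marg(t, x):
    return 0.5 * sum(p_cond(t, x, x1) for x1 in X1)      # equation 6

def u_marg(t, x):
    num = 0.5 * sum(u_cond(t, x, x1) * p_cond(t, x, x1) for x1 in X1)
    return num / p_marg(t, x)                            # equation 8

def residual(t, x, h=1e-5):
    """d/dt p_t(x) + d/dx (p_t(x) u_t(x)), which should vanish."""
    dpdt = (p_marg(t + h, x) - p_marg(t - h, x)) / (2 * h)
    flux = lambda y: p_marg(t, y) * u_marg(t, y)
    return dpdt + (flux(x + h) - flux(x - h)) / (2 * h)
```

The residual is zero (up to O(h²) discretization error) at any (t, x), matching Theorem 1.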

3.2. CONDITIONAL FLOW MATCHING

Unfortunately, due to the intractable integrals in the definitions of the marginal probability path and VF (equations 6 and 8), it is still intractable to compute u_t, and consequently, intractable to naively compute an unbiased estimator of the original Flow Matching objective. Instead, we propose a simpler objective which, surprisingly, has the same optima as the original objective. Specifically, we consider the Conditional Flow Matching (CFM) objective

L_CFM(θ) = E_{t, q(x_1), p_t(x|x_1)} ∥v_t(x) - u_t(x|x_1)∥²,    (9)

where t ∼ U[0, 1], x_1 ∼ q(x_1), and now x ∼ p_t(x|x_1). Unlike the FM objective, the CFM objective allows us to easily sample unbiased estimates as long as we can efficiently sample from p_t(x|x_1) and compute u_t(x|x_1), both of which can be easily done as they are defined on a per-sample basis.

Our second key observation is therefore: the FM (equation 5) and CFM (equation 9) objectives have identical gradients w.r.t. θ. That is, optimizing the CFM objective is equivalent (in expectation) to optimizing the FM objective. Consequently, this allows us to train a CNF to generate the marginal probability path p_t, which in particular approximates the unknown data distribution q at t = 1, without ever needing access to either the marginal probability path or the marginal vector field. We simply need to design suitable conditional probability paths and vector fields. We formalize this property in the following theorem. Theorem 2. Assuming that p_t(x) > 0 for all x ∈ R^d and t ∈ [0, 1], then, up to a constant independent of θ, L_CFM and L_FM are equal. Hence, ∇_θ L_FM(θ) = ∇_θ L_CFM(θ).
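As a sketch of how the CFM objective is estimated in practice (our illustration; the conditional path, the "network", and the batch size below are placeholder choices, not the paper's setup), one draws t, x_1, and x ∼ p_t(x|x_1) and regresses onto the closed-form conditional target:

```python
import numpy as np

def cfm_loss(v_theta, sample_x1, sample_xt, u_cond, batch=512, seed=0):
    """Monte Carlo estimate of L_CFM (equation 9): t ~ U[0,1], x1 ~ q,
    x ~ p_t(.|x1); regress v_theta(t, x) onto the conditional field
    u_t(x|x1)."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(size=(batch, 1))
    x1 = sample_x1(rng, batch)
    xt = sample_xt(rng, t, x1)
    diff = v_theta(t, xt) - u_cond(t, xt, x1)
    return float(np.mean(np.sum(diff ** 2, axis=1)))

# Placeholder example in 2D: q = N(0, I) and the conditional path
# N(t * x1, I), whose conditional field is simply u_t(x|x1) = x1.
loss = cfm_loss(
    v_theta=lambda t, x: np.zeros_like(x),                 # untrained "network"
    sample_x1=lambda rng, n: rng.standard_normal((n, 2)),
    sample_xt=lambda rng, t, x1: t * x1 + rng.standard_normal(x1.shape),
    u_cond=lambda t, x, x1: x1,
)
```

A real training loop would replace the zero predictor with a neural network and minimize this loss with stochastic gradient descent.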

4. CONDITIONAL PROBABILITY PATHS AND VECTOR FIELDS

The Conditional Flow Matching objective works with any choice of conditional probability path and conditional vector field. In this section, we discuss the construction of p_t(x|x_1) and u_t(x|x_1) for a general family of Gaussian conditional probability paths. Namely, we consider conditional probability paths of the form

p_t(x|x_1) = N(x | µ_t(x_1), σ_t(x_1)² I),    (10)

where µ : [0, 1] × R^d → R^d is the time-dependent mean of the Gaussian distribution, while σ : [0, 1] × R^d → R_{>0} describes a time-dependent scalar standard deviation (std). We set µ_0(x_1) = 0 and σ_0(x_1) = 1, so that all conditional probability paths converge to the same standard Gaussian noise distribution at t = 0, p(x) = N(x|0, I). We then set µ_1(x_1) = x_1 and σ_1(x_1) = σ_min, which is chosen sufficiently small so that p_1(x|x_1) is a concentrated Gaussian distribution centered at x_1.

There is an infinite number of vector fields that generate any particular probability path (e.g., by adding a divergence-free component to the continuity equation, see equation 25), but the vast majority of these differ by components that leave the underlying distribution invariant, for instance rotational components when the distribution is rotation-invariant, leading to unnecessary extra compute. We decide to use the simplest vector field, corresponding to a canonical transformation for Gaussian distributions. Specifically, consider the flow (conditioned on x_1)

ψ_t(x) = σ_t(x_1) x + µ_t(x_1).    (11)

When x is distributed as a standard Gaussian, ψ_t(x) is the affine transformation that maps it to a normally-distributed random variable with mean µ_t(x_1) and std σ_t(x_1). That is to say, according to equation 4, ψ_t pushes the noise distribution p_0(x|x_1) = p(x) to p_t(x|x_1), i.e.,

[ψ_t]_* p(x) = p_t(x|x_1).    (12)

This flow then provides a vector field that generates the conditional probability path:

d/dt ψ_t(x) = u_t(ψ_t(x)|x_1).    (13)
Reparameterizing p_t(x|x_1) in terms of just x_0 and plugging equation 13 into the CFM loss, we get

L_CFM(θ) = E_{t, q(x_1), p(x_0)} ∥v_t(ψ_t(x_0)) - (d/dt) ψ_t(x_0)∥².    (14)

Since ψ_t is a simple (invertible) affine map, we can use equation 13 to solve for u_t in closed form. Let f′ denote the derivative with respect to time, i.e., f′ = d/dt f, for a time-dependent function f.

Theorem 3. Let p_t(x|x_1) be a Gaussian probability path as in equation 10, and ψ_t its corresponding flow map as in equation 11. Then the unique vector field that defines ψ_t has the form

u_t(x|x_1) = (σ′_t(x_1) / σ_t(x_1)) (x - µ_t(x_1)) + µ′_t(x_1).    (15)

Consequently, u_t(x|x_1) generates the Gaussian path p_t(x|x_1).
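Theorem 3 can be checked numerically. Below is an illustrative check (our code, with arbitrary differentiable schedules, not the paper's) that the defining relation d/dt ψ_t(x) = u_t(ψ_t(x)|x_1) holds for the closed form of equation 15:

```python
import numpy as np

# Illustrative schedules: mu_t(x1) = t * x1, sigma_t(x1) = 1 - 0.9 t,
# with their exact time derivatives.
def mu(t, x1):     return t * x1
def sigma(t, x1):  return 1.0 - 0.9 * t
def dmu(t, x1):    return x1
def dsigma(t, x1): return -0.9

def u_cond(t, x, x1):
    """Conditional vector field of Theorem 3 (equation 15)."""
    return dsigma(t, x1) / sigma(t, x1) * (x - mu(t, x1)) + dmu(t, x1)

def psi(t, x, x1):
    """Conditional flow of equation 11."""
    return sigma(t, x1) * x + mu(t, x1)

# Check d/dt psi_t(x) == u_t(psi_t(x) | x1) by central finite differences.
x, x1, t, h = np.array([0.3]), np.array([1.5]), 0.4, 1e-5
lhs = (psi(t + h, x, x1) - psi(t - h, x, x1)) / (2 * h)
rhs = u_cond(t, psi(t, x, x1), x1)
```

Any other differentiable (µ_t, σ_t) pair satisfying the boundary conditions passes the same check.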

4.1. SPECIAL INSTANCES OF GAUSSIAN CONDITIONAL PROBABILITY PATHS

Our formulation is fully general for arbitrary functions µ_t(x_1) and σ_t(x_1), and we can set them to any differentiable function satisfying the desired boundary conditions. We first discuss the special cases that recover probability paths corresponding to previously-used diffusion processes. Since we directly work with probability paths, we can simply depart from reasoning about diffusion processes altogether. Therefore, in the second example below, we directly formulate a probability path based on the Wasserstein-2 optimal transport solution as an interesting instance.

Example I: Diffusion conditional VFs. Diffusion models start with data points and gradually add noise until it approximates pure noise. These can be formulated as stochastic processes, which have strict requirements in order to obtain closed-form representations at arbitrary times t, resulting in Gaussian conditional probability paths p_t(x|x_1) with specific choices of mean µ_t(x_1) and std σ_t(x_1) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2020b). For example, the reversed (noise→data) Variance Exploding (VE) path has the form

p_t(x|x_1) = N(x | x_1, σ²_{1-t} I),    (16)

where σ_t is an increasing function, σ_0 = 0, and σ_1 ≫ 1. Equation 16 provides the choices µ_t(x_1) = x_1 and σ_t(x_1) = σ_{1-t}. Plugging these into equation 15 of Theorem 3 we get

u_t(x|x_1) = -(σ′_{1-t} / σ_{1-t}) (x - x_1).    (17)

The reversed (noise→data) Variance Preserving (VP) diffusion path has the form

p_t(x|x_1) = N(x | α_{1-t} x_1, (1 - α²_{1-t}) I),  where  α_t = e^{-T(t)/2},  T(t) = ∫_0^t β(s) ds,    (18)

and β is the noise scale function. Equation 18 provides the choices µ_t(x_1) = α_{1-t} x_1 and σ_t(x_1) = √(1 - α²_{1-t}). Plugging these into equation 15 of Theorem 3 we get

u_t(x|x_1) = (α′_{1-t} / (1 - α²_{1-t})) (α_{1-t} x - x_1) = -(T′(1-t)/2) · (e^{-T(1-t)} x - e^{-T(1-t)/2} x_1) / (1 - e^{-T(1-t)}).    (19)
Our construction of the conditional VF u_t(x|x_1) does in fact coincide with the vector field previously used in the deterministic probability flow (Song et al. (2020b), equation 13) when restricted to these conditional diffusion processes; see details in Appendix D. Nevertheless, combining the diffusion conditional VF with the Flow Matching objective offers an attractive training alternative to existing score matching approaches, one we find to be more stable and robust in our experiments. Another important observation is that, as these probability paths were previously derived as solutions of diffusion processes, they do not actually reach a true noise distribution in finite time. In practice, p_0(x) is simply approximated by a suitable Gaussian distribution for sampling and likelihood evaluation. Our construction, in contrast, provides full control over the probability path, and we can directly set µ_t and σ_t, as we do next.

Example II: Optimal Transport conditional VFs. An arguably more natural choice for conditional probability paths is to define the mean and the std to simply change linearly in time, i.e.,

µ_t(x_1) = t x_1,  and  σ_t(x_1) = 1 - (1 - σ_min) t.    (20)

According to Theorem 3, this path is generated by the VF

u_t(x|x_1) = (x_1 - (1 - σ_min) x) / (1 - (1 - σ_min) t).    (21)

[Figure: the diffusion path's conditional score function and the OT path's conditional vector field, shown at t = 0, 1/3, 2/3, 1.]

The corresponding conditional flow is

ψ_t(x) = (1 - (1 - σ_min) t) x + t x_1,    (22)

and in this case, the CFM loss (see equations 9 and 14) takes the form

L_CFM(θ) = E_{t, q(x_1), p(x_0)} ∥v_t(ψ_t(x_0)) - (x_1 - (1 - σ_min) x_0)∥².    (23)

Allowing the mean and std to change linearly not only leads to simple and intuitive paths, but it is actually also optimal in the following sense: the conditional flow ψ_t(x) is in fact the Optimal Transport (OT) displacement map between the two Gaussians p_0(x|x_1) and p_1(x|x_1).
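The OT-path CFM loss (equation 23) leads to a particularly simple training-pair sampler. The following sketch (our code; the network, optimizer, and data loader are omitted) produces the regression inputs and targets:

```python
import numpy as np

SIGMA_MIN = 1e-4  # sigma_min; the exact value is a design choice

def ot_cfm_pairs(x1, rng, t=None):
    """Sample (t, x_t, target) for the OT-path CFM loss (equation 23):
    x_t = psi_t(x0) = (1 - (1 - sigma_min) t) x0 + t x1, x0 ~ N(0, I),
    target = x1 - (1 - sigma_min) x0."""
    n, d = x1.shape
    if t is None:
        t = rng.uniform(size=(n, 1))
    x0 = rng.standard_normal((n, d))
    xt = (1 - (1 - SIGMA_MIN) * t) * x0 + t * x1
    target = x1 - (1 - SIGMA_MIN) * x0
    return t, xt, target

# Training then minimizes mean ||v_theta(t, xt) - target||^2 over batches.
rng = np.random.default_rng(0)
x1 = rng.standard_normal((8, 2))        # stand-in for a data batch
t, xt, target = ot_cfm_pairs(x1, rng, t=np.ones((8, 1)))
```

Note that the target is constant in t along each conditional trajectory, reflecting the straight-line paths discussed above.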
The OT interpolant, which is a probability path, is defined by (see Definition 1.1 in McCann (1997))

p_t = [(1 - t) id + t ψ]_* p_0,    (24)

where ψ : R^d → R^d is the OT map pushing p_0 to p_1, id denotes the identity map, i.e., id(x) = x, and (1 - t) id + t ψ is called the OT displacement map. Example 1.7 in McCann (1997) shows that, in our case of two Gaussians where the first is a standard one, the OT displacement map takes the form of equation 22.

5. RELATED WORK

Originally, CNFs were trained with the maximum likelihood objective, but this involves expensive ODE simulations for the forward and backward propagation, resulting in high time complexity due to the sequential nature of ODE simulations. Although some works demonstrated the capability of CNF generative models for image synthesis (Grathwohl et al., 2018), scaling up to very high dimensional images is inherently difficult. A number of works attempted to regularize the ODE to be easier to solve, e.g., using augmentation (Dupont et al., 2019), adding regularization terms (Yang & Karniadakis, 2019; Finlay et al., 2020; Onken et al., 2021; Tong et al., 2020; Kelly et al., 2020), or stochastically sampling the integration interval (Du et al., 2022). These works merely aim to regularize the ODE but do not change the fundamental training algorithm. Another approach to simulation-free training relies on the construction of a diffusion process to indirectly define the target probability path (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song & Ermon, 2019). Song et al. (2020b) show that diffusion models are trained using denoising score matching (Vincent, 2011), a conditional objective that provides unbiased gradients with respect to the score matching objective. Conditional Flow Matching draws inspiration from this result, but generalizes to matching vector fields directly.
Due to their ease of scalability, diffusion models have received increased attention, producing a variety of improvements such as loss rescaling (Song et al., 2021), adding classifier guidance along with architectural improvements (Dhariwal & Nichol, 2021), and learning the noise schedule (Nichol & Dhariwal, 2021; Kingma et al., 2021). However, Nichol & Dhariwal (2021) and Kingma et al. (2021) consider only a restricted setting of Gaussian conditional paths defined by simple diffusion processes with a single parameter; in particular, this setting does not include our conditional OT path. In another line of work, De Bortoli et al. (2021), Wang et al. (2021), and Peluchetti (2021) proposed finite-time diffusion constructions via diffusion bridge theory, resolving the approximation error incurred by infinite-time denoising constructions. Existing works also make use of a connection between diffusion processes and continuous normalizing flows with the same probability path (Maoutsa et al., 2020b; Song et al., 2020b; 2021).

6. EXPERIMENTS

We explore the empirical benefits of using Flow Matching on the image datasets of CIFAR-10 (Krizhevsky et al., 2009) and ImageNet at resolutions 32, 64, and 128 (Chrabaszcz et al., 2017; Deng et al., 2009). We also ablate the choice of diffusion path in Flow Matching, in particular between the standard variance preserving diffusion path and the optimal transport path. We discuss how sample generation is improved by directly parameterizing the generating vector field and using the Flow Matching objective. Lastly, we show Flow Matching can also be used in the conditional generation setting. Unless otherwise specified, we evaluate likelihood and samples from the model using dopri5 (Dormand & Prince, 1980) at absolute and relative tolerances of 1e-5. Generated samples can be found in the Appendix, and all implementation details are in Appendix E.

Table 1 (right, excerpt): ImageNet 128×128.

Model                                  NLL↓   FID↓
PacGAN2 (Lin et al., 2018)              -     57.5
Logo-GAN-AE (Sage et al., 2018)         -     50.9
Self-cond. GAN (Lučić et al., 2019)     -     41.7
Uncond. BigGAN (Lučić et al., 2019)     -     25.3
PGMGAN (Armandpour et al., 2021)        -     21.7
FM w/ OT                               2.90   20.9

Table 1 also reports the averaged number of function evaluations (NFE) required for the adaptive solver to reach a prespecified numerical tolerance, averaged over 50k samples. All models are trained using the same architecture, hyperparameter values, and number of training iterations, where baselines are allowed more iterations for better convergence. Note that these are unconditional models. On both CIFAR-10 and ImageNet, FM-OT consistently obtains the best results across all our quantitative measures compared to competing methods. We notice a higher than usual FID on CIFAR-10 compared to previous works (Ho et al., 2020; Song et al., 2020b; 2021), which can possibly be explained by the fact that our architecture was not optimized for CIFAR-10. Secondly, Table 1 (right) compares a model trained using Flow Matching with the OT path on ImageNet at resolution 128×128.
Our FID is state-of-the-art, with the exception of IC-GAN (Casanova et al., 2021).

Sample paths. We first qualitatively visualize the difference in sampling paths between diffusion and OT. Figure 6 shows samples from ImageNet-64 models using identical random seeds, where we find that the OT-path model starts generating images sooner than the diffusion-path models, in which noise dominates the image until the very last time point. We additionally depict the probability density paths in 2D generation of a checkerboard pattern, Figure 4 (left), noticing a similar trend.

Low-cost samples. We next switch to fixed-step solvers and compare low (≤100) NFE samples computed with the ImageNet-32 models from Table 1. In Figure 7 (left), we compare the per-pixel MSE of low-NFE solutions against 1000-NFE solutions (using 256 random noise seeds), and notice that the FM with OT model produces the best numerical error in terms of computational cost, requiring roughly only 60% of the NFEs to reach the same error threshold as diffusion models. Secondly, Figure 7 (right) shows how FID changes as a function of the computational cost, where we find FM with OT is able to achieve decent FID even at very low NFE values, producing a better trade-off between sample quality and cost compared to ablated models. Figure 4 (right) shows low-cost sampling effects for the 2D checkerboard experiment.

Lastly, we experimented with Flow Matching for conditional image generation, in particular upsampling images from 64×64 to 256×256. We follow the evaluation procedure of Saharia et al. (2022) and compute the FID of the upsampled validation images; baselines include reference (FID of the original validation set) and regression. Results are in Table 2. Upsampled image samples are shown in Figures 14 and 15 in the Appendix.
FM-OT achieves similar PSNR and SSIM values to Saharia et al. (2022) while considerably improving on FID and IS, which, as argued by Saharia et al. (2022), are a better indication of generation quality.

7. CONCLUSION

We introduced Flow Matching, a new simulation-free framework for training Continuous Normalizing Flow models, relying on conditional constructions to effortlessly scale to very high dimensions. Furthermore, the FM framework provides an alternative view on diffusion models, and suggests forsaking the stochastic/diffusion construction in favor of more directly specifying the probability path, allowing us to, e.g., construct paths that enable faster sampling and/or improve generation. We experimentally showed the ease of training and sampling when using the Flow Matching framework, and in the future, we expect FM to open the door to a multitude of probability paths (e.g., non-isotropic Gaussians or more general kernels altogether).

SOCIAL RESPONSIBILITY

Alongside its many positive applications, image generation can also be used for harmful purposes. Using content-controlled training sets and image validation/classification can help reduce these uses. Furthermore, the energy demand for training large deep learning models is increasing at a rapid pace (Amodei et al., 2018; Thompson et al., 2020); focusing on methods that can train using fewer gradient updates / lower image throughput can lead to significant time and energy savings.

A THE CONTINUITY EQUATION

One method of testing whether a vector field v_t generates a probability path p_t is the continuity equation (Villani, 2009). It is a Partial Differential Equation (PDE) providing a necessary and sufficient condition for a vector field v_t to generate p_t:

d/dt p_t(x) + div(p_t(x) v_t(x)) = 0,    (25)

where the divergence operator, div, is defined with respect to the spatial variable x = (x^1, . . . , x^d), i.e., div = Σ_{i=1}^d ∂/∂x^i.
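As a concrete 1D illustration (our code, with arbitrary smooth schedules), the continuity equation can be verified numerically for a single Gaussian path p_t = N(m(t), s(t)²) together with the vector field v_t(x) = (s′/s)(x - m) + m′:

```python
import numpy as np

# Illustrative mean and std schedules with their exact time derivatives.
m  = lambda t: 2.0 * t
s  = lambda t: 1.0 - 0.5 * t
mp = lambda t: 2.0
sp = lambda t: -0.5

def p(t, x):
    """Density of N(m(t), s(t)^2) at x."""
    return np.exp(-0.5 * ((x - m(t)) / s(t)) ** 2) / (s(t) * np.sqrt(2 * np.pi))

def v(t, x):
    """Vector field generating the Gaussian path."""
    return sp(t) / s(t) * (x - m(t)) + mp(t)

def continuity_residual(t, x, h=1e-5):
    """d/dt p_t(x) + d/dx (p_t(x) v_t(x)), approximated by central
    finite differences; should vanish if v_t generates p_t."""
    dpdt = (p(t + h, x) - p(t - h, x)) / (2 * h)
    flux = lambda y: p(t, y) * v(t, y)
    dflux = (flux(x + h) - flux(x - h)) / (2 * h)
    return dpdt + dflux
```

Replacing v with any other field (e.g., dropping the m′ term) makes the residual clearly nonzero, which is exactly the test the continuity equation provides.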

B THEOREM PROOFS

Theorem 1. Given vector fields u_t(x|x_1) that generate conditional probability paths p_t(x|x_1), for any distribution q(x_1), the marginal vector field u_t in equation 8 generates the marginal probability path p_t in equation 6, i.e., u_t and p_t satisfy the continuity equation (equation 25).

Proof. To verify this, we check that p_t and u_t satisfy the continuity equation (equation 25):

d/dt p_t(x) = ∫ (d/dt p_t(x|x_1)) q(x_1) dx_1
            = -∫ div( u_t(x|x_1) p_t(x|x_1) ) q(x_1) dx_1
            = -div( ∫ u_t(x|x_1) p_t(x|x_1) q(x_1) dx_1 )
            = -div( u_t(x) p_t(x) ),

where in the second equality we used the fact that u_t(·|x_1) generates p_t(·|x_1), and in the last equality we used equation 8. Furthermore, the first and third equalities are justified by assuming the integrands satisfy the regularity conditions of the Leibniz rule (for exchanging integration and differentiation).

Theorem 2. Assuming that p_t(x) > 0 for all x ∈ R^d and t ∈ [0, 1], then, up to a constant independent of θ, L_CFM and L_FM are equal. Hence, ∇_θ L_FM(θ) = ∇_θ L_CFM(θ).

Proof. To ensure existence of all integrals and to allow changing the order of integration (by Fubini's theorem), we assume that q(x) and p_t(x|x_1) decrease to zero at a sufficient speed as ∥x∥ → ∞, and that u_t, v_t, ∇_θ v_t are bounded. First, using the bilinearity of the squared 2-norm we have

∥v_t(x) - u_t(x)∥² = ∥v_t(x)∥² - 2⟨v_t(x), u_t(x)⟩ + ∥u_t(x)∥²,
∥v_t(x) - u_t(x|x_1)∥² = ∥v_t(x)∥² - 2⟨v_t(x), u_t(x|x_1)⟩ + ∥u_t(x|x_1)∥².

Next, remember that u_t is independent of θ, and note that

E_{p_t(x)} ∥v_t(x)∥² = ∫ ∥v_t(x)∥² p_t(x) dx = ∫∫ ∥v_t(x)∥² p_t(x|x_1) q(x_1) dx_1 dx = E_{q(x_1), p_t(x|x_1)} ∥v_t(x)∥²,

where in the second equality we use equation 6, and in the third equality we change the order of integration.
Next,

E_{p_t(x)} ⟨v_t(x), u_t(x)⟩ = ∫ ⟨v_t(x), ∫ u_t(x|x_1) p_t(x|x_1) q(x_1) dx_1 / p_t(x)⟩ p_t(x) dx
                            = ∫ ⟨v_t(x), ∫ u_t(x|x_1) p_t(x|x_1) q(x_1) dx_1⟩ dx
                            = ∫∫ ⟨v_t(x), u_t(x|x_1)⟩ p_t(x|x_1) q(x_1) dx_1 dx
                            = E_{q(x_1), p_t(x|x_1)} ⟨v_t(x), u_t(x|x_1)⟩,

where in the last equality we again change the order of integration.

Theorem 3. Let p_t(x|x_1) be a Gaussian probability path as in equation 10, and ψ_t its corresponding flow map as in equation 11. Then the unique vector field that defines ψ_t has the form

u_t(x|x_1) = (σ′_t(x_1) / σ_t(x_1)) (x - µ_t(x_1)) + µ′_t(x_1).

Consequently, u_t(x|x_1) generates the Gaussian path p_t(x|x_1).

Proof. For notational simplicity let w_t(x) = u_t(x|x_1). Now consider equation 1:

d/dt ψ_t(x) = w_t(ψ_t(x)).    (26)

Since ψ_t is invertible (as σ_t(x_1) > 0), we let x = ψ_t^{-1}(y) and get

ψ′_t(ψ_t^{-1}(y)) = w_t(y),

where we use the apostrophe notation for the time derivative to emphasize that ψ′_t is evaluated at ψ_t^{-1}(y). Now, inverting ψ_t(x) gives

ψ_t^{-1}(y) = (y - µ_t(x_1)) / σ_t(x_1).

Differentiating ψ_t with respect to t gives

ψ′_t(x) = σ′_t(x_1) x + µ′_t(x_1).

Plugging these last two equations into equation 26 we get

w_t(y) = (σ′_t(x_1) / σ_t(x_1)) (y - µ_t(x_1)) + µ′_t(x_1),

as required.

C COMPUTING PROBABILITIES OF THE CNF MODEL

We are given an arbitrary data point x_1 ∈ R^d and need to compute the model probability at that point, i.e., p_1(x_1). Below we recap how this can be done, covering the basic relevant ODEs, the scaling of the divergence computation, data transformations (e.g., centering of data), and Bits-Per-Dimension computation.

ODE for computing p_1(x_1). The continuity equation together with equation 1 leads to the instantaneous change of variables (Chen et al., 2018; Ben-Hamu et al., 2022):

d/dt log p_t(ϕ_t(x)) + div(v_t(ϕ_t(x))) = 0.    (27)

Integrating over t ∈ [0, 1] gives

log p_1(ϕ_1(x)) - log p_0(ϕ_0(x)) = -∫_0^1 div(v_t(ϕ_t(x))) dt.

Therefore, the log probability can be computed together with the flow trajectory by solving the ODE

d/dt [ϕ_t(x); f(t)] = [v_t(ϕ_t(x)); -div(v_t(ϕ_t(x)))]    (28)

with initial conditions

[ϕ_0(x); f(0)] = [x_0; c].    (29)

The solution [ϕ_t(x), f(t)]^T is uniquely defined (up to some mild conditions on the VF v_t). Denote x_1 = ϕ_1(x); according to equation 27,

f(1) = c + log p_1(x_1) - log p_0(x_0).    (30)

Now, we are given an arbitrary x_1 and want to compute p_1(x_1). To this end, we solve equation 28 in reverse. That is,

d/ds [ϕ_{1-s}(x); f(1-s)] = [-v_{1-s}(ϕ_{1-s}(x)); div(v_{1-s}(ϕ_{1-s}(x)))],    (31)

and we solve this equation for s ∈ [0, 1] with the initial conditions at s = 0:

[ϕ_1(x); f(1)] = [x_1; 0].    (32)

By uniqueness of ODE solutions, the solution is identical to the solution of equation 28 with initial conditions in equation 29, where c = log p_0(x_0) - log p_1(x_1); this can be seen from equation 30 by setting f(1) = 0. Therefore we get f(0) = log p_0(x_0) - log p_1(x_1), and consequently

log p_1(x_1) = log p_0(x_0) - f(0).    (33)

To summarize, to compute p_1(x_1) we first solve the ODE in equation 31 with initial conditions in equation 32, and then compute equation 33.

Unbiased estimator of p_1(x_1). Solving equation 31 requires computing the divergence of VFs in R^d, which is costly. Grathwohl et al.
(2018) suggest to replace the divergence by the (unbiased) Hutchinson trace estimator, d ds ϕ 1-s (x) f (1 -s) = -v 1-s (ϕ 1-s (x)) z T Dv 1-s (ϕ 1-s (x))z , where z ∈ R d is a sample from a random variable such that Ezz T = I. Solving the ODE in equation 34 exactly (in practice, with a small controlled error) with initial conditions in equation 32 leads to E z log p 0 (x 0 ) -f (0) = log p 0 (x 0 ) -E z f (0) -f (1) = log p 0 (x 0 ) -E z 1 0 z T Dv 1-s (ϕ 1-s (x))z ds = log p 0 (x 0 ) - 1 0 E z z T Dv 1-s (ϕ 1-s (x))z ds = log p 0 (x 0 ) - 1 0 div(v 1-s (ϕ 1-s (x)))ds = log p 0 (x 0 ) -(f (0) -f (1)) = log p 0 (x 0 ) -(log p 0 (x 0 ) -log p 1 (x 1 )) = log p 1 (x 1 ), where in the third equality we switched order of integration assuming the sufficient condition of Fubini's theorem hold, and in the previous to last equality we used equation 30. Therefore the random variable log p 0 (x 0 ) -f (0) is an unbiased estimator for log p 1 (x 1 ). To summarize, for a scalable unbiased estimation of p 1 (x 1 ) we first solve the ODE in equation 34 with initial conditions in equation 32, and then output equation 35. Transformed data. Often, before training our generative model we transform the data, e.g., we scale and/or translate the data. Such a transformation is denoted by φ -1 : R d → R d and our generative model becomes a composition ψ(x) = φ • ϕ(x) where ϕ : R d → R d is the model we train. Given a prior probability p 0 we have that the push forward of this probability under ψ (equation 3 and equation 4) takes the form p 1 (x) = ψ * p 0 (x) = p 0 (ϕ -1 (φ -1 (x))) det Dϕ -1 (φ -1 (x)) det Dφ -1 (x) = ϕ * p 0 (φ -1 (x)) det Dφ -1 (x) and therefore log p 1 (x) = log ϕ * p 0 (φ -1 (x)) + log det Dφ -1 (x) . For images d = H × W × 3 we consider a transform ϕ that maps each pixel value from [-1, 1] to [0, 256]. Therefore, φ(y) = 2 7 (y + 1), and φ -1 (x) = 2 -7 x -1. For this case we have log p 1 (x) = log ϕ * p 0 (φ -1 (x)) -7d log 2. 
( ) Bits-Per-Dimension (BPD) computation. BPD is defined by BPD = E x1 - log 2 p 1 (x 1 ) d = E x1 - log p 1 (x 1 ) d log 2 Following equation 36 we get BPD = - log ϕ * p 0 (φ -1 (x)) d log 2 + 7. and log ϕ * p 0 (φ -1 (x)) is approximated using the unbiased estimator in equation 35 over the transformed data φ -1 (x 1 ). Averaging the unbiased estimator on a large test test x 1 provides a good approximation to the test set BPD.
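The reverse-ODE procedure (equations 31–35) can be sanity-checked on a toy vector field whose flow is known in closed form. The sketch below is a minimal numpy/scipy illustration written for this note, not the paper's implementation: the field v_t(x) = −x, the dimension, and the finite-difference Jacobian-vector product are all choices made here for the example. With a Rademacher probe z, the Hutchinson term is exact for this linear field, so the estimate should match the analytic log-density closely.

```python
import numpy as np
from scipy.integrate import solve_ivp

d = 3

# Toy vector field v_t(x) = -x: the flow is phi_t(x) = e^{-t} x, so starting
# from the prior p_0 = N(0, I) the model density is p_1 = N(0, e^{-2} I).
def v(t, x):
    return -x

def log_p0(x0):
    return -0.5 * (d * np.log(2 * np.pi) + x0 @ x0)

def exact_log_p1(x1):
    var = np.exp(-2.0)
    return -0.5 * (d * np.log(2 * np.pi * var) + x1 @ x1 / var)

rng = np.random.default_rng(0)
x1 = np.array([0.5, -1.0, 0.25])
z = rng.choice([-1.0, 1.0], size=d)  # Rademacher probe, E[z z^T] = I

def rhs(s, state):
    # Reverse ODE (equation 34): d/ds [phi_{1-s}(x), f(1-s)], with div replaced
    # by z^T Dv z, computed via a central finite difference along z.
    x = state[:d]
    eps = 1e-5
    ztDvz = z @ (v(1 - s, x + eps * z) - v(1 - s, x - eps * z)) / (2 * eps)
    return np.concatenate([-v(1 - s, x), [ztDvz]])

sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([x1, [0.0]]),
                rtol=1e-8, atol=1e-8)
x0, f0 = sol.y[:d, -1], sol.y[d, -1]
log_p1_est = log_p0(x0) - f0  # the estimator of log p_1(x_1), equation 35
print(log_p1_est, exact_log_p1(x1))
```

For this linear field the single-probe estimate already agrees with the exact value up to solver tolerance; for a learned vector field one would average over several probes z.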

D DIFFUSION CONDITIONAL VECTOR FIELDS

We derive the vector field governing the Probability Flow ODE (equation 13 in Song et al. (2020b)) for the VE and VP diffusion paths (equation 18) and note that it coincides with the conditional vector fields we derive using Theorem 3, namely the vector fields defined in equations 16 and 19. We start with a short primer on how to find a conditional vector field for a probability path described by the Fokker-Planck equation, and then instantiate it for the VE and VP probability paths. Since in the diffusion literature the diffusion process runs from data at time t = 0 to noise at time t = 1, we need the following lemma to translate the diffusion VFs to our convention, where t = 0 corresponds to noise and t = 1 corresponds to data:

Lemma 1. Consider a flow defined by a vector field u_t(x) generating the probability density path p_t(x). Then the vector field \tilde{u}_t(x) = -u_{1-t}(x) generates the path \tilde{p}_t(x) = p_{1-t}(x) when initiated from \tilde{p}_0(x) = p_1(x).

Proof. We use the continuity equation (equation 25):

    \frac{d}{dt}\tilde{p}_t(x) = \frac{d}{dt} p_{1-t}(x) = -p'_{1-t}(x) = \mathrm{div}\big(p_{1-t}(x)\, u_{1-t}(x)\big) = -\mathrm{div}\big(\tilde{p}_t(x)\,(-u_{1-t}(x))\big),

and therefore \tilde{u}_t(x) = -u_{1-t}(x) generates \tilde{p}_t(x).
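Lemma 1 is easy to verify numerically in one dimension. The sketch below (numpy/scipy, written for this note; the schedule σ_t = 1 + t is an arbitrary example, not one of the paper's paths) uses the path p_t = N(0, (1 + t)^2), which is generated by u_t(x) = x/(1 + t). The lemma predicts that the reversed field ũ_t(x) = −u_{1−t}(x), started from p̃_0 = p_1 = N(0, 4), transports points with scale factor (2 − t)/2, i.e., generates p̃_t = N(0, (2 − t)^2).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward path: p_t = N(0, (1+t)^2) is generated by u_t(x) = x / (1 + t),
# since its flow scales points by (1 + t).
def u(t, x):
    return x / (1.0 + t)

# Lemma 1: u~_t(x) = -u_{1-t}(x) generates p~_t = p_{1-t} from p~_0 = p_1.
def u_rev(t, x):
    return -u(1.0 - t, x)

x0 = 1.7  # arbitrary starting point, thought of as a sample of p~_0 = N(0, 4)
for t_end in (0.25, 0.5, 1.0):
    sol = solve_ivp(u_rev, (0.0, t_end), [x0], rtol=1e-9, atol=1e-9)
    scale = sol.y[0, -1] / x0
    # predicted scale: sigma_{1-t}/sigma_1 = (2 - t) / 2
    print(t_end, scale, (2.0 - t_end) / 2.0)
```

Since the flow of ũ scales N(0, 4) by (2 − t)/2, the transported density has variance (2 − t)^2, matching p_{1−t} as the lemma states.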

E IMPLEMENTATION DETAILS

For the 2D example we used an MLP with 5 layers of 512 neurons each, while for images we used the UNet architecture from Dhariwal & Nichol (2021). For images, we center-crop and resize to the appropriate dimension; for the 32×32 and 64×64 resolutions we use the same pre-processing as Chrabaszcz et al. (2017). The three methods (FM-OT, FM-Diffusion, and SM-Diffusion) are always trained with the same architecture, the same hyper-parameters, and for the same number of epochs.

E.1 DIFFUSION BASELINES

Losses. We consider three options as diffusion baselines, corresponding to the most popular diffusion loss parametrizations (Song & Ermon, 2019; Song et al., 2021; Ho et al., 2020; Kingma et al., 2021). We assume the general Gaussian path form of equation 10, i.e., p_t(x|x_1) = N(x | μ_t(x_1), σ_t^2(x_1) I).

Score Matching loss is

    \mathcal{L}_{SM}(\theta) = E_{t, q(x_1), p_t(x|x_1)}\, \lambda(t)\, \big\| s_t(x) - \nabla \log p_t(x|x_1) \big\|^2    (42)
      = E_{t, q(x_1), p_t(x|x_1)}\, \lambda(t)\, \Big\| s_t(x) + \frac{x - \mu_t(x_1)}{\sigma_t^2(x_1)} \Big\|^2.    (43)

Taking λ(t) = σ_t^2(x_1) corresponds to the original Score Matching (SM) loss from Song & Ermon (2019), while taking λ(t) = β(1 − t) (β is defined below) corresponds to the Score Flow (SF) loss motivated by an NLL upper bound (Song et al., 2021).

When computing FID/Inception scores for CIFAR-10 and ImageNet-32/64 we use the TensorFlow GAN library¹. To remain comparable to Dhariwal & Nichol (2021) on ImageNet-128, we use the evaluation script included in their publicly available code repository².
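As a concrete illustration of equations 42–43, the following numpy sketch Monte-Carlo-estimates the score-matching loss for a single data point x_1 under a hypothetical Gaussian conditional path (the schedules μ_t, σ_t below are invented for this example, not the paper's). Plugging in the true conditional score drives the loss to zero, while a trivial model does not:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2
x1 = np.array([1.0, -0.5])

def mu(t):     # hypothetical mean schedule: linear interpolation toward x1
    return t * x1

def sigma(t):  # hypothetical std schedule: 1 -> 0.01
    return 1.0 - 0.99 * t

def cond_score(t, x):
    # grad_x log N(x | mu_t, sigma_t^2 I) = -(x - mu_t) / sigma_t^2
    return -(x - mu(t)) / sigma(t) ** 2

def sm_loss(score_model, n=4096, lam=lambda t: sigma(t) ** 2):
    # Monte Carlo estimate of E_{t, p_t(x|x1)} lam(t) ||s_t(x) - score||^2
    total = 0.0
    for _ in range(n):
        t = rng.uniform()
        x = mu(t) + sigma(t) * rng.standard_normal(d)
        diff = score_model(t, x) - cond_score(t, x)
        total += lam(t) * diff @ diff
    return total / n

print(sm_loss(cond_score))                    # the minimizer: loss ~ 0
print(sm_loss(lambda t, x: np.zeros(d)))      # a trivial model: loss ~ d
```

With λ(t) = σ_t^2, the trivial zero model incurs E‖x − μ_t‖^2/σ_t^2 = d, which is what the second estimate approaches.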



We use subscripts to denote the time parameter, e.g., p_t(x).
¹ https://github.com/tensorflow/gan
² https://github.com/openai/guided-diffusion



Figure 2: Compared to the diffusion path's conditional score function, the OT path's conditional vector field has constant direction in time and is arguably simpler to fit with a parametric model. Blue denotes larger magnitude; red denotes smaller magnitude.

Figure 3: Diffusion and OT conditional trajectories. Intuitively, particles under the OT displacement map always move along straight-line trajectories with constant speed. Figure 3 depicts sampling paths for the diffusion and OT conditional VFs. Interestingly, we find that sampling trajectories from diffusion paths can "overshoot" the final sample, resulting in unnecessary backtracking, whilst the OT paths are guaranteed to stay straight.

Figure 2 compares the diffusion conditional score function (the regression target in typical diffusion methods), i.e., ∇ log p_t(x|x_1) with p_t defined as in equation 18, with the OT conditional VF (equation 21). The start (p_0) and end (p_1) Gaussians are identical in both examples. An interesting observation is that the OT VF has a constant direction in time, which arguably leads to a simpler regression task. This property can also be verified directly from equation 21, since the VF can be written in the form u_t(x|x_1) = g(t)h(x|x_1). Figure 8 in the Appendix shows a visualization of the diffusion VF. Lastly, we note that although the conditional flow is optimal, this by no means implies that the marginal VF is an optimal transport solution. Nevertheless, we expect the marginal vector field to remain relatively simple.
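The constant-direction property is easy to check numerically. The sketch below (numpy; the values of σ_min, x_0, and x_1 are arbitrary example choices) evaluates the OT conditional VF of equation 21 along its own conditional trajectory ψ_t(x_0) = (1 − (1 − σ_min)t)x_0 + t x_1 and confirms that the velocity equals the constant x_1 − (1 − σ_min)x_0 at every t:

```python
import numpy as np

s = 1e-3                       # sigma_min (example value)
x0 = np.array([2.0, -1.0])     # starting noise point (example)
x1 = np.array([-0.5, 0.5])     # conditioning data point (example)

def psi(t):
    # OT conditional flow: psi_t(x0) = (1 - (1 - s) t) x0 + t x1
    return (1.0 - (1.0 - s) * t) * x0 + t * x1

def u(t, x):
    # OT conditional VF (equation 21): (x1 - (1 - s) x) / (1 - (1 - s) t)
    return (x1 - (1.0 - s) * x) / (1.0 - (1.0 - s) * t)

# Evaluate the VF along the trajectory at several times: it is constant,
# i.e., particles move on straight lines with constant speed.
vels = np.stack([u(t, psi(t)) for t in np.linspace(0.0, 0.9, 10)])
print(vels.std(axis=0))                 # ~0: same velocity at every t
print(vels[0], x1 - (1.0 - s) * x0)     # and it equals x1 - (1 - s) x0
```

Algebraically, substituting ψ_t(x_0) into u_t cancels the (1 − (1 − σ_min)t) factor, which is exactly the g(t)h(x|x_1) decomposition noted above.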

Figure 4: (left) Trajectories of CNFs trained with different objectives on 2D checkerboard data. The OT path introduces the checkerboard pattern much earlier, while FM results in more stable training. (right) FM with OT results in more efficient sampling, solved using the midpoint scheme.

Figure 6: Sample paths from the same initial noise with models trained on ImageNet 64×64. The OT path reduces noise roughly linearly, while diffusion paths visibly remove noise only towards the end of the path. Note also the differences between the generated images.

Figure 8: VP Diffusion path's conditional vector field. Compare to Figure 2.

Figure 10: Number of function evaluations for sampling during training, for models trained on CIFAR-10 using the dopri5 solver with tolerance 1e-5.

Figure 11: Non-curated unconditional ImageNet-32 generated images of a CNF trained with FM-OT.

Figure 12: Non-curated unconditional ImageNet-64 generated images of a CNF trained with FM-OT.

Figure 13: Non-curated unconditional ImageNet-128 generated images of a CNF trained with FM-OT.

Figure 14: Conditional generation 64×64→256×256. Flow Matching OT upsampled images from validation set.

Figure 15: Conditional generation 64×64→256×256. Flow Matching OT upsampled images from validation set.

, our work allows us to generalize beyond the class of probability paths modeled by simple diffusion. With our work, it is possible to completely sidestep the diffusion process construction and reason directly with probability paths, while still retaining efficient training and log-likelihood evaluations. Lastly, concurrently to our work, Liu et al. (2022) and Albergo & Vanden-Eijnden (2022) arrived at similar conditional objectives for simulation-free training of CNFs, while Neklyudov et al. (2023) derived an implicit objective when u_t is assumed to be a gradient field.

Likelihood (BPD), quality of generated samples (FID), and evaluation time (NFE) for the same model trained with different methods.

which uses conditioning with a self-supervised ResNet50 model, and is therefore left out of this table. Figures 11, 12, and 13 in the Appendix show non-curated samples from these models.

Flow Matching, especially when using OT paths, allows us to use fewer evaluations for sampling while retaining similar numerical error (left) and sample quality (right). Results are shown for models trained on ImageNet 32×32, and numerical errors are for the midpoint scheme.

Although diffusion models can also be sampled through an SDE formulation, this can be highly inefficient, and many methods that propose fast samplers (e.g., Song et al. (2020a); Zhang & Chen (2022)) directly make use of the ODE perspective (see Appendix D). In part, this is because ODE solvers are much more efficient, yielding lower error at similar computational cost (Kloeden et al., 2012), and because of the multitude of available ODE solver schemes. When compared to our ablation models, we find that models trained using Flow Matching with the OT path always result in the most efficient sampler, regardless of the ODE solver, as demonstrated next.
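For reference, a fixed-step midpoint (explicit RK2) integrator of the kind used in the NFE comparisons can be sketched in a few lines. This is a generic numpy sketch with names of our choosing, not the paper's evaluation code; we sanity-check it on a linear field whose exact flow is known:

```python
import numpy as np

def midpoint_sample(vf, x0, n_steps=10):
    # Integrate dx/dt = v_t(x) from t = 0 (noise) to t = 1 (data) with the
    # midpoint scheme: 2 vector-field evaluations per step (NFE = 2 * n_steps).
    x, t = x0.copy(), 0.0
    h = 1.0 / n_steps
    for _ in range(n_steps):
        k = vf(t + 0.5 * h, x + 0.5 * h * vf(t, x))
        x, t = x + h * k, t + h
    return x

# Sanity check on vf(t, x) = x, whose exact time-1 flow is e * x0; the
# midpoint scheme converges to it with O(h^2) global error.
x0 = np.array([1.0, -2.0])
out = midpoint_sample(lambda t, x: x, x0, n_steps=100)
print(out, np.e * x0)
```

In practice `vf` would be the trained CNF vector field, and the plots above sweep `n_steps` to trade off NFE against numerical error and FID.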

Image super-resolution on the ImageNet validation set.

Here, s_t is the learnable score function. The DDPM (Noise Matching) loss from Ho et al. (2020) (their equation 14) is

    \mathcal{L}_{NM}(\theta) = E_{t, q(x_1), p_t(x|x_1)} \Big\| \epsilon_t(x) - \frac{x - \mu_t(x_1)}{\sigma_t(x_1)} \Big\|^2 = E_{t, q(x_1), p_0(x_0)} \big\| \epsilon_t\big(\sigma_t(x_1) x_0 + \mu_t(x_1)\big) - x_0 \big\|^2.

Negative log-likelihood (in bits per dimension) on the test set for different values of K, using uniform dequantization.

ACKNOWLEDGEMENTS

We thank Qinqing Zheng for her feedback. HB is supported by a grant from the Israel CHE Program for Data Science Research Centers. Additionally, we acknowledge the Python community (Van Rossum & Drake Jr, 1995; Oliphant, 2007) for developing the core set of tools that enabled this work, including PyTorch (Paszke et al., 2019), PyTorch Lightning (Falcon & team, 2019), Hydra (Yadan, 2019), Jupyter (Kluyver et al., 2016), Matplotlib (Hunter, 2007), seaborn (Waskom et al., 2018), numpy (Oliphant, 2006; Van Der Walt et al., 2011), SciPy (Jones et al., 2014), and torchdiffeq (Chen, 2018).


Conditional VFs for Fokker-Planck probability paths. Consider a Stochastic Differential Equation (SDE) of the standard form

    dy = f_t\, dt + g_t\, dw    (38)

with time parameter t, drift f_t, diffusion coefficient g_t, and Wiener process w. The solution y_t to the SDE is a stochastic process, i.e., a continuous time-dependent random variable, whose probability density p_t(y_t) is characterized by the Fokker-Planck equation

    \frac{d}{dt} p_t = -\mathrm{div}(f_t\, p_t) + \frac{g_t^2}{2} \Delta p_t,    (39)

where Δ represents the Laplace operator (in y), namely div∇, with ∇ the gradient operator (also in y). Rewriting this equation in the form of the continuity equation can be done as follows (Maoutsa et al., 2020a):

    \frac{d}{dt} p_t = -\mathrm{div}(f_t\, p_t) + \frac{g_t^2}{2} \mathrm{div}(\nabla p_t) = -\mathrm{div}\Big( p_t \Big( f_t - \frac{g_t^2}{2} \nabla \log p_t \Big) \Big),

where the vector field

    v_t = f_t - \frac{g_t^2}{2} \nabla \log p_t    (40)

satisfies the continuity equation with the probability path p_t, and therefore generates p_t.

Variance Exploding (VE) path. The SDE for the VE path is

    dy = \sqrt{\frac{d}{dt} \sigma_t^2}\; dw,

where σ_0 = 0 and σ_t increases to infinity as t → 1. The SDE moves from data, y_0, at t = 0 to noise, y_1, at t = 1 with the probability path p_t(y|y_0) = N(y | y_0, σ_t^2 I). The conditional VF according to equation 40 is

    u_t(y|y_0) = \frac{\sigma_t'}{\sigma_t} (y - y_0).

Using Lemma 1 we get that the probability path \tilde{p}_t(y|y_0) = N(y | y_0, σ_{1-t}^2 I) is generated by

    \tilde{u}_t(y|y_0) = -\frac{\sigma_{1-t}'}{\sigma_{1-t}} (y - y_0),

which coincides with equation 17.
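The VE conditional VF can be checked empirically: transporting samples of p_{t_0}(y|y_0) with the ODE defined by u_t should reproduce the standard deviation σ_t at any later time. The numpy/scipy sketch below uses the toy schedule σ_t = t (chosen here for convenience; it is not one of the paper's schedules), for which u_t(y|y_0) = (y − y_0)/t:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
y0, t0, t1 = 3.0, 0.1, 1.0

# Samples of p_{t0}(y|y0) = N(y0, sigma_{t0}^2) with sigma_t = t.
ys = y0 + t0 * rng.standard_normal(5000)

def u(t, y):
    # VE conditional VF from equation 40 with sigma_t = t:
    # (sigma_t'/sigma_t)(y - y0) = (y - y0) / t
    return (y - y0) / t

# Transport all samples from t0 to t1 along the ODE; each particle scales
# its offset from y0 by t1/t0, so the empirical std should reach sigma_{t1}.
out = solve_ivp(u, (t0, t1), ys, rtol=1e-8, atol=1e-8).y[:, -1]
print(out.mean(), out.std())  # ~ y0 and ~ sigma_{t1} = 1.0
```

The same check applies to any σ_t with σ_{t_0} > 0; the transported samples remain Gaussian around y_0 with the prescribed standard deviation.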

Variance Preserving (VP) path

The SDE for the VP path is

    dy = -\frac{\beta(t)}{2}\, y\, dt + \sqrt{\beta(t)}\; dw,

where β(t) = β_min + t(β_max − β_min), T(t) = \int_0^t \beta(s)\, ds, and

    p_t(y|y_0) = N\big(y \,\big|\, e^{-\frac{1}{2} T(t)} y_0,\; (1 - e^{-T(t)}) I\big).

Plugging these choices into equation 40 we get the conditional VF

    u_t(y|y_0) = \frac{\beta(t)}{2} \cdot \frac{e^{-T(t)}\, y - e^{-\frac{1}{2} T(t)}\, y_0}{1 - e^{-T(t)}}.

Using Lemma 1 to reverse time, we get the conditional VF for the reversed probability path:

    \tilde{u}_t(y|y_0) = -\frac{\beta(1-t)}{2} \cdot \frac{e^{-T(1-t)}\, y - e^{-\frac{1}{2} T(1-t)}\, y_0}{1 - e^{-T(1-t)}},

where β_min = 0.1, β_max = 20, and time is sampled in [0, 1 − ϵ], with ϵ = 10^{-5} for training and likelihood evaluation and ϵ = 10^{-5} for sampling.

Sampling. Score matching samples are produced by solving the ODE (equation 1) with the vector field in equation 46, i.e., the time-reversed probability-flow field obtained from equation 40 via Lemma 1 with the learned score s_t in place of ∇ log p_t. DDPM samples are computed with equation 46 after setting s_t(x) = ϵ_t(x)/σ_t, where σ_t = \sqrt{1 - \alpha_{1-t}^2}.

E.2 TRAINING & EVALUATION DETAILS

We report the hyper-parameters used in Table 3. We use full 32-bit precision for training CIFAR-10 and ImageNet-32, and 16-bit mixed precision for training ImageNet-64/128/256. All models are trained using the Adam optimizer with the following parameters: β_1 = 0.9, β_2 = 0.999, weight decay = 0.0, and ϵ = 1e-8. All methods (i.e., FM-OT, FM-Diffusion, SM-Diffusion) were trained with identical architectures, the same hyper-parameters, and for the same number of epochs (see Table 3 for details). We use either a constant learning rate schedule or a polynomial decay schedule (see Table 3). The polynomial decay schedule includes a warm-up phase for a specified number of training steps. In the warm-up phase, the learning rate is linearly increased from 1e-8 to the peak learning rate (specified in Table 3). Once the peak learning rate is reached, it is linearly decayed down to 1e-8 until the final training step.

When reporting negative log-likelihood, we dequantize using standard uniform dequantization. We report an importance-weighted estimate over K dequantization samples, with x in {0, . . . , 255}, solved at t = 1 with the adaptive step size solver dopri5 (atol = rtol = 1e-5) using the torchdiffeq (Chen, 2018) library. Estimated values for different values of K are in Table 4.
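The mechanics of the importance-weighted uniform-dequantization estimate can be illustrated on a toy one-pixel "model" whose continuous density is known in closed form. Everything below (the density, the values of K, the number of repetitions) is invented for this sketch; it only demonstrates averaging K uniform dequantization samples in log space, log((1/K) Σ_k p_c(x + u_k)):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pc(y):
    # Hypothetical continuous model density over dequantized pixels in
    # [0, 256): a Gaussian N(128, 40^2), standing in for the CNF density.
    return -0.5 * np.log(2 * np.pi * 40.0**2) - 0.5 * ((y - 128.0) / 40.0) ** 2

def iw_log_prob(x, K):
    # Importance-weighted dequantization: draw u_k ~ U[0, 1) and average the
    # continuous density over x + u_k in a numerically stable way.
    u = rng.uniform(size=K)
    lp = log_pc(x + u)
    return np.logaddexp.reduce(lp) - np.log(K)

for K in (1, 16, 256):
    est = np.mean([iw_log_prob(128, K) for _ in range(200)])
    print(K, est)
```

The estimate is a lower bound on the discrete log-probability that tightens in expectation as K grows; for a real model, `log_pc` would be the unbiased CNF log-density estimator described in Appendix C.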

