QUASI-TAYLOR SAMPLERS FOR DIFFUSION GENERATIVE MODELS BASED ON IDEAL DERIVATIVES

Anonymous authors
Paper under double-blind review

Abstract

Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but have the drawback that they often require a huge number of neural function evaluations (NFEs) during synthesis unless sophisticated sampling strategies are employed. This paper proposes new efficient samplers based on numerical schemes derived from the familiar Taylor expansion, which directly solves the ODE/SDE of interest. In general, it is not easy to compute the derivatives required in higher-order Taylor schemes, but in the case of diffusion models, this difficulty is alleviated by a trick that the authors call "ideal derivative substitution," in which the higher-order derivatives are replaced by tractable ones. To derive the ideal derivatives, the authors argue that the "single point approximation," in which the true score function is approximated by a conditional one, holds in many cases, and consider the derivatives of this approximation. Applying the quasi-Taylor samplers thus obtained to image generation tasks, the authors experimentally confirmed that the proposed samplers can synthesize plausible images in a small number of NFEs, and that their performance is better than or on par with DDIM and Runge-Kutta methods. The paper also discusses the relationship between the proposed samplers and the existing ones mentioned above.

1. INTRODUCTION

Generative modeling based on deep neural networks is an important research subject for both fundamental and applied purposes, and has been a major trend in machine learning studies for several years. To date, various types of neural generative models have been studied, including GANs (Goodfellow et al., 2014), VAEs (Kingma et al., 2021; Kingma & Welling, 2019), normalizing flows (Rezende & Mohamed, 2015), and autoregressive models (van den Oord et al., 2016b;a). In addition to these popular models, a class of novel generative models based on the idea of iterative refinement using the diffusion process has rapidly been gaining attention as a challenger that rivals the classics above (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Song & Ermon, 2020; Ho et al., 2020; Dhariwal & Nichol, 2021). Diffusion-based generative models have recently shown impressive results in many fields, including image (Ho et al., 2020; Vahdat et al., 2021; Saharia et al., 2021; Ho et al., 2021; Sasaki et al., 2021), video (Ho et al., 2022), text-to-image (Nichol et al., 2021; Ramesh et al., 2022), speech (Chen et al., 2020; 2021; Kong et al., 2021; Popov et al., 2021; Kameoka et al., 2020), symbolic music (Mittal et al., 2021), natural language (Hoogeboom et al., 2021; Austin et al., 2021), chemoinformatics (Xu et al., 2022), etc. However, while diffusion models offer good synthesis quality, they suffer from the serious drawback that they often require a very large number of iterations (refinement steps) during synthesis, ranging from hundreds to a thousand. In particular, the increase in refinement steps critically reduces the synthesis speed, as each step involves at least one neural function evaluation (NFE). Therefore, it has been a common research question how to establish a systematic method to stably generate good data from diffusion models in a relatively small number of refinement steps, or NFEs in particular.
From this motivation, there have already been some studies aiming at reducing the NFEs (see § 2). Among these, the Probability Flow ODE (PF-ODE) (Song et al., 2020b) enables efficient and deterministic sampling, and is gaining attention. This framework has the merit of deriving a simple ODE by a straightforward conceptual manipulation of the diffusion process. However, the ODE is eventually solved using a black-box Runge-Kutta solver in the original paper, which requires several NFEs per step and is clearly costly. Another PF-ODE solver is DDIM (Song et al., 2020a), which is also commonly used. It is certainly efficient and can generate plausible images. However, it was not originally formulated as a PF-ODE solver, and the relationship between DDIM and PF-ODE is not straightforward. From these motivations, we provide another sampler for the same ODE, which performs better than or on par with DDIM. The derivation outline is simple and intuitive: (1) consider the Taylor expansion of the given system, and (2) replace the derivatives in the Taylor series with appropriate functions; that is all. The contributions of this paper are as follows: (1) We propose novel samplers for diffusion models based on the Taylor expansion of PF-ODE. They outperformed, or were on par with, Runge-Kutta methods. (2) To derive our algorithms, we show that the derivatives of the score function can be approximated by simple functions; we call this technique the ideal derivative substitution. (3) It has been known that the first-order term of DDIM is the same as the Euler method for PF-ODE. This paper gives a further explanation for the higher-order terms of DDIM: we show that the proposed quasi-Taylor method and DDIM are identical at least up to the third-order terms. (4) The same idea can be naturally extended to derive a stochastic solver for a reverse-time SDE, which we call R-SDE in this paper.
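The two-step outline above can be illustrated on a toy ODE. The following minimal sketch (not the paper's PF-ODE algorithm; `taylor2_step` and `dF_dt_approx` are hypothetical names) shows a second-order Taylor step in which the derivative appearing in the series is supplied as a separate, tractable function, which is the substitution idea in miniature:

```python
import numpy as np

def taylor2_step(x, t, h, F, dF_dt_approx):
    """One 2nd-order Taylor step for dx/dt = F(x, t):
    x(t+h) ≈ x(t) + h·F + (h²/2)·dF/dt,
    where the total derivative dF/dt is replaced by a tractable
    surrogate instead of being computed numerically."""
    return x + h * F(x, t) + 0.5 * h**2 * dF_dt_approx(x, t)

# Toy check on dx/dt = -x, whose total derivative dF/dt = -dx/dt = x
# is known in closed form; the exact solution is x(t) = x(0)·e^{-t}.
F = lambda x, t: -x
dF = lambda x, t: x
x, h = np.array([1.0]), 0.1
for k in range(10):
    x = taylor2_step(x, k * h, h, F, dF)
# x[0] is now close to e^{-1} ≈ 0.3679
```

The point of the sketch is that no finite differences or extra function evaluations are needed for the second-order term once a closed-form substitute for the derivative is available, which is what keeps the NFE count at one per step.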

2. BACKGROUND AND RELATED WORK

Diffusion process to draw a new datum from a target density: Let us first briefly summarize the framework of diffusion-based generative models. Following Song et al. (2020b), we describe the mechanisms in the language of continuous-time diffusion processes for later convenience. Let us consider "particles" $\{x_t\}$ moving in a $d$-dimensional space obeying the following Itô diffusion,

SDE: $\mathrm{d}x_t = f(x_t, t)\,\mathrm{d}t + g(x_t, t)\,\mathrm{d}B_t$, (1)

where $B_t$ is the $d$-dimensional Brownian motion whose temporal increments obey the standard Gaussian. The drift $f(\cdot,\cdot)$ is a $d$-dimensional vector, and the diffusion coefficient $g(\cdot,\cdot)$ is a scalar. The SDE describes the microscopic dynamics of each particle. On the other hand, the "population" of the particles obeying the above SDE, i.e. the density function $p(x_t, t \mid x_s, s)$, $(t > s)$, follows the following PDEs, which are known as Kolmogorov's forward and backward equations (KFE and KBE); the former is also known as the Fokker-Planck equation (FPE), see § E.2,

FPE: $\partial_t p(x_t, t \mid x_s, s) = -\nabla_{x_t} \cdot \left[ f(x_t, t)\, p(x_t, t \mid x_s, s) \right] + \Delta_{x_t} \left[ \tfrac{g(x_t, t)^2}{2}\, p(x_t, t \mid x_s, s) \right]$, (2)

KBE: $-\partial_s p(x_t, t \mid x_s, s) = f(x_s, s) \cdot \nabla_{x_s} p(x_t, t \mid x_s, s) + \tfrac{g(x_s, s)^2}{2} \Delta_{x_s} p(x_t, t \mid x_s, s)$, (3)

where $\Delta_x := \nabla_x \cdot \nabla_x$ is the Laplacian. (FPE also holds for $p(x_t, t)$; consider the expectation $\mathbb{E}_{p(x_s, s)}[\,\cdot\,]$.) These PDEs enable us to understand the macroscopic behavior of the particle ensemble. For example, if $f(x, t) = -\nabla U(x)$ and $g(x, t) = \sqrt{2D}$, where $U(x)$ is a certain potential and $D$ a constant, then we may verify that the stationary solution of the FPE is $p(x) \propto e^{-U(x)/D}$. This means that we may draw a sample $x$ that follows the stationary density by evolving the SDE over time. This technique is often referred to as the Langevin Monte Carlo method (Rossky et al., 1978; Roberts & Tweedie, 1996). Some diffusion generative models are based on this framework, e.g. (Song & Ermon, 2019; 2020), in which the potential gradient $\nabla U(x)$ is approximated by a neural network.
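The Langevin Monte Carlo idea above can be sketched in a few lines. The following toy simulation (an illustrative sketch with hypothetical parameter choices, using the Euler-Maruyama discretization) draws from $p(x) \propto e^{-U(x)/D}$ with the quadratic potential $U(x) = x^2/2$ and $D = 1$, whose stationary density is the standard normal:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_sample(grad_U, D=1.0, dt=1e-2, n_steps=20_000, x0=0.0):
    """Euler-Maruyama simulation of dx = -grad_U(x) dt + sqrt(2D) dB_t.

    As t grows, the chain's marginal approaches the stationary
    density p(x) ∝ exp(-U(x)/D)."""
    x = x0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        x = x - grad_U(x) * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
        xs[i] = x
    return xs

# U(x) = x²/2  =>  grad_U(x) = x; stationary density is N(0, 1).
samples = langevin_sample(lambda x: x)[5000:]  # discard burn-in
```

After discarding the burn-in, the sample mean and variance should be close to 0 and 1 respectively, up to Monte Carlo and discretization error.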
Another systematic approach is to consider the reverse-time dynamics (Song et al., 2020b). One such approach is based on the KBE, eq. (3). Roughly speaking, the FPE gives information about the future from the initial density, while the KBE gives information about what the past states were likely to be, given the terminal density. Here, instead of using the KBE directly, it is useful to consider a variant of it transformed into the form of an FPE, because the latter has an associated SDE that enables particle-wise backward sampling (Stratonovich, 1965; Anderson, 1982); see also § E.3.2,

R-FPE: $-\partial_s p(x_s, s \mid x_t, t) = \nabla_{x_s} \cdot \left[ \bar{f}(x_s, s)\, p(x_s, s \mid x_t, t) \right] + \Delta_{x_s} \left[ \tfrac{\bar{g}(x_s, s)^2}{2}\, p(x_s, s \mid x_t, t) \right]$, (4)

R-SDE: $\mathrm{d}x_s = \bar{f}(x_s, s)\,\mathrm{d}s + \bar{g}(x_s, s)\,\mathrm{d}\bar{B}_s$, with time flowing backward from $t$ to $s$. (5)

Hereafter, let $g(x_t, t) = g(t)$ for simplicity. Then the specific forms of the drift and diffusion coefficients are written as follows,

R-SDE coeffs: $\bar{f}(x_t, t) := f(x_t, t) - g(t)^2 \nabla_{x_t} \log p(x_t, t), \qquad \bar{g}(t) = g(t)$. (6)
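As a concrete check on this machinery, the reverse-time SDE can be simulated for a one-dimensional Ornstein-Uhlenbeck forward process, whose Gaussian marginals make the true score $\nabla_x \log p(x, t)$ available in closed form. This is an illustrative sketch (plain Euler-Maruyama with hypothetical parameter choices), not the paper's sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

# Forward OU process: dx = -x dt + sqrt(2) dB_t, data ~ N(0, s0²).
# All marginals are zero-mean Gaussian, so the score is analytic.
s0_sq = 0.25 ** 2

def var(t):
    """Marginal variance of the forward process at time t."""
    return s0_sq * np.exp(-2.0 * t) + 1.0 - np.exp(-2.0 * t)

def score(x, t):
    """grad_x log p(x, t) for a zero-mean Gaussian marginal."""
    return -x / var(t)

# Reverse-time integration of dx = [f - g²·score] dt + g dB̄
# from t = T down to t = 0, with f = -x and g² = 2.
T, n_steps, n_particles = 2.0, 2000, 4000
dt = T / n_steps
x = rng.standard_normal(n_particles) * np.sqrt(var(T))  # start at p(x, T)
t = T
for _ in range(n_steps):
    drift = -x - 2.0 * score(x, t)                       # f - g²·score
    x = x - drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_particles)
    t -= dt
# x should now be approximately distributed as the data density N(0, s0²).
```

Running the loop backward recovers samples whose variance matches the data variance $s_0^2$, which is exactly the property that makes the reverse-time SDE usable as a generative sampler once the score is replaced by a neural approximation.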

