Quasi-Taylor Samplers for Diffusion Generative Models Based on Ideal Derivatives

Anonymous authors
Paper under double-blind review

Abstract

Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but they have the drawback of often requiring a huge number of neural function evaluations (NFEs) during synthesis unless sophisticated sampling strategies are employed. This paper proposes new efficient samplers based on numerical schemes derived from the familiar Taylor expansion, which directly solves the ODE/SDE of interest. In general, it is not easy to compute the derivatives required in higher-order Taylor schemes, but in the case of diffusion models this difficulty is alleviated by a trick that the authors call "ideal derivative substitution," in which the higher-order derivatives are replaced by tractable ones. To derive the ideal derivatives, the authors argue that the "single point approximation," in which the true score function is approximated by a conditional one, holds in many cases, and consider the derivatives under this approximation. Applying the resulting quasi-Taylor samplers to image generation tasks, the authors experimentally confirm that the proposed samplers can synthesize plausible images in a small number of NFEs, and that their performance is better than or comparable to that of DDIM and Runge-Kutta methods. The paper also discusses the relevance of the proposed samplers to the existing ones mentioned above.

1. INTRODUCTION

Generative modeling based on deep neural networks is an important research subject for both fundamental and applied purposes, and has been a major trend in machine learning studies for several years. To date, various types of neural generative models have been studied, including GANs (Goodfellow et al., 2014), VAEs (Kingma et al., 2021; Kingma & Welling, 2019), normalizing flows (Rezende & Mohamed, 2015), and autoregressive models (van den Oord et al., 2016b;a). In addition to these popular models, a class of novel generative models based on the idea of iterative refinement using the diffusion process has rapidly been gaining attention recently as a challenger that rivals the classics above (Sohl-Dickstein et al., 2015; Song & Ermon, 2019; Song et al., 2020b; Song & Ermon, 2020; Ho et al., 2020; Dhariwal & Nichol, 2021). Diffusion-based generative models have recently shown impressive results in many fields, including image (Ho et al., 2020; Vahdat et al., 2021; Saharia et al., 2021; Ho et al., 2021; Sasaki et al., 2021), video (Ho et al., 2022), text-to-image (Nichol et al., 2021; Ramesh et al., 2022), speech (Chen et al., 2020; 2021; Kong et al., 2021; Popov et al., 2021; Kameoka et al., 2020), symbolic music (Mittal et al., 2021), natural language (Hoogeboom et al., 2021; Austin et al., 2021), chemoinformatics (Xu et al., 2022), etc. However, while diffusion models have good synthesis quality, they suffer from a serious drawback: they often require a very large number of iterations (refinement steps) during synthesis, ranging from hundreds to a thousand. In particular, the increase in refinement steps critically reduces synthesis speed, as each step involves at least one neural function evaluation (NFE). Therefore, a common research question has been how to establish a systematic method to stably generate good data from diffusion models in a relatively small number of refinement steps, or NFEs in particular.
From this motivation, there have already been some studies aiming at reducing the NFEs (see § 2). Among these, the Probability Flow ODE (PF-ODE) (Song et al., 2020b) enables efficient and deterministic sampling and has been gaining attention. This framework has the merit of deriving a simple ODE by a straightforward conceptual manipulation of the diffusion process. However, in the original paper the ODE is eventually solved using a black-box Runge-Kutta solver, which requires several NFEs per step and is clearly costly. Another PF-ODE solver is DDIM (Song et al., 2020a), which is also
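To illustrate why a black-box Runge-Kutta solver is costly in NFE terms, the following is a minimal sketch, not the paper's method: it integrates a VP-type probability flow ODE, dx/dt = -(1/2) β (x + s(x, t)), with SciPy's adaptive RK45 and counts score evaluations. The score function here is a toy stand-in (the exact score of a standard Gaussian, s(x, t) = -x); a real diffusion model would evaluate a neural network at each call, so every call is one NFE. The constant β and the integration interval are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stand-in for a learned score model: the exact score of a standard
# Gaussian, s(x, t) = -x. A real sampler would call a neural network here,
# so each invocation counts as one NFE.
nfe_counter = {"count": 0}

def score(x, t):
    nfe_counter["count"] += 1
    return -x

def pf_ode_rhs(t, x, beta=1.0):
    # VP-type probability flow ODE: dx/dt = -0.5 * beta * (x + score(x, t)).
    # A constant beta is an illustrative simplification.
    return -0.5 * beta * (x + score(x, t))

# Integrate backwards from t = 1 (noise) toward t ~ 0 (data) with a
# black-box adaptive Runge-Kutta solver.
x1 = np.random.default_rng(0).standard_normal(4)
sol = solve_ivp(pf_ode_rhs, (1.0, 1e-3), x1, method="RK45", rtol=1e-5)

print("accepted solver steps:", len(sol.t) - 1)
print("NFEs:", nfe_counter["count"])
```

Since RK45 evaluates the right-hand side several times per accepted step (and again on rejected steps), the NFE count substantially exceeds the number of steps, which is exactly the cost the quasi-Taylor and DDIM-style samplers aim to avoid.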

