DPM-SOLVER++: FAST SOLVER FOR GUIDED SAMPLING OF DIFFUSION PROBABILISTIC MODELS

Abstract

Diffusion probabilistic models (DPMs) have achieved impressive success in high-resolution image synthesis, especially in recent large-scale text-to-image generation applications. An essential technique for improving the sample quality of DPMs is guided sampling, which usually needs a large guidance scale to obtain the best sample quality. The commonly-used fast sampler for guided sampling is DDIM, a first-order diffusion ODE solver that generally needs 100 to 250 steps for high-quality samples. Although recent works propose dedicated high-order solvers and achieve a further speedup for sampling without guidance, their effectiveness for guided sampling has not been well-tested before. In this work, we demonstrate that previous high-order fast samplers suffer from instability issues, and they even become slower than DDIM when the guidance scale grows large. To further speed up guided sampling, we propose DPM-Solver++, a high-order solver for the guided sampling of DPMs. DPM-Solver++ solves the diffusion ODE with the data prediction model and adopts thresholding methods to keep the solution matching the training data distribution. We further propose a multistep variant of DPM-Solver++ to address the instability issue by reducing the effective step size. Experiments show that DPM-Solver++ can generate high-quality samples within only 15 to 20 steps for guided sampling by pixel-space and latent-space DPMs.

1. INTRODUCTION

Diffusion probabilistic models (DPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b) have achieved impressive success on various tasks, such as high-resolution image synthesis (Dhariwal & Nichol, 2021; Ho et al., 2022; Rombach et al., 2022), image editing (Meng et al., 2022; Saharia et al., 2022a; Zhao et al., 2022), text-to-image generation (Nichol et al., 2021; Saharia et al., 2022b; Ramesh et al., 2022; Rombach et al., 2022; Gu et al., 2022), voice synthesis (Liu et al., 2022a; Chen et al., 2021a;b), molecule generation (Xu et al., 2022; Hoogeboom et al., 2022; Wu et al., 2022) and data compression (Theis et al., 2022; Kingma et al., 2021). Compared with other deep generative models such as GANs (Goodfellow et al., 2014) and VAEs (Kingma & Welling, 2014), DPMs can achieve even better sample quality by leveraging an essential technique called guided sampling (Dhariwal & Nichol, 2021; Ho & Salimans, 2021), which uses additional guidance models to improve sample fidelity and condition-sample alignment. With guided sampling, DPMs for text-to-image and image-to-image tasks can generate high-resolution photorealistic and artistic images that are highly correlated with the given condition, bringing a new trend in artificial-intelligence art. The sampling procedure of DPMs gradually removes the noise from pure Gaussian random variables to obtain clean data, which can be viewed as discretizing either the diffusion SDEs (Ho et al., 2020; Song et al., 2021b) or the diffusion ODEs (Song et al., 2021b;a) defined by a parameterized noise prediction model or data prediction model (Ho et al., 2020; Kingma et al., 2021). Guided sampling of DPMs can also be formalized with such discretizations by combining an unconditional model with a guidance model, where a hyperparameter controls the scale of the guidance model (i.e., the guidance scale).
The commonly-used method for guided sampling is DDIM (Song et al., 2021a), which is proven to be a first-order diffusion ODE solver (Salimans & Ho, 2022; Lu et al., 2022) and generally needs 100 to 250 large neural network evaluations to converge, which is time-consuming. Dedicated high-order diffusion ODE solvers (Lu et al., 2022; Zhang & Chen, 2022) can generate high-quality samples in 10 to 20 steps for sampling without guidance. However, their effectiveness for guided sampling has not been carefully examined before. In this work, we demonstrate that previous high-order solvers for DPMs generate unsatisfactory samples for guided sampling, even worse than the simple first-order solver DDIM. We identify two challenges of applying high-order solvers to guided sampling: (1) the large guidance scale narrows the convergence radius of high-order solvers, making them unstable; and (2) the converged solution does not fall into the same range as the original data (a.k.a. "train-test mismatch" (Saharia et al., 2022b)). Based on these observations, we propose DPM-Solver++, a training-free fast diffusion ODE solver for guided sampling.

Figure 1: Previous high-order solvers are unstable for guided sampling. Samples using the pre-trained DPMs (Dhariwal & Nichol, 2021) on ImageNet 256×256 with a classifier guidance scale 8.0, varying the samplers (and solver orders) with only 15 function evaluations. Panels: DDIM (order 1) (Song et al., 2021a); DPM-Solver (order 2) (Lu et al., 2022); PNDM (order 2) (Liu et al., 2022b); DPM-Solver-3 (order 3) (Lu et al., 2022); DEIS-1 (order 2) (Zhang & Chen, 2022)†; DDIM (thresholding) (Saharia et al., 2022b); DEIS-2 (order 3) (Zhang & Chen, 2022); DPM-Solver++ (order 2) (ours). †: DDIM with dynamic thresholding (Saharia et al., 2022b). Our proposed DPM-Solver++ (detailed in Algorithm 2) can generate better samples than the first-order DDIM, while other high-order samplers are worse than DDIM.
We find that the parameterization of the DPM critically impacts the solution quality. We therefore solve the diffusion ODE defined by the data prediction model, which predicts the clean data given the noisy data. We derive a high-order solver for this ODE under the data prediction parameterization, and adopt dynamic thresholding methods (Saharia et al., 2022b) to mitigate the train-test mismatch problem. Furthermore, we develop a multistep solver that uses smaller effective step sizes to address the instability. As shown in Fig. 1, DPM-Solver++ can generate high-quality samples in only 15 steps, which is much faster than all previous training-free samplers for guided sampling. Additional experimental results show that DPM-Solver++ generates high-fidelity samples and almost converges within only 15 to 20 steps across a wide variety of guided sampling applications, including both pixel-space DPMs and latent-space DPMs.

2. DIFFUSION PROBABILISTIC MODELS

In this section, we review diffusion probabilistic models (DPMs) and their sampling methods.

2.1. FAST SAMPLING FOR DPMS BY DIFFUSION ODES

Diffusion Probabilistic Models (DPMs) (Sohl-Dickstein et al., 2015; Ho et al., 2020; Song et al., 2021b) gradually add Gaussian noise to a $D$-dimensional random variable $x_0 \in \mathbb{R}^D$ to perturb the corresponding unknown data distribution $q_0(x_0)$ at time $0$ to a simple normal distribution $q_T(x_T) \approx \mathcal{N}(x_T \mid 0, \tilde\sigma^2 I)$ at time $T > 0$ for some $\tilde\sigma > 0$. The transition distribution $q_{t0}(x_t \mid x_0)$ at each time $t \in [0, T]$ satisfies

$$q_{t0}(x_t \mid x_0) = \mathcal{N}(x_t \mid \alpha_t x_0, \sigma_t^2 I), \quad (1)$$

where $\alpha_t, \sigma_t > 0$ and the signal-to-noise ratio (SNR) $\alpha_t^2/\sigma_t^2$ is strictly decreasing w.r.t. $t$ (Kingma et al., 2021). Eq. (1) can be written as $x_t = \alpha_t x_0 + \sigma_t \epsilon$, where $\epsilon \sim \mathcal{N}(0, I)$.

Parameterization: noise prediction and data prediction. DPMs learn to recover the data $x_0$ based on the noisy input $x_T$ with a sequential denoising procedure. There are two alternative ways to define the model. The noise prediction model $\epsilon_\theta(x_t, t)$ attempts to predict the noise $\epsilon$ from the noisy data $x_t$, optimizing the parameter $\theta$ by the following objective (Ho et al., 2020; Song et al., 2021b):

$$\min_\theta \mathbb{E}_{x_0, \epsilon, t}\left[\omega(t) \|\epsilon_\theta(x_t, t) - \epsilon\|_2^2\right], \quad (2)$$

where $x_0 \sim q_0(x_0)$, $\epsilon \sim \mathcal{N}(0, I)$, $t \sim \mathcal{U}([0, 1])$, and $\omega(t) > 0$ is a weighting function. Alternatively, the data prediction model $x_\theta(x_t, t)$ predicts the original data $x_0$ based on the noisy $x_t$, and its relationship with $\epsilon_\theta(x_t, t)$ is given by $x_\theta(x_t, t) := (x_t - \sigma_t \epsilon_\theta(x_t, t))/\alpha_t$ (Kingma et al., 2021).

Diffusion ODEs. Sampling by DPMs can be implemented by solving diffusion ODEs (Song et al., 2021b;a; Liu et al., 2022b; Zhang & Chen, 2022; Lu et al., 2022), which is generally faster than other sampling methods. Specifically, sampling by diffusion ODEs discretizes the following ODE (Song et al., 2021b) with $t$ changing from $T$ to $0$:

$$\frac{\mathrm{d}x_t}{\mathrm{d}t} = f(t) x_t + \frac{g^2(t)}{2\sigma_t} \epsilon_\theta(x_t, t), \qquad x_T \sim \mathcal{N}(0, \tilde\sigma^2 I), \quad (3)$$

and the equivalent diffusion ODE w.r.t. the data prediction model $x_\theta$ is

$$\frac{\mathrm{d}x_t}{\mathrm{d}t} = \left(f(t) + \frac{g^2(t)}{2\sigma_t^2}\right) x_t - \frac{\alpha_t g^2(t)}{2\sigma_t^2} x_\theta(x_t, t), \qquad x_T \sim \mathcal{N}(0, \tilde\sigma^2 I), \quad (4)$$

where the coefficients are $f(t) = \frac{\mathrm{d}\log\alpha_t}{\mathrm{d}t}$ and $g^2(t) = \frac{\mathrm{d}\sigma_t^2}{\mathrm{d}t} - 2\frac{\mathrm{d}\log\alpha_t}{\mathrm{d}t}\sigma_t^2$ (Kingma et al., 2021).
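To make the two parameterizations concrete, the following sketch converts between noise prediction and data prediction under a hypothetical variance-preserving schedule (the schedule parameters `beta_0`, `beta_1` and the function names are illustrative assumptions, not fixed by the text):

```python
import numpy as np

# Hypothetical variance-preserving (VP) schedule with alpha_t^2 + sigma_t^2 = 1.
# beta_0, beta_1 are illustrative values, not specified by the text.
def alpha_sigma(t, beta_0=0.1, beta_1=20.0):
    log_alpha = -0.25 * t**2 * (beta_1 - beta_0) - 0.5 * t * beta_0
    alpha = np.exp(log_alpha)
    return alpha, np.sqrt(1.0 - alpha**2)

def data_from_noise_pred(x_t, eps_pred, t):
    """x_theta(x_t, t) := (x_t - sigma_t * eps_theta(x_t, t)) / alpha_t."""
    alpha, sigma = alpha_sigma(t)
    return (x_t - sigma * eps_pred) / alpha

def noise_from_data_pred(x_t, x0_pred, t):
    """Inverse relation: eps_theta = (x_t - alpha_t * x_theta) / sigma_t."""
    alpha, sigma = alpha_sigma(t)
    return (x_t - alpha * x0_pred) / sigma

# Round trip: perturbing clean data with known noise, each prediction
# recovers the other exactly.
x0 = np.array([0.5, -1.0])
eps = np.array([0.3, 0.7])
t = 0.5
alpha, sigma = alpha_sigma(t)
x_t = alpha * x0 + sigma * eps          # x_t = alpha_t x_0 + sigma_t eps
assert np.allclose(data_from_noise_pred(x_t, eps, t), x0)
assert np.allclose(noise_from_data_pred(x_t, x0, t), eps)
```

The two parameterizations carry the same information; the choice only matters for how a solver approximates the ODE, which is the central point of Sec. 4.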

2.2. GUIDED SAMPLING FOR DPMS

Guided sampling (Dhariwal & Nichol, 2021; Ho & Salimans, 2021) is a widely-used technique to apply DPMs to conditional sampling, which is useful in text-to-image, image-to-image, and class-to-image applications (Dhariwal & Nichol, 2021; Saharia et al., 2022b; Rombach et al., 2022; Nichol et al., 2021; Ramesh et al., 2022). Given a condition variable $c$, guided sampling defines a conditional noise prediction model $\tilde\epsilon_\theta(x_t, t, c)$. There are two types of guided sampling methods, depending on whether they require a classifier model. Classifier guidance (Dhariwal & Nichol, 2021) leverages a pretrained classifier $p_\phi(c \mid x_t, t)$ to define the conditional noise prediction model by

$$\tilde\epsilon_\theta(x_t, t, c) := \epsilon_\theta(x_t, t) - s \cdot \sigma_t \nabla_{x_t} \log p_\phi(c \mid x_t, t), \quad (5)$$

where $s > 0$ is the guidance scale. In practice, a large $s$ is usually preferred for improving the condition-sample alignment (Rombach et al., 2022; Saharia et al., 2022b). Classifier-free guidance (Ho & Salimans, 2021) shares the same parameterized model $\epsilon_\theta(x_t, t, c)$ for the unconditional and conditional noise prediction models, where the input $c$ for the unconditional model is a special placeholder $\varnothing$. The corresponding conditional model is defined by

$$\tilde\epsilon_\theta(x_t, t, c) := s \cdot \epsilon_\theta(x_t, t, c) + (1 - s) \cdot \epsilon_\theta(x_t, t, \varnothing). \quad (6)$$

Samples can then be drawn by solving the ODE (3) with $\tilde\epsilon_\theta(x_t, t, c)$ in place of $\epsilon_\theta(x_t, t)$. DDIM (Song et al., 2021a) is a typical solver for guided sampling, which generates samples in a few hundred steps.
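The two guidance constructions above can be sketched as follows; `eps_cond`, `eps_uncond`, and `grad_log_classifier` are placeholders for real network evaluations, not an actual API:

```python
import numpy as np

def classifier_free_guidance(eps_cond, eps_uncond, s):
    """eps~_theta = s * eps_theta(x_t, t, c) + (1 - s) * eps_theta(x_t, t, empty)."""
    return s * eps_cond + (1.0 - s) * eps_uncond

def classifier_guidance(eps_uncond, grad_log_classifier, sigma_t, s):
    """eps~_theta = eps_theta(x_t, t) - s * sigma_t * grad_x log p_phi(c | x_t, t)."""
    return eps_uncond - s * sigma_t * grad_log_classifier

e_c = np.array([1.0, 2.0])   # conditional prediction (placeholder values)
e_u = np.array([0.5, 0.5])   # unconditional prediction (placeholder values)
# s = 1 recovers the plain conditional model; larger s extrapolates
# away from the unconditional prediction.
assert np.allclose(classifier_free_guidance(e_c, e_u, 1.0), e_c)
assert np.allclose(classifier_free_guidance(e_c, e_u, 8.0), [4.5, 12.5])
```

Note how $s = 8$ amplifies the difference between the conditional and unconditional outputs; this amplification is exactly what destabilizes high-order solvers in Sec. 3.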

2.3. EXPONENTIAL INTEGRATORS AND HIGH-ORDER ODE SOLVERS

It is shown in recent works (Lu et al., 2022; Zhang & Chen, 2022) that ODE solvers based on exponential integrators (Hochbruck & Ostermann, 2010) converge much faster than traditional solvers for the unconditional diffusion ODE (3). Given an initial value $x_s$ at time $s > 0$, Lu et al. (2022) derive the solution $x_t$ of the diffusion ODE (3) at time $t$ as

$$x_t = \frac{\alpha_t}{\alpha_s} x_s - \alpha_t \int_{\lambda_s}^{\lambda_t} e^{-\lambda}\, \hat\epsilon_\theta(\hat x_\lambda, \lambda)\, \mathrm{d}\lambda, \quad (7)$$

where the ODE is changed from the time ($t$) domain to the log-SNR ($\lambda$) domain by the change-of-variables formula. Here, the log-SNR $\lambda_t := \log(\alpha_t/\sigma_t)$ is a strictly decreasing function of $t$ with inverse function $t_\lambda(\cdot)$, and $\hat x_\lambda := x_{t_\lambda(\lambda)}$, $\hat\epsilon_\theta(\hat x_\lambda, \lambda) := \epsilon_\theta(x_{t_\lambda(\lambda)}, t_\lambda(\lambda))$ are the corresponding change-of-variable forms for $\lambda$. Lu et al. (2022) showed that DDIM is a first-order solver for Eq. (7). They further proposed a high-order solver named "DPM-Solver", which can generate realistic samples from an unconditional model in only 10-20 steps. Unfortunately, the outstanding efficiency of existing high-order solvers does not transfer to guided sampling, which we discuss next.

3. CHALLENGES OF HIGH-ORDER SOLVERS FOR GUIDED SAMPLING

Before developing new fast solvers, we first examine the performance of existing high-order diffusion ODE solvers and highlight the challenges. The first challenge is that a large guidance scale causes high-order solvers to be unstable. As shown in Fig. 1, for a large guidance scale $s = 8.0$ and 15 function evaluations, previous high-order diffusion ODE solvers (Lu et al., 2022; Zhang & Chen, 2022; Liu et al., 2022b) produce low-quality images; their sample quality is even worse than that of the first-order DDIM. Moreover, the sample quality degrades as the order of the solver increases. Intuitively, large guidance scales may amplify both the output and the derivatives of the model $\tilde\epsilon_\theta$ in Eq. (5). The derivatives of the model affect the convergence range of ODE solvers, and the amplification may force high-order ODE solvers to use much smaller step sizes to converge; thus higher-order solvers may perform worse than the first-order solver. Moreover, high-order solvers require high-order derivatives, which are generally more sensitive to such amplification. This further narrows the convergence radius. The second challenge is the "train-test mismatch" problem (Saharia et al., 2022b). The data lie in a bounded interval (e.g., [-1, 1] for image data). However, a large guidance scale pushes the conditional noise prediction model $\tilde\epsilon_\theta(x_t, t, c)$ away from the true noise, which in turn makes the sample (i.e., the converged solution $x_0$ of the diffusion ODE) fall outside the bound. In this case, the generated images are saturated and unnatural (Saharia et al., 2022b).

4. DESIGNING TRAINING-FREE FAST SAMPLERS FOR GUIDED SAMPLING

In this section, we design novel high-order diffusion ODE solvers for faster guided sampling. As discussed in Sec. 3, previous high-order solvers suffer from instability and "train-test mismatch" issues for large guidance scales. The "train-test mismatch" issue arises from the ODE itself, and we find that the parameterization of the ODE is critical for the converged solution to be bounded. While previous high-order solvers are designed for the noise prediction model $\tilde\epsilon_\theta$, we solve the ODE (4) for the data prediction model $x_\theta$, which itself has some advantages; moreover, thresholding methods then become available to keep the samples bounded (Ho et al., 2020; Saharia et al., 2022b). We also propose a multistep solver to address the instability issue.

4.1. DESIGNING SOLVERS BY DATA PREDICTION MODEL

We follow the notation of Lu et al. (2022). Given a sequence $\{t_i\}_{i=0}^M$ decreasing from $t_0 = T$ to $t_M = 0$ and an initial value $x_{t_0} \sim \mathcal{N}(0, \tilde\sigma^2 I)$, the solver aims to iteratively compute a sequence $\{\tilde x_{t_i}\}_{i=0}^M$ that approximates the exact solution at each time $t_i$; the final value $\tilde x_{t_M}$ is the approximate sample from the diffusion ODE. Denote $h_i := \lambda_{t_i} - \lambda_{t_{i-1}}$ for $i = 1, \dots, M$. For solving the diffusion ODE w.r.t. $x_\theta$ in Eq. (4), we first propose a simplified formulation of its exact solution. This formulation exactly computes the linear term in Eq. (4), leaving only an exponentially-weighted integral of $x_\theta$. Denoting $\hat x_\theta(\hat x_\lambda, \lambda) := x_\theta(x_{t_\lambda(\lambda)}, t_\lambda(\lambda))$ as the change-of-variable form of $x_\theta$ for $\lambda$, we have:

Proposition 4.1 (Exact solution of diffusion ODEs of $x_\theta$, proof in Appendix A). Given an initial value $x_s$ at time $s > 0$, the solution $x_t$ at time $t \in [0, s]$ of the diffusion ODE in Eq. (4) is

$$x_t = \frac{\sigma_t}{\sigma_s} x_s + \sigma_t \int_{\lambda_s}^{\lambda_t} e^{\lambda}\, \hat x_\theta(\hat x_\lambda, \lambda)\, \mathrm{d}\lambda. \quad (8)$$

As the diffusion ODEs in Eq. (3) and Eq. (4) are equivalent, the exact solution formulations in Eq. (7) and Eq. (8) are also equivalent. However, from the perspective of designing ODE solvers, the two formulations are different. Firstly, Eq. (7) exactly computes the linear term $\frac{\alpha_t}{\alpha_s} x_s$, while Eq. (8) exactly computes the linear term $\frac{\sigma_t}{\sigma_s} x_s$. Moreover, to design ODE solvers, Eq. (7) needs to approximate the integral $\int e^{-\lambda} \hat\epsilon_\theta\, \mathrm{d}\lambda$, while Eq. (8) needs to approximate $\int e^{\lambda} \hat x_\theta\, \mathrm{d}\lambda$, and these two integrals are different (recall that $x_\theta := (x_t - \sigma_t \epsilon_\theta)/\alpha_t$). Therefore, high-order solvers based on Eq. (7) and Eq. (8) are essentially different. We next describe a general approach for designing high-order ODE solvers based on Eq. (8). Given the previous value $\tilde x_{t_{i-1}}$ at time $t_{i-1}$, the aim of our solver is to approximate the exact solution at time $t_i$.
Denote $x_\theta^{(n)}(\hat x_\lambda, \lambda) := \frac{\mathrm{d}^n \hat x_\theta(\hat x_\lambda, \lambda)}{\mathrm{d}\lambda^n}$ as the $n$-th order total derivative of $\hat x_\theta$ w.r.t. the log-SNR $\lambda$. For $k \geq 1$, taking the $(k-1)$-th Taylor expansion of $\hat x_\theta$ at $\lambda_{t_{i-1}}$ w.r.t. $\lambda \in [\lambda_{t_{i-1}}, \lambda_{t_i}]$ and substituting it into Eq. (8) with $s = t_{i-1}$ and $t = t_i$, we have

$$\tilde x_{t_i} = \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde x_{t_{i-1}} + \sigma_{t_i} \sum_{n=0}^{k-1} \underbrace{x_\theta^{(n)}(\hat x_{\lambda_{t_{i-1}}}, \lambda_{t_{i-1}})}_{\text{estimated}} \underbrace{\int_{\lambda_{t_{i-1}}}^{\lambda_{t_i}} e^{\lambda} \frac{(\lambda - \lambda_{t_{i-1}})^n}{n!}\, \mathrm{d}\lambda}_{\text{analytically computed (Appendix A)}} + \underbrace{\mathcal{O}(h_i^{k+1})}_{\text{omitted}}, \quad (9)$$

where the integral $\int e^{\lambda} \frac{(\lambda - \lambda_{t_{i-1}})^n}{n!}\, \mathrm{d}\lambda$ can be analytically computed by integration by parts (detailed in Appendix A). Therefore, to design a $k$-th order ODE solver, we only need to estimate the $n$-th order derivatives $x_\theta^{(n)}(\lambda_{t_{i-1}})$ for $n \leq k-1$ after omitting the $\mathcal{O}(h_i^{k+1})$ high-order error term; such estimation is a well-studied technique that we discuss in detail in Sec. 4.2. A special case is $k = 1$, where the solver is the same as DDIM (Song et al., 2021a), as we discuss in Sec. 5.1.

For $k = 2$, we use a technique similar to DPM-Solver-2 (Lu et al., 2022) to estimate the derivative $x_\theta^{(1)}(\hat x_{\lambda_{t_{i-1}}}, \lambda_{t_{i-1}})$. Specifically, we introduce an additional intermediate time step $s_i$ between $t_{i-1}$ and $t_i$ and combine the function values at $s_i$ and $t_{i-1}$ to approximate the derivative, which is the standard approach for singlestep ODE solvers (Atkinson et al., 2011). Overall, we need $2M + 1$ time steps ($\{t_i\}_{i=0}^M$ and $\{s_i\}_{i=1}^M$) satisfying $t_0 > s_1 > t_1 > \cdots > t_{M-1} > s_M > t_M$. The detailed algorithm is given in Algorithm 1, where we combine the previous value $\tilde x_{t_{i-1}}$ at time $t_{i-1}$ with the intermediate value $u_i$ at time $s_i$ to compute the value $\tilde x_{t_i}$ at time $t_i$.

Algorithm 1 DPM-Solver++(2S).
Require: initial value $x_T$, time steps $\{t_i\}_{i=0}^M$ and $\{s_i\}_{i=1}^M$, data prediction model $x_\theta$.
1: $\tilde x_{t_0} \leftarrow x_T$
2: for $i \leftarrow 1$ to $M$ do
3:   $h_i \leftarrow \lambda_{t_i} - \lambda_{t_{i-1}}$
4:   $r_i \leftarrow (\lambda_{s_i} - \lambda_{t_{i-1}}) / h_i$
5:   $u_i \leftarrow \frac{\sigma_{s_i}}{\sigma_{t_{i-1}}} \tilde x_{t_{i-1}} - \alpha_{s_i}\left(e^{-r_i h_i} - 1\right) x_\theta(\tilde x_{t_{i-1}}, t_{i-1})$
6:   $D_i \leftarrow \left(1 - \frac{1}{2 r_i}\right) x_\theta(\tilde x_{t_{i-1}}, t_{i-1}) + \frac{1}{2 r_i} x_\theta(u_i, s_i)$
7:   $\tilde x_{t_i} \leftarrow \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde x_{t_{i-1}} - \alpha_{t_i}\left(e^{-h_i} - 1\right) D_i$
8: end for
9: return $\tilde x_{t_M}$

We name this algorithm DPM-Solver++(2S), meaning that the proposed solver is a second-order singlestep method. We present the theoretical guarantee of its convergence order in Appendix A.

4.2. MULTISTEP SOLVERS FOR DIFFUSION ODES

Singlestep methods such as Algorithm 1 discard the intermediate value $u_i$ after each step; such methods lose the previous information and may be inefficient. In this section, we propose another second-order diffusion ODE solver that reuses the previous information at each step. In general, to approximate the derivatives $x_\theta^{(n)}$ in Eq. (9) for $n \geq 1$, there is another mainstream approach (Atkinson et al., 2011): multistep methods (such as Adams-Bashforth methods). Given the previous values $\{\tilde x_{t_j}\}_{j=0}^{i-1}$ at time $t_{i-1}$, multistep methods reuse these values to approximate the high-order derivatives. Multistep methods are empirically more efficient than singlestep methods, especially for a limited number of function evaluations (Atkinson et al., 2011). We combine the techniques for designing multistep solvers with the Taylor expansion in Eq. (9) and propose a multistep second-order solver for diffusion ODEs with $x_\theta$:

Algorithm 2 DPM-Solver++(2M).
Require: initial value $x_T$, time steps $\{t_i\}_{i=0}^M$, data prediction model $x_\theta$.
1: Denote $h_i := \lambda_{t_i} - \lambda_{t_{i-1}}$ for $i = 1, \dots, M$.
2: $\tilde x_{t_0} \leftarrow x_T$. Initialize an empty buffer $Q$.
3: $Q \leftarrow x_\theta(\tilde x_{t_0}, t_0)$
4: $\tilde x_{t_1} \leftarrow \frac{\sigma_{t_1}}{\sigma_{t_0}} \tilde x_{t_0} - \alpha_{t_1}\left(e^{-h_1} - 1\right) x_\theta(\tilde x_{t_0}, t_0)$
5: $Q \leftarrow x_\theta(\tilde x_{t_1}, t_1)$
6: for $i \leftarrow 2$ to $M$ do
7:   $r_i \leftarrow h_{i-1} / h_i$
8:   $D_i \leftarrow \left(1 + \frac{1}{2 r_i}\right) x_\theta(\tilde x_{t_{i-1}}, t_{i-1}) - \frac{1}{2 r_i} x_\theta(\tilde x_{t_{i-2}}, t_{i-2})$
9:   $\tilde x_{t_i} \leftarrow \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde x_{t_{i-1}} - \alpha_{t_i}\left(e^{-h_i} - 1\right) D_i$
10:  if $i < M$ then $Q \leftarrow x_\theta(\tilde x_{t_i}, t_i)$
11: end for
12: return $\tilde x_{t_M}$
The detailed algorithm is given in Algorithm 2, where we combine the previous values $\tilde x_{t_{i-1}}$ and $\tilde x_{t_{i-2}}$ to compute the value $\tilde x_{t_i}$ without additional intermediate values. We name this algorithm DPM-Solver++(2M), meaning that the proposed solver is a second-order multistep solver. We also present a detailed theoretical guarantee of its convergence order in Appendix A. For a fixed budget $N$ of the total number of function evaluations, multistep methods can use $M = N$ steps, while $k$-th order singlestep methods can use at most $M = N/k$ steps. Therefore, each step size $h_i$ of a multistep method is around $1/k$ of that of a singlestep method, so the high-order error terms $\mathcal{O}(h_i^{k+1})$ in Eq. (9) of multistep methods may also be smaller than those of singlestep methods. We show in Sec. 6.1 that multistep methods are slightly better than singlestep methods.
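As a concrete illustration, here is a minimal NumPy sketch of the multistep update of Algorithm 2, under an assumed VP noise schedule; `x_theta` stands in for the trained data prediction network. Since the update is exact whenever $x_\theta$ is constant, a constant model gives a simple sanity check:

```python
import numpy as np

def alpha_sigma(t, beta_0=0.1, beta_1=20.0):
    # Hypothetical VP noise schedule (an illustrative assumption).
    log_alpha = -0.25 * t**2 * (beta_1 - beta_0) - 0.5 * t * beta_0
    alpha = np.exp(log_alpha)
    return alpha, np.sqrt(1.0 - alpha**2)

def lam(t):
    alpha, sigma = alpha_sigma(t)
    return np.log(alpha / sigma)        # log-SNR lambda_t

def dpm_solver_pp_2m(x_T, timesteps, x_theta):
    """DPM-Solver++(2M) sketch: timesteps decrease from T toward 0;
    x_theta(x, t) is the data prediction model."""
    x = x_T
    d_prev = x_theta(x, timesteps[0])   # buffered x_theta(x~_{t_0}, t_0)
    # First step is first-order (the DDIM update in data-prediction form).
    h_prev = lam(timesteps[1]) - lam(timesteps[0])
    a1, s1 = alpha_sigma(timesteps[1])
    _, s0 = alpha_sigma(timesteps[0])
    x = (s1 / s0) * x - a1 * (np.exp(-h_prev) - 1.0) * d_prev
    for i in range(2, len(timesteps)):
        d = x_theta(x, timesteps[i - 1])              # x_theta(x~_{t_{i-1}}, t_{i-1})
        h = lam(timesteps[i]) - lam(timesteps[i - 1])
        r = h_prev / h
        D = (1 + 1 / (2 * r)) * d - (1 / (2 * r)) * d_prev
        a_t, s_t = alpha_sigma(timesteps[i])
        _, s_p = alpha_sigma(timesteps[i - 1])
        x = (s_t / s_p) * x - a_t * (np.exp(-h) - 1.0) * D
        d_prev, h_prev = d, h
    return x

# Sanity check: with a constant data prediction model the update is exact,
# so the sample should land (numerically) on that constant.
x0 = np.array([0.3, -0.7, 1.2])
x_T = np.array([0.1, -0.2, 0.4])
timesteps = np.linspace(1.0, 1e-3, 11)   # 10 steps from T = 1 to t = 1e-3
sample = dpm_solver_pp_2m(x_T, timesteps, lambda x, t: x0)
assert np.allclose(sample, x0, atol=0.05)
```

In a real sampler, `x_theta` would wrap the guided network (optionally with the thresholding of Sec. 4.3), and the small residual at $t_M > 0$ comes from stopping slightly before $t = 0$.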

4.3. COMBINING THRESHOLDING WITH DPM-SOLVER++

For distributions of bounded data (such as image data), thresholding methods (Ho et al., 2020; Saharia et al., 2022b) can push out-of-bound samples inwards and thereby reduce the adverse impact of a large guidance scale. Specifically, thresholding methods define a clipped data prediction model $\bar x_\theta(x_t, t, c)$ by elementwise clipping the original model $x_\theta := (x_t - \sigma_t \epsilon_\theta)/\alpha_t$ to the data bound, which results in better sample quality for large guidance scales (Saharia et al., 2022b). As our proposed DPM-Solver++ is designed for the $x_\theta$ model, we can straightforwardly combine thresholding methods with DPM-Solver++.
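A minimal sketch of static clipping and the dynamic thresholding of Saharia et al. (2022b); the percentile p = 0.995 is the value they suggest, and the function names are illustrative:

```python
import numpy as np

def static_threshold(x0_pred, bound=1.0):
    # Elementwise clip of the data prediction to the data bound (Ho et al., 2020).
    return np.clip(x0_pred, -bound, bound)

def dynamic_threshold(x0_pred, p=0.995):
    # Dynamic thresholding (Saharia et al., 2022b): take the p-th percentile
    # s of |x0_pred|; if s > 1, clip to [-s, s] and rescale by s, so large
    # guidance scales do not saturate the sample.
    s = np.quantile(np.abs(x0_pred), p)
    s = max(s, 1.0)
    return np.clip(x0_pred, -s, s) / s

x = np.array([-3.0, -0.5, 0.2, 2.0])
assert np.allclose(static_threshold(x), [-1.0, -0.5, 0.2, 1.0])
assert np.max(np.abs(dynamic_threshold(x))) <= 1.0
# In-range predictions pass through dynamic thresholding unchanged (s = 1).
assert np.allclose(dynamic_threshold(np.array([0.1, -0.2])), [0.1, -0.2])
```

In DPM-Solver++ the clipped model simply replaces $x_\theta$ in lines 5-7 of Algorithm 1 (or lines 4 and 8-9 of Algorithm 2).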

5. RELATIONSHIP WITH OTHER FAST SAMPLING METHODS

In essence, all training-free sampling methods for DPMs can be understood as either discretizing diffusion SDEs (Ho et al., 2020; Song et al., 2021b; Jolicoeur-Martineau et al., 2021; Tachibana et al., 2021; Kong & Ping, 2021; Bao et al., 2022b; Zhang et al., 2022) or discretizing diffusion ODEs (Song et al., 2021b; a; Liu et al., 2022b; Zhang & Chen, 2022; Lu et al., 2022) . As DPM-Solver++ is designed for solving diffusion ODEs, in this section, we discuss the relationship between DPM-Solver++ and other diffusion ODE solvers. We further briefly discuss other fast sampling methods for DPMs.

5.1. COMPARISON WITH SOLVERS BASED ON EXPONENTIAL INTEGRATORS

Previous state-of-the-art fast diffusion ODE solvers (Lu et al., 2022; Zhang & Chen, 2022) leverage exponential integrators to solve diffusion ODEs with the noise prediction model $\epsilon_\theta$. In short, these solvers approximate the exact solution in Eq. (7) and include DDIM (Song et al., 2021a) as the first-order case. Below we show that the first-order case of DPM-Solver++ is also DDIM. For $k = 1$, Eq. (9) becomes (after omitting the $\mathcal{O}(h_i^2)$ terms)

$$\tilde x_{t_i} = \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde x_{t_{i-1}} + \sigma_{t_i}\, x_\theta(\tilde x_{t_{i-1}}, t_{i-1}) \int_{\lambda_{t_{i-1}}}^{\lambda_{t_i}} e^{\lambda}\, \mathrm{d}\lambda = \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde x_{t_{i-1}} - \alpha_{t_i}\left(e^{-h_i} - 1\right) x_\theta(\tilde x_{t_{i-1}}, t_{i-1}),$$

which coincides with the DDIM update. Therefore, our proposed DPM-Solver++ is the high-order generalization of DDIM w.r.t. the data prediction model $x_\theta$. To the best of our knowledge, such a generalization has not been proposed before. We list the detailed differences between previous high-order solvers based on exponential integrators and DPM-Solver++ in Table 1. We emphasize that although the first-order versions of these solvers are equivalent, their high-order versions are rather different.
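The equivalence of the two first-order updates can be checked numerically: with $x_\theta = (x_t - \sigma_t\epsilon_\theta)/\alpha_t$, the $x_\theta$-form step above agrees with the first-order $\epsilon_\theta$-form step of Lu et al. (2022). The VP schedule below is an assumption for illustration:

```python
import numpy as np

def alpha_sigma(t, beta_0=0.1, beta_1=20.0):
    # Hypothetical VP schedule, for illustration only.
    log_alpha = -0.25 * t**2 * (beta_1 - beta_0) - 0.5 * t * beta_0
    alpha = np.exp(log_alpha)
    return alpha, np.sqrt(1.0 - alpha**2)

def step_x_form(x, t_prev, t, x0_pred):
    # First-order update from Eq. (9) with k = 1 (data prediction form).
    a_p, s_p = alpha_sigma(t_prev)
    a_t, s_t = alpha_sigma(t)
    h = np.log(a_t / s_t) - np.log(a_p / s_p)
    return (s_t / s_p) * x - a_t * (np.exp(-h) - 1.0) * x0_pred

def step_eps_form(x, t_prev, t, eps_pred):
    # First-order update of Lu et al. (2022) (noise prediction form, DDIM).
    a_p, s_p = alpha_sigma(t_prev)
    a_t, s_t = alpha_sigma(t)
    h = np.log(a_t / s_t) - np.log(a_p / s_p)
    return (a_t / a_p) * x - s_t * (np.exp(h) - 1.0) * eps_pred

x = np.array([0.2, -0.4, 0.9])
eps = np.array([0.5, -0.1, 0.3])
t_prev, t = 0.8, 0.6
a_p, s_p = alpha_sigma(t_prev)
x0_pred = (x - s_p * eps) / a_p      # x_theta := (x_t - sigma_t eps_theta)/alpha_t
assert np.allclose(step_x_form(x, t_prev, t, x0_pred),
                   step_eps_form(x, t_prev, t, eps))
```

The agreement holds for any schedule and step size, which is exactly the statement that both formulations share DDIM as their first-order case; the difference only appears at second order and above.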

5.2. OTHER FAST SAMPLING METHODS

Samplers based on diffusion SDEs (Ho et al., 2020; Song et al., 2021b; Jolicoeur-Martineau et al., 2021; Tachibana et al., 2021; Kong & Ping, 2021; Bao et al., 2022b; Zhang et al., 2022) generally need more steps to converge than those based on diffusion ODEs (Lu et al., 2022), because SDEs introduce more randomness and make denoising more difficult. Samplers based on extra training include model distillation (Salimans & Ho, 2022; Luhman & Luhman, 2021), learning reverse process variances (San-Roman et al., 2021; Nichol & Dhariwal, 2021; Bao et al., 2022a), and learning sampling steps (Lam et al., 2021; Watson et al., 2022). However, training-based samplers are hard to scale up to pretrained large DPMs (Saharia et al., 2022b; Rombach et al., 2022; Ramesh et al., 2022). There are other fast sampling methods that modify the original DPMs, e.g., by moving to a latent space (Vahdat et al., 2021) or adding momentum (Dockhorn et al., 2022). In addition, combining DPMs with GANs (Xiao et al., 2022; Wang et al., 2022) improves the sample quality of GANs and the sampling speed of DPMs.

6. EXPERIMENTS

In this section, we show that DPM-Solver++ can speed up both pixel-space DPMs and latent-space DPMs for guided sampling. We vary the number of function evaluations (NFE), i.e., the number of calls to the model $\epsilon_\theta(x_t, t, c)$ or $x_\theta(x_t, t, c)$, and compare DPM-Solver++ with the previous state-of-the-art fast samplers for DPMs, including DPM-Solver (Lu et al., 2022), DEIS (Zhang & Chen, 2022), PNDM (Liu et al., 2022b) and DDIM (Song et al., 2021a). We convert discrete-time DPMs to continuous time in order to use these continuous-time solvers. We refer to Appendix C for the detailed implementations and experiment settings. As previous solvers did not test their performance in guided sampling, we also carefully tune the baseline samplers by ablating the step size schedule (i.e., the choice of the time steps $\{t_i\}_{i=0}^M$) and the solver order. We find that:

• For the step size schedule, we search the time steps over the following choices: uniform $t$ (a widely-used setting in high-resolution image synthesis), uniform $\lambda$ (used in Lu et al. (2022)), and a uniform split of power functions of $t$ (used in Zhang & Chen (2022), detailed in Appendix C). We find that the best choice is uniform $t$, so we use uniform $t$ time steps in all of our experiments for all of the solvers.

• For a large guidance scale, the best choice among the previous solvers is second order (i.e., DPM-Solver-2 and DEIS-1). Nevertheless, for a comprehensive comparison, we run all orders of the previous solvers (DPM-Solver-2 and DPM-Solver-3; DEIS-1, DEIS-2 and DEIS-3) and choose their best result for each NFE in our comparison. We run both DPM-Solver++(2S) and DPM-Solver++(2M); we find that for large guidance scales the multistep DPM-Solver++(2M) performs better, while for slightly smaller guidance scales the singlestep DPM-Solver++(2S) performs better.
We report the best results of DPM-Solver++ and all of the previous samplers in the following sections; the detailed values are listed in Appendix D.
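The three step size schedules in the ablation above can be sketched as follows (the schedule endpoints and the power exponent `kappa` are illustrative assumptions; the exact DEIS power schedule is described in Appendix C):

```python
import numpy as np

def alpha_sigma(t, beta_0=0.1, beta_1=20.0):
    # Hypothetical VP schedule; endpoints T = 1 and t_eps = 1e-3 below are
    # illustrative implementation details.
    log_alpha = -0.25 * t**2 * (beta_1 - beta_0) - 0.5 * t * beta_0
    alpha = np.exp(log_alpha)
    return alpha, np.sqrt(1.0 - alpha**2)

def lam(t):
    alpha, sigma = alpha_sigma(t)
    return np.log(alpha / sigma)

def uniform_t_steps(T, t_eps, M):
    # Uniform in time t.
    return np.linspace(T, t_eps, M + 1)

def uniform_lambda_steps(T, t_eps, M):
    # Uniform in log-SNR lambda: invert lambda(t) numerically on a fine grid.
    grid = np.linspace(t_eps, T, 100_000)
    lam_grid = lam(grid)                       # decreasing in t
    targets = np.linspace(lam(T), lam(t_eps), M + 1)
    # np.interp needs increasing x-coordinates, so flip the grid.
    return np.interp(targets, lam_grid[::-1], grid[::-1])

def power_t_steps(T, t_eps, M, kappa=2.0):
    # Uniform split of a power function of t (kappa is an assumed exponent).
    u = np.linspace(T ** (1 / kappa), t_eps ** (1 / kappa), M + 1)
    return u ** kappa

for steps in (uniform_t_steps(1.0, 1e-3, 10),
              uniform_lambda_steps(1.0, 1e-3, 10),
              power_t_steps(1.0, 1e-3, 10)):
    assert len(steps) == 11 and np.all(np.diff(steps) < 0)
```

All three produce $M + 1$ decreasing time steps from $T$ to a small end time; they differ only in where the steps are concentrated, which is what the ablation compares.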

6.1. PIXEL-SPACE DPMS WITH GUIDANCE

We first compare DPM-Solver++ with other samplers for guided sampling with classifier guidance on ImageNet 256×256 using the pretrained DPMs of Dhariwal & Nichol (2021). We measure sample quality by drawing 10K samples and computing the widely-used FID score (Heusel et al., 2017), where lower FID usually implies better sample quality. We also adopt the dynamic thresholding method (Saharia et al., 2022b) for both DDIM and DPM-Solver++. We vary the guidance scale $s$ over 8.0, 4.0 and 2.0; the results are shown in Fig. 3(a-c). We find that for large guidance scales, all the previous high-order samplers (DEIS, PNDM, DPM-Solver) converge more slowly than the first-order DDIM, which shows that previous high-order samplers are unstable. In contrast, DPM-Solver++ achieves the best speedup for both large and small guidance scales. Especially for large guidance scales, DPM-Solver++ can almost converge within only 15 NFE. As an ablation, we also compare the singlestep DPM-Solver-2, the singlestep DPM-Solver++(2S) and the multistep DPM-Solver++(2M) to demonstrate the effectiveness of our method. We use a large guidance scale $s = 8.0$ and conduct the following ablations:

• From $\epsilon_\theta$ to $x_\theta$: As shown in Fig. 2a, by simply changing the solver from $\epsilon_\theta$ to $x_\theta$ (i.e., from DPM-Solver-2 to DPM-Solver++(2S)), the solver achieves a stable acceleration that is faster than the first-order DDIM. This result indicates that for guided sampling, high-order solvers w.r.t. $x_\theta$ may be preferable to those w.r.t. $\epsilon_\theta$.

• From singlestep to multistep: As shown in Fig. 2b, the multistep DPM-Solver++(2M) converges slightly faster than the singlestep DPM-Solver++(2S), and almost converges within 15 NFE. This result indicates that for guided sampling with a large guidance scale, multistep methods may be faster than singlestep methods.
• With or without thresholding: We compare the performance of DDIM and DPM-Solver++ with/without thresholding methods in Fig. 2c. Note that the thresholding method changes the model $x_\theta$ and thus also changes the converged solutions of the diffusion ODEs. Firstly, we find that with the thresholding method, the diffusion ODE generates higher-quality samples, consistent with the conclusion of Saharia et al. (2022b). Secondly, the sample quality of DPM-Solver++ with thresholding outperforms DPM-Solver++ without thresholding at the same NFE. Moreover, when combined with thresholding, DPM-Solver++ is faster than the first-order DDIM, which shows that DPM-Solver++ can also speed up guided sampling by DPMs with thresholding methods.

6.2. LATENT-SPACE DPMS WITH GUIDANCE

We also evaluate DPM-Solver++ on latent-space DPMs (Rombach et al., 2022), which have recently become popular in the community through the official "stable-diffusion" code. We use the default guidance scale $s = 7.5$ from the official code. Latent-space DPMs map the image data to a latent code by training an encoder-decoder pair, and then train a DPM on the latent code. As the latent code is unbounded, we do not apply the thresholding method. Specifically, we randomly sample 10K caption-image pairs from the MS-COCO2014 validation set and use the captions as conditions to draw 10K images from the pretrained "stable-diffusion" model, drawing a single image per caption, following the standard evaluation procedure in (Nichol et al., 2021; Rombach et al., 2022). We find that all the solvers achieve an FID around 15.0 to 16.0 even within only 10 steps, which is very close to the FID of the converged samples reported on the official "stable-diffusion" page. We believe this is due to the powerful pretrained decoder, which can map a non-converged latent code to a good image sample. For latent-space DPMs, different diffusion ODE solvers directly affect the convergence speed in the latent space. To further compare the samplers, we therefore measure the convergence error in the latent space, i.e., the per-dimension L2 distance $\|x_0 - x_0^*\|_2/\sqrt{D}$ between the sampled $x_0$ and the true solution $x_0^*$. Specifically, we first sample 10K noise variables from the standard normal distribution and fix them. We then sample 10K latent codes with each DPM sampler, starting from the same 10K fixed noise variables. As all these solvers can be understood as discretizing diffusion ODEs, we compare the samples $x_0$ produced by each sampler within different NFE against the true solution $x_0^*$ obtained from a 999-step DDIM; the results are shown in Fig. 3(d).
We find that the fast samplers supported in "stable-diffusion" (DDIM and PNDM) converge much more slowly than DPM-Solver++ and DEIS, and that the second-order multistep DPM-Solver++ and DEIS achieve a very similar speedup in the latent space. Moreover, as "stable-diffusion" by default uses PNDM with 50 steps, we note that DPM-Solver++ can achieve a similar convergence error with only 15 to 20 steps. We also present an empirical comparison of the sampled images across solvers in Appendix D, where DPM-Solver++ indeed generates quite good image samples within only 15 to 20 steps.
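The convergence criterion used above, the per-dimension L2 distance $\|x_0 - x_0^*\|_2/\sqrt{D}$, can be sketched as:

```python
import numpy as np

def convergence_error(x0, x0_star):
    """Per-dimension L2 error ||x0 - x0*||_2 / sqrt(D) for a batch of samples;
    x0_star is a near-exact reference (e.g. from a very fine solver run
    started from the same fixed noise)."""
    a = x0.reshape(x0.shape[0], -1)
    b = x0_star.reshape(x0_star.shape[0], -1)
    D = a.shape[1]
    return np.linalg.norm(a - b, axis=1) / np.sqrt(D)

# A constant offset of 0.1 in every dimension yields an error of exactly 0.1.
ref = np.zeros((2, 4, 4))
err = convergence_error(ref + 0.1, ref)
assert err.shape == (2,) and np.allclose(err, 0.1)
```

Dividing by $\sqrt{D}$ makes the error an RMS per-dimension quantity, so values are comparable across latent spaces of different sizes.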

7. CONCLUSIONS

We study the problem of accelerating guided sampling of DPMs. We demonstrate that previous high-order solvers based on noise prediction models are unstable and generate worse-quality samples than the first-order solver DDIM for guided sampling with large guidance scales. To address this issue and speed up guided sampling, we propose DPM-Solver++, a training-free fast diffusion ODE solver for guided sampling. DPM-Solver++ is based on the diffusion ODE with the data prediction model, which can directly adopt thresholding methods to further stabilize the sampling procedure. We propose both singlestep and multistep variants of DPM-Solver++. Experimental results show that DPM-Solver++ can generate high-fidelity samples and almost converge within only 15 to 20 steps, for both pixel-space and latent-space DPMs.

ETHICS STATEMENT

Like other deep generative models such as GANs, DPMs may be used to generate harmful fake content (images). The proposed solver accelerates guided sampling of DPMs, which can further be used for image editing and for generating photorealistic fake images. Such acceleration may amplify the potential undesirable effects of DPMs in malicious applications.

REPRODUCIBILITY STATEMENT

Our code is based on the official code of DPM-Solver (Lu et al., 2022) and the pretrained checkpoints of Dhariwal & Nichol (2021) and Stable-Diffusion (Rombach et al., 2022). We will release it after the blind review. In addition, the datasets used in the experiments are publicly available. Our detailed experiment settings and implementations are listed in Appendix C, and the proofs of the convergence guarantees of the solvers are presented in Appendix A.

A ADDITIONAL PROOFS

We make the following assumptions for $x_\theta$, as in Lu et al. (2022):

• $x_\theta^{(0)} := x_\theta$, $x_\theta^{(1)}$ and $x_\theta^{(2)}$ exist and are continuous (and hence are bounded).
• The map $x \mapsto x_\theta(x, t)$ is $L$-Lipschitz.
• $h_{\max} := \max_{1 \le j \le M} h_j = O(1/M)$.

We further assume:

• $r_i > c > 0$ for all $i = 1, \dots, M$.

Then both algorithms are second-order:

Proposition A.1. Under the above assumptions, when $h_{\max}$ is sufficiently small, we have for both Algorithms 1 and 2 that $\|\tilde{x}_{t_M} - x_{t_M}\| = O(h_{\max}^2)$.

A.2.1 CONVERGENCE OF ALGORITHM 1

The convergence proof of this algorithm is similar to that of DPM-Solver-2 (Lu et al., 2022). We give it in this section for completeness. First, Taylor's expansion gives
$$x_{s_i} = \frac{\sigma_{s_i}}{\sigma_{t_{i-1}}} x_{t_{i-1}} - \alpha_{s_i}\left(e^{-r_i h_i} - 1\right) x_\theta(x_{t_{i-1}}, t_{i-1}) + O(h_i^2),$$
$$x_{t_i} = \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} x_{t_{i-1}} - \alpha_{t_i}\left(e^{-h_i} - 1\right) x_\theta(x_{t_{i-1}}, t_{i-1}) - \alpha_{t_i}\left(-e^{-h_i} - h_i + 1\right) x_\theta^{(1)}(x_{t_{i-1}}, t_{i-1}) + O(h_i^3).$$
Let $\Delta_i := \|\tilde{x}_{t_i} - x_{t_i}\|$; then $\|u_i - x_{s_i}\| \le C\Delta_{i-1} + C L h_i \Delta_{i-1} + C h_i^2$. Note that
$$\left\| x_\theta^{(1)}(x_{t_{i-1}}, t_{i-1}) - \frac{1}{r_i h_i}\left( x_\theta(x_{s_i}, s_i) - x_\theta(x_{t_{i-1}}, t_{i-1}) \right) \right\| \le C h_i.$$
Since $r_i$ is bounded away from zero and $e^{-h_i} = 1 - h_i + h_i^2/2 + O(h_i^3)$, we know
$$\begin{aligned}
&\left\| \left(-e^{-h_i} - h_i + 1\right) x_\theta^{(1)}(x_{t_{i-1}}, t_{i-1}) - \frac{e^{-h_i} - 1}{2 r_i}\left( x_\theta(u_i, s_i) - x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) \right\| \\
&\le C L h_i \left( \Delta_{i-1} + \|u_i - x_{s_i}\| \right) + C h_i^3 + \frac{1}{r_i}\left| \frac{e^{-h_i} - 1}{2} - \frac{-e^{-h_i} - h_i + 1}{h_i} \right| \left\| x_\theta(x_{s_i}, s_i) - x_\theta(x_{t_{i-1}}, t_{i-1}) \right\| \\
&\le C L h_i \left( \Delta_{i-1} + \|u_i - x_{s_i}\| \right) + C h_i^3 + C h_i^2 \left\| x_\theta(x_{s_i}, s_i) - x_\theta(x_{t_{i-1}}, t_{i-1}) \right\| \\
&\le C L h_i \Delta_{i-1} + C (L + M_i) h_i^3,
\end{aligned}$$
where $M_i = 1 + \sup_{t_{i-1} \le t \le t_i} \|x_\theta^{(1)}(x_t, t)\|$. Then $\Delta_i$ can be estimated as follows:
$$\Delta_i \le \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \Delta_{i-1} + C h_i \left( \Delta_{i-1} + h_i^2 \right).$$
Thus $\Delta_i = O(h_{\max}^2)$ as long as $h_{\max}$ is sufficiently small.

A.2.2 CONVERGENCE OF ALGORITHM 2

Following the same line of argument as the convergence proof of Algorithm 1, we can prove the convergence of Algorithm 2. Let $\Delta_i := \|\tilde{x}_{t_i} - x_{t_i}\|$. Taylor's expansion yields
$$\left\| x_{t_i} - \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} x_{t_{i-1}} + \alpha_{t_i}\left(e^{-h_i} - 1\right) x_\theta(x_{t_{i-1}}, t_{i-1}) + \alpha_{t_i}\left(-e^{-h_i} - h_i + 1\right) x_\theta^{(1)}(x_{t_{i-1}}, t_{i-1}) \right\| \le C h_i^3,$$
where $C$ is a constant depending on $x_\theta^{(2)}$. Also note that
$$\left\| x_\theta^{(1)}(x_{t_{i-1}}, t_{i-1}) - \frac{1}{h_{i-1}}\left( x_\theta(x_{t_{i-1}}, t_{i-1}) - x_\theta(x_{t_{i-2}}, t_{i-2}) \right) \right\| \le C h_i.$$
Since $r_i$ is bounded away from zero and $e^{-h_i} = 1 - h_i + h_i^2/2 + O(h_i^3)$, we know
$$\begin{aligned}
&\left\| \left(-e^{-h_i} - h_i + 1\right) x_\theta^{(1)}(x_{t_{i-1}}, t_{i-1}) - \frac{e^{-h_i} - 1}{2 r_i}\left( x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - x_\theta(\tilde{x}_{t_{i-2}}, t_{i-2}) \right) \right\| \\
&\le C L h_i \left( \Delta_{i-1} + \Delta_{i-2} \right) + C h_i^3 + \frac{1}{r_i}\left| \frac{e^{-h_i} - 1}{2} - \frac{-e^{-h_i} - h_i + 1}{h_i} \right| \left\| x_\theta(x_{t_{i-1}}, t_{i-1}) - x_\theta(x_{t_{i-2}}, t_{i-2}) \right\| \\
&\le C L h_i \left( \Delta_{i-1} + \Delta_{i-2} \right) + C h_i^3 + C h_i^2 \left\| x_\theta(x_{t_{i-1}}, t_{i-1}) - x_\theta(x_{t_{i-2}}, t_{i-2}) \right\| \\
&\le C L h_i \left( \Delta_{i-1} + \Delta_{i-2} \right) + C M_i h_i^3,
\end{aligned}$$
where $M_i = 1 + \sup_{t_{i-1} \le t \le t_i} \|x_\theta^{(1)}(x_t, t)\|$. Then $\Delta_i$ can be estimated as follows:
$$\begin{aligned}
\Delta_i &\le \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \Delta_{i-1} + \alpha_{t_i}\left(1 - e^{-h_i}\right) L \Delta_{i-1} + \alpha_{t_i} C M_i h_i^3 + C L h_i \left( \Delta_{i-1} + \Delta_{i-2} \right) + C h_i^3 \\
&\le \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \Delta_{i-1} + C h_i \left( \Delta_{i-1} + \Delta_{i-2} + h_i^2 \right).
\end{aligned}$$
Thus $\Delta_i = O(h_{\max}^2)$ as long as $h_{\max}$ is sufficiently small and $\Delta_0 + \Delta_1 = O(h_{\max}^2)$, which can be verified by Taylor's expansion.

B COMPARISON BETWEEN DPM-SOLVER AND DPM-SOLVER++

In this section, we convert DPM-Solver++(2S) to a formulation w.r.t. the noise prediction model and compare it with the second-order DPM-Solver (Lu et al., 2022). At each step, the second-order DPM-Solver (DPM-Solver-2 (Lu et al., 2022)) has the following update:
$$\begin{aligned}
u_i &= \frac{\alpha_{s_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{s_i}\left(e^{r_i h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \\
\tilde{x}_{t_i} &= \frac{\alpha_{t_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{t_i}\left(e^{h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - \frac{\sigma_{t_i}}{2 r_i}\left(e^{h_i} - 1\right)\left( \epsilon_\theta(u_i, s_i) - \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) \quad (10)
\end{aligned}$$
while DPM-Solver++(2S) has the following update:
$$\begin{aligned}
u_i &= \frac{\sigma_{s_i}}{\sigma_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \alpha_{s_i}\left(e^{-r_i h_i} - 1\right) x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \\
\tilde{x}_{t_i} &= \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \alpha_{t_i}\left(e^{-h_i} - 1\right)\left( \left(1 - \frac{1}{2 r_i}\right) x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) + \frac{1}{2 r_i} x_\theta(u_i, s_i) \right) \quad (11)
\end{aligned}$$
Because
$$x_\theta(x, t) = \frac{x - \sigma_t \epsilon_\theta(x, t)}{\alpha_t} = \frac{1}{\alpha_t} x - e^{-\lambda_t} \epsilon_\theta(x, t),$$
we can rewrite DPM-Solver++(2S) w.r.t. the noise prediction model (see Appendix B.1 for details):
$$\begin{aligned}
u_i &= \frac{\alpha_{s_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{s_i}\left(e^{r_i h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \\
\tilde{x}_{t_i} &= \frac{\alpha_{t_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{t_i}\left(e^{h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - \frac{\sigma_{t_i}}{2 r_i}\left(e^{h_i} - 1\right) \underbrace{e^{-r_i h_i}}_{< 1} \left( \epsilon_\theta(u_i, s_i) - \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right)
\end{aligned}$$
Comparing with Eq. (10), we find that the only difference between DPM-Solver-2 and DPM-Solver++(2S) is that DPM-Solver++(2S) has an additional coefficient $e^{-r_i h_i} < 1$ in the second term (which corresponds to approximating the first-order total derivative $\epsilon_\theta^{(1)}$). Specifically, we have
$$\frac{1}{r_i h_i}\left( \epsilon_\theta(u_i, s_i) - \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) = \epsilon_\theta^{(1)}(\tilde{x}_{t_{i-1}}, t_{i-1}) + O(h_i).$$
As DPM-Solver++(2S) multiplies this $O(h_i)$ error term by a smaller coefficient, the constant in front of the high-order error term of DPM-Solver++(2S) is smaller than that of DPM-Solver-2. As both are equivalent to second-order discretizations of the diffusion ODE, a smaller constant in front of the error term results in a smaller discretization error and reduces numerical instabilities (especially for large guidance scales). Therefore, using the data prediction model is key to stabilizing the sampling, and DPM-Solver++(2S) is more stable than DPM-Solver-2.
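To make the update in Eq. (11) concrete, one step of DPM-Solver++(2S) can be sketched as below. The schedule functions `alpha`, `sigma`, `lam` (log-SNR) and the data prediction model `x_theta` are assumed to be supplied by the caller, and the scalar form is only for illustration:

```python
import math

def dpm_solver_pp_2s_step(x, t_prev, t, s, x_theta, alpha, sigma, lam):
    """One DPM-Solver++(2S) step from time t_prev to t via the
    intermediate time s (Eq. 11), written for a scalar state x.

    x_theta(x, t): data prediction model (assumed given).
    alpha(t), sigma(t): noise schedule; lam(t) = log(alpha(t)/sigma(t)).
    """
    h = lam(t) - lam(t_prev)          # step size in log-SNR
    r = (lam(s) - lam(t_prev)) / h    # relative position of s
    x0_prev = x_theta(x, t_prev)
    # intermediate value u_i at time s_i
    u = (sigma(s) / sigma(t_prev)) * x \
        - alpha(s) * (math.exp(-r * h) - 1.0) * x0_prev
    x0_u = x_theta(u, s)
    # second-order combination of the two data predictions
    return (sigma(t) / sigma(t_prev)) * x \
        - alpha(t) * (math.exp(-h) - 1.0) * (
            (1.0 - 1.0 / (2.0 * r)) * x0_prev + (1.0 / (2.0 * r)) * x0_u
        )
```

When `x_theta` returns a constant, the step reduces to the first-order (exact) update, which offers a simple sanity check of an implementation.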

B.1 DETAILED DERIVATION

We can rewrite DPM-Solver++(2S) as follows. For $u_i$:
$$\begin{aligned}
u_i &= \frac{\sigma_{s_i}}{\sigma_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \alpha_{s_i}\left(e^{-r_i h_i} - 1\right) x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \\
&= \frac{\sigma_{s_i}}{\sigma_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \frac{\alpha_{s_i}}{\alpha_{t_{i-1}}}\left(e^{-\lambda_{s_i} + \lambda_{t_{i-1}}} - 1\right) \tilde{x}_{t_{i-1}} + \alpha_{s_i}\left(e^{-\lambda_{s_i}} - e^{-\lambda_{t_{i-1}}}\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \\
&= \frac{\alpha_{s_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{s_i}\left(e^{r_i h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}),
\end{aligned}$$
and for $\tilde{x}_{t_i}$:
$$\begin{aligned}
\tilde{x}_{t_i} &= \frac{\sigma_{t_i}}{\sigma_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \alpha_{t_i}\left(e^{-h_i} - 1\right)\left( \left(1 - \frac{1}{2 r_i}\right) x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) + \frac{1}{2 r_i} x_\theta(u_i, s_i) \right) \\
&= \frac{\alpha_{t_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{t_i}\left(e^{h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - \frac{\alpha_{t_i}}{2 r_i}\left(e^{-h_i} - 1\right)\left( x_\theta(u_i, s_i) - x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) \\
&= \frac{\alpha_{t_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{t_i}\left(e^{h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) + \frac{\sigma_{t_i}}{2 r_i}\left(e^{h_i} - 1\right) e^{\lambda_{t_{i-1}}}\left( x_\theta(u_i, s_i) - x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right).
\end{aligned}$$
Moreover,
$$\begin{aligned}
e^{\lambda_{t_{i-1}}}\left( x_\theta(u_i, s_i) - x_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) &= e^{\lambda_{t_{i-1}}}\left( \frac{1}{\alpha_{s_i}} u_i - \frac{1}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - e^{-\lambda_{s_i}} \epsilon_\theta(u_i, s_i) + e^{-\lambda_{t_{i-1}}} \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) \\
&= e^{\lambda_{t_{i-1}}}\left( -e^{-\lambda_{s_i}}\left(e^{\lambda_{s_i} - \lambda_{t_{i-1}}} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - e^{-\lambda_{s_i}} \epsilon_\theta(u_i, s_i) + e^{-\lambda_{t_{i-1}}} \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right) \\
&= e^{\lambda_{t_{i-1}}}\left( e^{-\lambda_{s_i}} \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - e^{-\lambda_{s_i}} \epsilon_\theta(u_i, s_i) \right) \\
&= e^{-r_i h_i}\left( \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - \epsilon_\theta(u_i, s_i) \right),
\end{aligned}$$
so we have
$$\tilde{x}_{t_i} = \frac{\alpha_{t_i}}{\alpha_{t_{i-1}}} \tilde{x}_{t_{i-1}} - \sigma_{t_i}\left(e^{h_i} - 1\right) \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) - \frac{\sigma_{t_i}}{2 r_i}\left(e^{h_i} - 1\right) e^{-r_i h_i}\left( \epsilon_\theta(u_i, s_i) - \epsilon_\theta(\tilde{x}_{t_{i-1}}, t_{i-1}) \right).$$

D EXPERIMENT DETAILS

We list all the detailed experimental results in this section. 



[Figure 2 panel titles: (a) From $\epsilon_\theta$ to $x_\theta$. (c) Thresholding.]

Figure 2: Ablation study for DPM-Solver++. Sample quality measured by FID ↓ of different sampling methods for DPMs on ImageNet 256x256 with guidance scale 8.0, varying the number of function evaluations (NFE).

Figure 3: (a-c) Sample quality measured by FID ↓ of different sampling methods for DPMs on ImageNet 256x256 with different guidance scale s, varying the number of function evaluations (NFE). † : results by combining the solver with the dynamic thresholding method (Saharia et al., 2022b). (d) Convergence error measured by L2-norm ↓ (divided by dimension) between different sampling methods and 1000-step DDIM, varying the number of function evaluations (NFE), for the latent-space DPM "stable-diffusion" (Rombach et al., 2022) on the MS-COCO2014 validation set, with the default guidance scale s = 7.5 from their official code.

Figure 4: Samples of different sampling methods for DPMs on ImageNet 256x256 with guidance scale 8.0.

Comparison between high-order diffusion ODE solvers based on exponential integrators, including DEIS (Zhang & Chen, 2022), DPM-Solver (Lu et al., 2022) and DPM-Solver++ (ours). As discussed in Sec. 3, high-order solvers may be unsuitable for large guidance scales; thus we mainly consider k = 2 in this work and leave solvers of higher orders for future study.

Sample quality measured by FID ↓ on ImageNet 256×256 (discrete-time model (Dhariwal & Nichol, 2021)), varying the method between DDIM (Song et al., 2021a) and different types of DEIS (Zhang & Chen, 2022). The number of function evaluations (NFE) is fixed to 10.

Sample quality measured by FID ↓ on ImageNet 256×256 (discrete-time model (Dhariwal & Nichol, 2021)), varying the number of function evaluations (NFE).

Sample quality measured by MSE ↓ on the COCO2014 validation set (discrete-time latent model (Rombach et al., 2022)), varying the number of function evaluations (NFE). The guidance scale is 7.5, which is the recommended setting for stable-diffusion (Rombach et al., 2022).

C IMPLEMENTATION DETAILS

C.1 CONVERTING DISCRETE-TIME DPMS TO CONTINUOUS-TIME

Discrete-time DPMs (Ho et al., 2020) train the noise prediction model $\epsilon_\theta$ at $N$ fixed time steps $\{t_n\}_{n=1}^N$, and the noise prediction model is parameterized by $\epsilon_\theta(x_n, \frac{1000 n}{N})$ for $n = 1, \dots, N$, where each $x_n$ corresponds to the value at time $t_n$. In practice, these discrete-time DPMs usually choose uniform time steps in $[0, T]$, thus $t_n = \frac{nT}{N}$ for $n = 1, \dots, N$, and the smallest time is $\frac{T}{N}$. Moreover, for the widely-used DDPM (Ho et al., 2020), we usually choose a sequence $\{\beta_n\}_{n=1}^N$ defined by either a linear schedule (Ho et al., 2020) or a cosine schedule (Nichol & Dhariwal, 2021). After obtaining the $\beta_n$ sequence, the noise schedule $\alpha_n$ is defined by
$$\alpha_n = \sqrt{\prod_{i=1}^{n} (1 - \beta_i)},$$
where each $\alpha_n$ corresponds to the continuous time $t_n = \frac{nT}{N}$, i.e. $\alpha_{t_n} = \alpha_n$. To generalize the discrete $\alpha_n$ to a continuous version, we use linear interpolation of $\log \alpha_n$. Specifically, for each $t \in [t_n, t_{n+1}]$, we define
$$\log \alpha_t = \frac{t_{n+1} - t}{t_{n+1} - t_n} \log \alpha_{t_n} + \frac{t - t_n}{t_{n+1} - t_n} \log \alpha_{t_{n+1}}.$$
Therefore, we can obtain a continuous-time noise schedule $\alpha_t$ defined for all $t \in [\frac{T}{N}, T]$, the std $\sigma_t = \sqrt{1 - \alpha_t^2}$, and the logSNR $\lambda_t = \log \alpha_t - \log \sigma_t$. Moreover, the logSNR $\lambda_t$ is strictly decreasing in $t$, so the change-of-variable to $\lambda$ remains valid. In practice, we usually have $T = 1$ and $N = 1000$, so the smallest time is $10^{-3}$. Therefore, we solve the diffusion ODEs from time $t = 1$ to time $t = 10^{-3}$ to get our final sample. Such sampling reduces to the first-order discrete-time DDIM solver when using uniform time steps.
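The interpolation above can be sketched as follows; this is a minimal illustration (the helper name is ours) assuming the standard DDPM relation $\alpha_n^2 = \prod_{i \le n}(1 - \beta_i)$:

```python
import numpy as np

def make_continuous_alpha(betas, T=1.0):
    """Linear interpolation of log(alpha_n) over t in [T/N, T].

    betas: discrete {beta_n} sequence, e.g. a linear schedule.
    Returns a function alpha_t(t); the std and logSNR then follow as
    sigma_t = sqrt(1 - alpha_t**2), lambda_t = log(alpha_t) - log(sigma_t).
    """
    N = len(betas)
    # log alpha_n = 0.5 * sum_{i<=n} log(1 - beta_i)
    log_alphas = 0.5 * np.cumsum(np.log(1.0 - betas))
    t_grid = T * np.arange(1, N + 1) / N  # t_n = nT/N

    def alpha_t(t):
        return np.exp(np.interp(t, t_grid, log_alphas))

    return alpha_t
```

For the linear schedule of Ho et al. (2020), `betas = np.linspace(1e-4, 0.02, 1000)`; the resulting `alpha_t` is strictly decreasing in t, so the logSNR is strictly decreasing as well.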

C.2 ABLATING TIME STEPS

DEIS was previously tuned only on the low-resolution dataset CIFAR-10, and its settings may not be suitable for high-resolution data such as ImageNet 256x256 and for large guidance scales in guided sampling. For a fair comparison with the baseline samplers, we first conduct an ablation study on the time steps with the pretrained DPMs (Dhariwal & Nichol, 2021) on ImageNet 256x256, varying the classifier guidance scale. In our experiments, we tune the time step schedule according to their power-function choices. Specifically, let $t_M = 10^{-3}$ and $t_0 = 1$; the time steps $\{t_i\}_{i=0}^M$ satisfy
$$t_i = \left( t_0^{1/\kappa} + \frac{i}{M}\left( t_M^{1/\kappa} - t_0^{1/\kappa} \right) \right)^{\kappa},$$
where $\kappa$ is a hyperparameter. Following Zhang & Chen (2022), we search $\kappa$ over $\{1, 2, 3\}$ with DEIS, and the results are shown in Table 2. We find that for all guidance scales, the best setting is $\kappa = 1$, i.e. time steps uniform in $t$. We further compare uniform $t$ and uniform $\lambda$ and find that the uniform-$t$ time step schedule is still the best choice. Therefore, we use uniform $t$ in all of our evaluations.
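The power-function schedule above can be sketched as follows (the function name is ours; κ = 1 recovers the uniform-t schedule we adopt):

```python
import numpy as np

def power_time_steps(M, t_start=1.0, t_end=1e-3, kappa=1.0):
    """Time steps {t_i} from t_0 = t_start down to t_M = t_end,
    uniform in t**(1/kappa) and mapped back by the power kappa."""
    grid = np.linspace(t_start ** (1.0 / kappa), t_end ** (1.0 / kappa), M + 1)
    return grid ** kappa
```

Larger κ clusters more steps near t = 0, where the data prediction changes quickly; κ = 1 spreads them uniformly in t.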

C.3 EXPERIMENT SETTINGS

We use the uniform time step schedule for all experiments. In particular, as DPM-Solver (Lu et al., 2022) is designed for uniform $\lambda$ (its intermediate time steps sit at half of the step size w.r.t. $\lambda$), we also convert the intermediate time steps so that all time steps are uniform in $t$. We find that such conversion can improve the sample quality of both the singlestep DPM-Solver and the singlestep DPM-Solver++. We run NFE in {10, 15, 20, 25} for the high-order solvers and additionally {50, 100, 250} for DDIM. For all experiments, we solve the diffusion ODEs from $t = 1$ to $t = 10^{-3}$ with the interpolated noise schedule detailed in Appendix C.1. For DEIS, we use the "t-AB-k" methods for k = 1, 2, 3, which are the fastest methods in their original paper, and we name them DEIS-k, respectively. For the sampled image in Fig. 5, we use the prompt "A beautiful castle beside a waterfall in the woods, by Josef Thoma, matte painting, trending on artstation HQ".
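The conversion of intermediate time steps mentioned above amounts to replacing the λ-midpoint used by DPM-Solver with the t-midpoint. A sketch, assuming an example VP-style schedule with alpha = cos t and sigma = sin t (so λ(t) = log(cot t) is strictly decreasing; both the schedule and the helper names are illustrative assumptions):

```python
import math

def lam(t):
    # example log-SNR for alpha = cos(t), sigma = sin(t); strictly decreasing in t
    return math.log(math.cos(t) / math.sin(t))

def t_midpoint(t_prev, t_next):
    """Intermediate time for uniform-t steps (the choice used here)."""
    return 0.5 * (t_prev + t_next)

def lambda_midpoint(t_prev, t_next, tol=1e-12):
    """Intermediate time of the original DPM-Solver: the s with
    lam(s) = (lam(t_prev) + lam(t_next)) / 2, found by bisection."""
    target = 0.5 * (lam(t_prev) + lam(t_next))
    lo, hi = min(t_prev, t_next), max(t_prev, t_next)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lam(mid) > target:  # lam decreases in t, so s lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Real implementations typically invert λ through the tabulated noise schedule rather than a closed form, but the bisection illustrates the idea.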

