LOSSY COMPRESSION WITH GAUSSIAN DIFFUSION

Abstract

We consider a novel lossy compression approach based on unconditional diffusion generative models, which we call DiffC. Unlike modern compression schemes which rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise. We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform, outperforming the state-of-the-art generative compression method HiFiC on ImageNet 64x64. DiffC only uses a single model to encode and denoise corrupted pixels at arbitrary bitrates. The approach further provides support for progressive coding, that is, decoding from partial bit streams. We perform a rate-distortion analysis to gain a deeper understanding of its performance, providing analytical results for multivariate Gaussian data as well as theoretical bounds for general distributions. Furthermore, we prove that a flow-based reconstruction achieves a 3 dB gain over ancestral sampling at high bitrates.

1. INTRODUCTION

We are interested in the problem of lossy compression with perfect realism. As in typical lossy compression applications, our goal is to communicate data using as few bits as possible while simultaneously introducing as little distortion as possible. However, we additionally require that reconstructions X̂ have (approximately) the same marginal distribution as the data, X̂ ∼ X. When this constraint is met, reconstructions are indistinguishable from real data or, in other words, appear perfectly realistic. Lossy compression with realism constraints is receiving increasing attention as more powerful generative models bring a solution ever closer within reach. Theoretical arguments (Blau and Michaeli, 2018; 2019; Theis and Agustsson, 2021; Theis and Wagner, 2021) and empirical results (Tschannen et al., 2018; Agustsson et al., 2019; Mentzer et al., 2020) suggest that generative compression approaches have the potential to achieve significantly lower bitrates than approaches targeting distortion alone, at similar perceived quality. The basic idea behind existing generative compression approaches is to replace the decoder with a conditional generative model and to sample reconstructions. Diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), also known as score-based generative models (Song et al., 2021; Dockhorn et al., 2022), are a class of generative models which have recently received a lot of attention for their ability to generate realistic images (e.g., Dhariwal and Nichol, 2021; Nichol et al., 2021; Ho et al., 2022; Kim and Ye, 2022; Ramesh et al., 2022). While generative compression work has mostly relied on generative adversarial networks (Goodfellow et al., 2014; Tschannen et al., 2018; Agustsson et al., 2019; Mentzer et al., 2020; Gao et al., 2021), Saharia et al. (2021) provided evidence that this approach may also work well with diffusion models by using conditional diffusion models for JPEG artefact removal.
In Section 3, we describe a novel lossy compression approach based on diffusion models. Unlike typical generative compression approaches, our approach relies on an unconditionally trained generative model. Modern lossy compression schemes comprise at least an encoder transform, a decoder transform, and an entropy model (Ballé et al., 2021; Yang et al., 2022), whereas our approach only uses a single model. Surprisingly, we find that this simple approach can work well despite lacking an encoder transform; instead, we add isotropic Gaussian noise directly to the pixels (Section 5). By using varying degrees of Gaussian noise, the same model can further be used to communicate data at arbitrary bitrates. The approach is naturally progressive, that is, reconstructions can be generated from an incomplete bitstream. To better understand why the approach works well, we perform a rate-distortion analysis in Section 4. We find that isotropic Gaussian noise is generally not optimal, even for Gaussian distributed data and mean-squared error (MSE) distortion. However, we also observe that isotropic noise is close to optimal. We further prove that a reconstruction based on the probability flow ODE (Song et al., 2021) cuts the distortion in half at high bitrates when compared to ancestral sampling from the diffusion model.

We will use capital letters such as X to denote random variables, lower-case letters such as x to denote corresponding instances, and non-bold letters such as x_i for scalars. We reserve log for the logarithm to base 2 and use ln for the natural logarithm.

2. RELATED WORK

Many previous papers observed connections between variational autoencoders (VAEs; Kingma and Welling, 2014; Rezende et al., 2014) and rate-distortion optimization (e.g., Theis et al., 2017; Ballé et al., 2017; Alemi et al., 2018; Brekelmans et al., 2019; Agustsson and Theis, 2020) . More closely related to our approach, Agustsson and Theis (2020) turned a VAE into a practical lossy compression scheme by using dithered quantization to communicate uniform samples. Similarly, our scheme relies on random coding to communicate Gaussian samples and uses diffusion models, which can be viewed as hierarchical VAEs with a fixed encoder. Ho et al. (2020) considered the rate-distortion performance of an idealized but closely related compression scheme based on diffusion models. In contrast to Ho et al. (2020) , we are considering distortion under a perfect realism constraint and provide the first theoretical and empirical results demonstrating that the approach works well. Importantly, random coding is known to provide little benefit and can even hurt performance when only targeting a rate-distortion trade-off (Agustsson and Theis, 2020; Theis and Agustsson, 2021) . On the other hand, random codes can perform significantly better than deterministic codes when realism constraints are considered (Theis and Agustsson, 2021) . Ho et al. (2020) contemplated the use of minimal random coding (MRC; Havasi et al., 2019) to encode Gaussian samples. However, MRC only communicates an approximate sample. In contrast, we consider schemes which communicate an exact sample, allowing us to avoid issues such as error propagation. Finally, we use an upper bound instead of a lower bound as a proxy for the coding cost, which guarantees that our estimated rates are achievable. While modern lossy compression schemes rely on transform coding, very early work by Roberts (1962) experimented with dithered quantization applied directly to grayscale pixels. 
Roberts (1962) found that dither was perceptually more pleasing than the banding artefacts caused by quantization. Similarly, we apply Gaussian noise directly to pixels but additionally use a powerful generative model for entropy coding and denoising. Another line of work in compression explored anisotropic diffusion to denoise and inpaint missing pixels (Galić et al., 2008). This use of diffusion is fundamentally different from ours. Anisotropic diffusion smooths an individual image, whereas the diffusion processes considered in this paper increase the high spatial frequency content of individual images but have a smoothing effect on the distribution over images. Yan et al. (2021) claimed that under a perfect realism constraint, the best achievable rate is R(D/2), where R is the rate-distortion function (Eq. 7). It was further claimed that optimal performance can be achieved by optimizing an encoder for distortion alone while ignoring the realism constraint and using ancestral sampling at the decoder. Contrary to these claims, we show that our approach can exceed this performance and achieve up to 3 dB better signal-to-noise ratio at the same rate (Figure 2). The discrepancy can be explained by Yan et al. (2021) only considering deterministic codes, whereas we allow random codes with access to shared randomness. In random codes, the communicated bits not only depend on the data but are a function of the data and an additional source of randomness shared between the encoder and the decoder (typically implemented by a pseudo-random number generator). Our results are in line with the findings of Theis and Agustsson (2021), who showed on a toy example that shared randomness can lead to significantly better performance in the one-shot setting, and those of Zhang et al. (2021) and Wagner (2022), who studied the rate-distortion-perception function (Blau and Michaeli, 2019) of normal distributions.
In this paper, we provide additional results for the multivariate Gaussian case (Section 4.2). An increasing number of neural compression approaches are targeting realism (e.g., Tschannen et al., 2018; Agustsson et al., 2019; Mentzer et al., 2020; Gao et al., 2021; Lytchier et al., 2021; Zeghidour et al., 2022). However, virtually all of these approaches rely on transform coding combined with distortions based on VGG (Simonyan and Zisserman, 2015) and adversarial losses (Goodfellow et al., 2014). In contrast, we use a single unconditionally trained diffusion model (Sohl-Dickstein et al., 2015). Unconditional diffusion models have been used for lossless compression with the help of bits-back coding (Kingma et al., 2021), but bits-back coding by itself is unsuitable for lossy compression. We show that significant bitrate savings can be achieved compared to lossless compression even when allowing only imperceptible distortions (Fig. 3).

3. LOSSY COMPRESSION WITH DIFFUSION

The basic idea behind our compression approach is to efficiently communicate a corrupted version of the data,

    Z_t = √(1 − σ_t²) X + σ_t U  where  U ∼ N(0, I),    (1)

from the sender to the receiver, and then to use a diffusion generative model to generate a reconstruction. Z_t can be viewed as the solution to a Gaussian diffusion process given by the stochastic differential equation (SDE)

    dZ_t = −½ β_t Z_t dt + √β_t dW_t,  Z_0 = X,  where  σ_t² = 1 − exp(−∫₀ᵗ β_τ dτ)    (2)

and W_t is Brownian motion. Diffusion generative models try to invert this process by learning the conditional distributions p(z_s | z_t) for s < t (Song et al., 2021). If s and t are sufficiently close, then this conditional distribution is approximately Gaussian. We refer to Sohl-Dickstein et al. (2015), Ho et al. (2020), and Song et al. (2021) for further background on diffusion models.

Noise has a negative effect on the performance of typical compression schemes (Al-Shaykh and Mersereau, 1998). However, Bennett and Shor (2002) proved that it is possible to communicate an instance of Z_t using not much more than I[X, Z_t] bits. Note that this mutual information decreases as the level of noise increases. Li and El Gamal (2018) described a more concrete random coding approach for communicating an exact sample of Z_t (Appendix A). An upper bound was provided for its coding cost, namely

    I[X, Z_t] + log(I[X, Z_t] + 1) + 5 bits.    (3)

Notice that the second and third term become negligible when the mutual information is sufficiently large. If the sender and receiver do not have access to the true marginal of Z_t but instead assume the marginal distribution to be p_t, the upper bound on the coding cost becomes (Theis and Yosri, 2022)

    C_t + log(C_t + 1) + 5  where  C_t = E_X[D_KL[q(z_t | X) ‖ p_t(z_t)]]    (4)

and q is the distribution of Z_t given X, which in our case is Gaussian. In practice, the coding cost can be significantly closer to C_t than the upper bound (Theis and Yosri, 2022; Flamich et al., 2022).
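As a minimal sketch of the corruption process above, the following draws Z_t from X for a given noise schedule (assuming numpy; the linear β schedule and its endpoint values are illustrative choices, not the schedule used in our experiments):

```python
import numpy as np

def sigma2(t, beta0=0.1, beta1=20.0):
    """Noise variance sigma_t^2 = 1 - exp(-int_0^t beta_tau dtau) for an
    assumed linear schedule beta_t = beta0 + (beta1 - beta0) * t, t in [0, 1]."""
    integral = beta0 * t + 0.5 * (beta1 - beta0) * t**2
    return 1.0 - np.exp(-integral)

def corrupt(x, t, rng):
    """Draw Z_t = sqrt(1 - sigma_t^2) * X + sigma_t * U with U ~ N(0, I)."""
    s2 = sigma2(t)
    return np.sqrt(1.0 - s2) * x + np.sqrt(s2) * rng.standard_normal(x.shape)
```

Note that for unit-variance data the corruption preserves the overall variance, since (1 − σ_t²) + σ_t² = 1; only the share of information about X shrinks as t grows.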
We refer to Theis and Yosri (2022) for an introduction to the problem of efficient sample communication, also known as reverse channel coding, as well as a discussion of practical implementations of the approach of Li and El Gamal (2018). To follow the results of this paper, the reader only needs to know that an exact sample of a distribution q can be communicated with a number of bits which is at most the bound given in Eq. 4, and that this is possible even when q is continuous. The bound above is analogous to the well-known result that the cost of entropy coding can be bounded in terms of H + 1, where H is a cross-entropy (e.g., Cover and Thomas, 2006). However, to provide some intuition for reverse channel coding, we briefly describe the high-level idea. Candidates Z_t^(1), Z_t^(2), Z_t^(3), ... are generated by drawing samples from p_t. The encoder then selects one of the candidates, with index N*, in a manner similar to rejection sampling such that Z_t^(N*) ∼ q. Since the candidates are independent of the data, they can be generated by both the sender and receiver (for example, using a pseudo-random number generator with the same random seed) and only the selected candidate's index N* needs to be communicated. The entropy of N* is bounded by Eq. 4. Further details and pseudocode are provided in Appendix A.

Unfortunately, Gaussian diffusion models do not provide us with tractable marginal distributions p_t. Instead, they give us access to conditional distributions p(z_s | z_{s+1}) and assume p_T is isotropic Gaussian. This suggests a scheme where we first transmit an instance of Z_T and then successively refine the receiver's information by transmitting an instance of Z_s given Z_{s+1} until Z_t is reached. This approach incurs an overhead for the coding cost of each conditional sample (which we consider in Fig. 10 of Appendix I).
Alternatively, we can communicate a Gaussian sample from the joint distribution q(z_{T:t} | X) directly while assuming a marginal distribution p(z_{T:t}). This achieves a coding cost upper bounded by Eq. 4 where

    C_t = E[D_KL[q(z_T | X) ‖ p_T(z_T)]] + Σ_{s=t}^{T−1} E[D_KL[q(z_s | Z_{s+1}, X) ‖ p(z_s | Z_{s+1})]].    (5)

Reverse channel coding still poses several unsolved challenges in practice. In particular, the scheme proposed by Li and El Gamal (2018) is computationally expensive, though progress on more efficient schemes is being made (Agustsson and Theis, 2020; Theis and Yosri, 2022; Flamich et al., 2022). In the following we will mostly ignore issues of computational complexity and instead focus on the question of whether the approach described above is worth considering at all. After all, it is not immediately clear that adding isotropic Gaussian noise directly to the data limits information in a useful way.

We will consider two alternatives for reconstructing data given Z_t. First, we will consider ancestral sampling, X̂ ∼ p(x | Z_t), which corresponds to simulating the SDE in Eq. 2 in reverse (Song et al., 2021). Second, we will consider a deterministic reconstruction which instead tries to reverse the ODE

    dz_t = [−½ β_t z_t − ½ β_t ∇ ln p_t(z_t)] dt.    (6)

Maoutsa et al. (2020) and Song et al. (2021) showed that this "probability flow" ODE produces the same trajectory of marginal distributions p_t as the Gaussian diffusion process in Eq. 2 and that it can be simulated using the same model of ∇ ln p_t(z_t). We will refer to these alternatives as DiffC-A when ancestral sampling is used and DiffC-F when the flow-based reconstruction is used.
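For a one-dimensional Gaussian source, where ∇ ln p_t is available in closed form, the probability flow ODE can be integrated numerically. A small sketch (assuming numpy; the constant β and parameter values are illustrative): for X ∼ N(0, λ) the marginal of Z_t is N(0, v_t) with v_t = (1 − σ_t²)λ + σ_t², so ∇ ln p_t(z) = −z/v_t and the ODE reduces to dz/dt = −½β z (1 − 1/v_t).

```python
import numpy as np

LAM, BETA = 2.0, 1.0  # illustrative source variance and (constant) beta

def v(t):
    """Marginal variance of Z_t for X ~ N(0, LAM) with constant beta."""
    s2 = 1.0 - np.exp(-BETA * t)
    return (1.0 - s2) * LAM + s2

def flow_to_zero(z, t, steps=5000):
    """Euler-integrate the probability flow ODE from time t down to 0."""
    h = t / steps
    for i in range(steps):
        ti = t - i * h
        z = z - h * (-0.5 * BETA * z * (1.0 - 1.0 / v(ti)))
    return z
```

Since all marginals are zero-mean Gaussians here, the flow map is simply a rescaling z ↦ z·√(v_0/v_t), which the integrator should reproduce; this gives a cheap sanity check of the implementation.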

4. A RATE-DISTORTION ANALYSIS

In this section we try to understand the performance of DiffC from a rate-distortion perspective. This will be achieved by considering the Gaussian case, where optimal rate-distortion trade-offs can be computed analytically, and by providing bounds on the performance in the general case. Throughout this paper, we measure distortion in terms of squared error. For our theoretical analysis we will further assume that the diffusion model has learned the data distribution perfectly.

The (information) rate-distortion function is given by

    R(D) = inf_{X̂} I[X, X̂]  subject to  E[‖X̂ − X‖²] ≤ D.    (7)

It measures the smallest achievable bitrate for a given level of distortion and decreases as D increases. The rate as defined above does not make any assumptions about the marginal distribution of the reconstructions. However, here we demand perfect realism, that is, X̂ ∼ X. To achieve this constraint, a deterministic encoder requires a higher bitrate of R(D/2) (Blau and Michaeli, 2019; Theis and Agustsson, 2021). As we will see below, lower bitrates can be achieved using random codes as in our diffusion approach. Nevertheless, R(D/2) serves as an interesting benchmark, as most existing codecs use deterministic codes, that is, the bits received by the decoder are solely determined by the data.

For an M-dimensional Gaussian data source whose covariance has eigenvalues λ_i, the rate-distortion function is known to be (Cover and Thomas, 2006)

    R*(D) = ½ Σ_i log(λ_i / D_i)  where  D_i = min(λ_i, θ)    (8)

for some threshold θ chosen such that D = Σ_i D_i. For sufficiently small distortion D and assuming positive eigenvalues, we have constant D_i = θ = D/M.
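The reverse water-filling solution above is easy to evaluate numerically. A minimal sketch (assuming numpy; `rd_gaussian` is our name, and the water level θ is found by bisection):

```python
import numpy as np

def rd_gaussian(eigenvalues, D):
    """R*(D) in bits for a Gaussian source with the given covariance
    eigenvalues, via the reverse water-filling solution."""
    lam = np.asarray(eigenvalues, dtype=float)
    assert 0.0 < D <= lam.sum()
    # Bisect for the water level theta with sum_i min(lambda_i, theta) = D.
    lo, hi = 0.0, lam.max()
    for _ in range(200):
        theta = 0.5 * (lo + hi)
        if np.minimum(lam, theta).sum() < D:
            lo = theta
        else:
            hi = theta
    Di = np.minimum(lam, theta)
    return 0.5 * np.sum(np.log2(lam / Di))
```

For example, with two unit eigenvalues and D = 1, each component receives distortion ½ and the rate is ½·log(1/½) per component, i.e., 1 bit in total.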

4.1. STANDARD NORMAL DISTRIBUTION

As a simple first example, consider a standard normal distribution X ∼ N(0, 1). Using ancestral sampling, the reconstruction becomes X̂ = √(1 − σ²) Z + σV where Z = √(1 − σ²) X + σU, U, V ∼ N(0, 1), and we have dropped the dependence on t to reduce clutter. The distortion and rate in this case are easily calculated to be

    D = E[(X̂ − X)²] = 2σ²,  I[X, Z] = −log σ = ½ log(2/D) = R*(D/2).    (10)

This matches the performance of an optimal deterministic code. However, Z already has the desired standard normal distribution, and adding further noise to it did nothing to increase the realism or reduce the distortion of the reconstruction. The flow-based reconstruction instead yields dZ_t = 0 and X̂ = Z (by inserting the standard normal for p_t in Eq. 6), resulting in the smaller distortion

    D = E[(X̂ − X)²] = E[(X − Z)²] = 2 − 2√(1 − σ²).    (11)
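The two distortions above can be verified with a quick Monte Carlo sketch (assuming numpy; the sample size and noise level are arbitrary choices). Note that for small σ the flow distortion approaches σ², half the ancestral distortion 2σ².

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 0.1, 400_000
x = rng.standard_normal(n)                            # X ~ N(0, 1)
u, w = rng.standard_normal(n), rng.standard_normal(n)
z = np.sqrt(1 - sigma**2) * x + sigma * u             # channel output Z
x_ancestral = np.sqrt(1 - sigma**2) * z + sigma * w   # ancestral reconstruction
x_flow = z                                            # flow-based reconstruction
d_ancestral = np.mean((x_ancestral - x) ** 2)         # theory: 2 sigma^2
d_flow = np.mean((x_flow - x) ** 2)                   # theory: 2 - 2 sqrt(1 - sigma^2)
```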

4.2. MULTIVARIATE GAUSSIAN

Next, let us consider X ∼ N(0, Σ) and Z = √(1 − σ²) X + σU where U ∼ N(0, I). Assume λ_i are the eigenvalues of Σ. Since both the squared reconstruction error and the mutual information between X and Z are invariant under rotations of X, we can assume the covariance to be diagonal. Otherwise we just rotate X to diagonalize the covariance matrix without affecting the results of our analysis. If X̂ ∼ P(X | Z), we get the distortion and rate (Appendix C)

    D = E[‖X̂ − X‖²] = 2 Σ_i D̃_i,  I[X, Z] = ½ Σ_i log(λ_i / D̃_i) ≥ R*(D/2),    (12)

where D̃_i = λ_i σ² / (σ² + λ_i − λ_i σ²). That is, the performance is generally worse than the performance achieved by the best deterministic encoder.

We can modify the diffusion process to improve the rate-distortion performance of ancestral sampling. Namely, let V_i ∼ N(0, 1) and

    Z_i = √(1 − γ_i²) X_i + γ_i √λ_i U_i,  X̂_i = √(1 − γ_i²) Z_i + γ_i √λ_i V_i,    (13)

where γ_i² = min(1, θ/λ_i) for some θ. This amounts to using a different noise schedule along different principal directions instead of adding the same amount of noise in all directions. For natural images, the modified schedule destroys information in high-frequency components more quickly (Fig. 2B) and for Gaussian data sources again matches the performance of the best deterministic code,

    D = 2 Σ_i λ_i γ_i² = 2 Σ_i D_i,  I[X, Z] = −Σ_i log γ_i = ½ Σ_i log(λ_i / D_i) = R*(D/2),    (14)

where D_i = λ_i γ_i² = min(λ_i, θ). Still better performance can be achieved via flow-based reconstruction. Here, isotropic noise is again suboptimal and the optimal noise for a flow-based reconstruction is given by (Appendix D)

    Z_i = α_i X_i + √((1 − α_i²) λ_i) U_i,  where  α_i = (√(λ_i² + θ²) − θ) / λ_i    (15)

for some θ ≥ 0. Z already has the desired distribution and we can set X̂ = Z. We will refer to the two approaches using optimized noise as DiffC-A* and DiffC-F*, respectively, though strictly speaking these types of noise may no longer correspond to diffusion processes.

Figure 2: A: Rate-distortion curves for a Gaussian source fitted to 16x16 image patches extracted from ImageNet 64x64. Isotropic noise performs nearly as well as the optimal noise (dashed). As an additional point of comparison, we include pink noise (P) matching the covariance of the data distribution. The curve of DiffC-A* corresponds to R*(D/2). A flow-based reconstruction yields up to 3 dB better signal-to-noise ratio (SNR). B: SNR broken down by principal component. The level of noise here is fixed to yield a rate of approximately 0.391 bits per dimension for each type of noise. Note that the SNR of DiffC-A* is zero for over half of the components.

Figure 2A shows the rate-distortion performance of the various noise schedules and reconstructions on the example of a 256-dimensional Gaussian fitted to 16x16 grayscale image patches extracted from 64x64 downsampled ImageNet images (van den Oord et al., 2016). Here, SNR = 10 log₁₀(2 · E[‖X‖²]) − 10 log₁₀(E[‖X̂ − X‖²]).
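The gap between isotropic and optimal noise can be checked numerically. A minimal sketch (assuming numpy; the two eigenvalues are illustrative, not fitted to image patches) computes the rate of ancestral sampling with isotropic noise and compares it with R*(D/2) obtained by reverse water-filling at the same distortion:

```python
import numpy as np

lam = np.array([2.0, 0.5])   # illustrative covariance eigenvalues
sigma2 = 0.1                 # isotropic noise level

# Distortion and rate of ancestral sampling with isotropic noise,
# using the expression for the per-component distortion D~_i above.
Dt = lam * sigma2 / (sigma2 + lam - lam * sigma2)
D = 2 * Dt.sum()
rate_iso = 0.5 * np.sum(np.log2(lam / Dt))

# R*(D/2), the rate of the best deterministic code, via reverse water-filling.
lo, hi = 0.0, lam.max()
for _ in range(200):
    theta = 0.5 * (lo + hi)
    if np.minimum(lam, theta).sum() < D / 2:
        lo = theta
    else:
        hi = theta
rate_opt = 0.5 * np.sum(np.log2(lam / np.minimum(lam, theta)))
```

In this example the isotropic rate exceeds the optimum by only a few thousandths of a bit, illustrating the observation that isotropic noise is suboptimal but close to optimal.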

4.3. GENERAL DATA DISTRIBUTIONS

Considering more general source distributions, our first result bounds the rate of DiffC-A*.

Theorem 1. Let X : Ω → R^M be a random variable with finite differential entropy, zero mean, and covariance diag(λ_1, ..., λ_M). Let U ∼ N(0, I) and define

    Z_i = √(1 − γ_i²) X_i + γ_i √λ_i U_i,  X̂ ∼ P(X | Z),    (16)

where γ_i² = min(1, θ/λ_i) for some θ. Further, let X* be a Gaussian random variable with the same first- and second-order moments as X and let Z* be defined analogously to Z but in terms of X*. Then if R is the rate-distortion function of X and R* is the rate-distortion function of X*,

    I[X, Z] ≤ R*(D/2) − D_KL[P_Z ‖ P_{Z*}] ≤ R(D/2) + D_KL[P_X ‖ P_{X*}] − D_KL[P_Z ‖ P_{Z*}],    (17)

where D = E[‖X̂ − X‖²].

Proof. See Appendix E.

In line with expectations, this result implies that when X is approximately Gaussian, the rate of DiffC-A* is not far from the rate of the best deterministic encoder, R(D/2). It further implies that the rate is close to R(D/2) in the high-bitrate regime if the differential entropy of X is finite. This can be seen by noting that the second KL divergence will approach the first KL divergence as the rate increases, since P_{Z*} = P_{X*} and the distribution of Z will be increasingly similar to that of X.

Our next result compares the error of DiffC-F with DiffC-A's at the same bitrate. For simplicity, we assume that X has a smooth density and further consider the following measure of smoothness,

    G = E[‖∇ ln p(X)‖²].    (18)

Among distributions with a continuously differentiable density and unit variance, the standard normal distribution minimizes G and achieves G = 1. For comparison, the Laplace distribution has G = 2. (Alternatively, imagine a sequence of smooth approximations converging to the Laplace density.) For discrete data such as RGB images, we may instead consider the distribution of pixels with an imperceptible amount of Gaussian noise added to it (see also Fig. 5 in Appendix F).

Theorem 2.
Let X : Ω → R^M have a smooth density p with finite G (Eq. 18). Let Z_t be defined as in Eq. 1, X̂_A ∼ P(X | Z_t), and let X̂_F = Ẑ_0 be the solution to Eq. 6 with Z_t as initial condition. Then

    lim_{σ_t → 0} E[‖X̂_F − X‖²] / E[‖X̂_A − X‖²] = ½.    (19)

Proof. See Appendix F.

This result implies that in the limit of high bitrates, the error of a flow-based reconstruction is only half that of the reconstruction obtained with ancestral sampling from a perfect model. This is consistent with Fig. 2, where we can observe an advantage of roughly 3 dB of DiffC-F over DiffC-A.

Finally, we provide conditions under which a flow-based reconstruction is provably the best reconstruction from input corrupted by Gaussian noise.

Theorem 3. Let X = QS where Q is an orthogonal matrix and S : Ω → R^M is a random vector with smooth density and S_i ⊥⊥ S_j for all i ≠ j. Define Z_t as in Eq. 1. If X̂_F = Ẑ_0 is the solution to the ODE in Eq. 6 given Z_t as initial condition, then

    E[‖X̂_F − X‖²] ≤ E[‖X̂ − X‖²]    (20)

for any X̂ with X̂ ⊥⊥ X | Z_t which achieves perfect realism, X̂ ∼ X.

Proof. See Appendix G.
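The values of G quoted above are easy to check numerically. A small sketch (assuming numpy; the sample size is arbitrary): for the standard normal, ∇ ln p(x) = −x, so G = E[x²] = 1; for the unit-variance Laplace with scale b = 1/√2, ∇ ln p(x) = −sign(x)/b, so ‖∇ ln p(x)‖² = 1/b² = 2 almost everywhere.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Standard normal: grad ln p(x) = -x, hence G = E[x^2] = 1.
x = rng.standard_normal(n)
G_normal = np.mean(x ** 2)

# Unit-variance Laplace (scale b = 1/sqrt(2)): grad ln p(x) = -sign(x)/b,
# hence |grad ln p(x)|^2 = 1/b^2 = 2 almost everywhere.
b = 1.0 / np.sqrt(2.0)
y = rng.laplace(scale=b, size=n)
G_laplace = np.mean((np.sign(y) / b) ** 2)
```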

5. EXPERIMENTS

As a proof of concept, we implemented DiffC based on VDM (Kingma et al., 2021). VDM is a diffusion model which was optimized for log-likelihood (i.e., lossless compression) but not for perceptual quality. This suggests that VDM should work well in the high-bitrate regime but not necessarily at lower bitrates. Nevertheless, we find that we achieve surprisingly good performance across a wide range of bitrates. We used exactly the same network architecture and training setup as Kingma et al. (2021) except with a smaller batch size of 64 images and training our model for only 1.34M updates (instead of 2M updates with a batch size of 512) due to resource considerations. We used 1000 diffusion steps.

5.1. DATASET, METRICS, AND BASELINES

We used the downsampled version of the ImageNet dataset (Deng et al., 2009) (64x64 pixels) first used by van den Oord et al. (2016). The test set of ImageNet is known to contain many duplicates and to overlap with the training set (Kolesnikov et al., 2019). For a more meaningful evaluation (especially when comparing to non-neural baselines), we removed 4952 duplicates from the validation set as well as 744 images also occurring in the training set (based on SHA-256 hashes of the images). On this subset, we measured a negative ELBO of 3.48 bits per dimension for our model. We report FID (Heusel et al., 2017) and PSNR scores to quantify the performance of the different approaches. As is common in the compression literature, in this section we calculate a PSNR score for each image before averaging. For easier comparison with our theoretical results, we also offer PSNR scores calculated from the average MSE (Appendix I), although the numbers do not change markedly. When comparing bitrates between models, we used estimates of the upper bound given by Eq. 4 for DiffC. We compare against BPG (Bellard, 2018), a strong non-neural image codec based on the HEVC video codec which is known for achieving good rate-distortion results. We also compare against HiFiC (Mentzer et al., 2020), which is the state-of-the-art generative image compression model in terms of visual quality on high-resolution images. The approach is optimized for a combination of LPIPS (Zhang et al., 2018), MSE, and an adversarial loss (Goodfellow et al., 2014). The architecture of HiFiC is optimized for larger images and uses significant downscaling. We found that adapting the architecture of HiFiC slightly, by making the last/first layer of the encoder/decoder have stride 1 instead of stride 2, improves FID on ImageNet 64x64 compared to the publicly available model.
In addition to training the model from scratch, we also tried initializing the non-adapted filters from the public model and found that this improved results slightly. We trained 5 HiFiC models targeting 5 different bitrates.
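The two PSNR aggregation conventions mentioned above generally give different numbers. A toy sketch (assuming numpy; the per-image MSE values are made up): since −log is convex, averaging per-image PSNRs can only yield a value greater than or equal to the PSNR of the average MSE, by Jensen's inequality.

```python
import numpy as np

def psnr(mse, peak=255.0):
    """PSNR in dB for 8-bit images with the given mean squared error."""
    return 10.0 * np.log10(peak**2 / mse)

mses = np.array([1.0, 100.0])        # hypothetical per-image MSE values
psnr_per_image = psnr(mses).mean()   # average the per-image PSNRs
psnr_pooled = psnr(mses.mean())      # PSNR of the average MSE
```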

5.2. RESULTS

We find that DiffC-F gives perceptually pleasing results even at extremely low bitrates of around 0.2 bits per pixel (Fig. 3). Reconstructions are also still perceptually pleasing when the PSNR is relatively low at around 22 dB (e.g., compare to BPG in Fig. 1B). We further find that at very low bitrates, HiFiC produces artefacts typical of GANs while we did not observe similar artefacts with DiffC. Similar conclusions can be drawn from our quantitative comparison, with DiffC-F significantly outperforming HiFiC in terms of FID. FID scores of DiffC-A were only slightly worse (Fig. 4A). At high bitrates, DiffC-F achieves a PSNR roughly 2.4 dB higher than DiffC-A. This is in line with our theoretical predictions (3 dB), considering that the diffusion model only approximates the true distribution. PSNR values of DiffC-F and DiffC-A both exceed those of HiFiC and BPG, suggesting that Gaussian diffusion works well in a rate-distortion sense even for highly non-Gaussian distributions (Fig. 4B). Additional results are provided in Appendix I, including results for progressive coding and HiFiC trained for MSE only.

6. DISCUSSION

We presented and analyzed a new lossy compression approach based on diffusion models. This approach has the potential to greatly simplify lossy compression with realism constraints. Where typical generative approaches use an encoder, a decoder, an entropy model, an adversarial model and another model as part of a perceptual distortion loss, and train multiple sets of models targeting different bitrates, DiffC only uses a single unconditionally trained diffusion model. The fact that adding Gaussian noise to pixels achieves great rate-distortion performance raises interesting questions about the role of the encoder transform in lossy compression. Nevertheless, we expect further improvements are possible in terms of perceptual quality by applying DiffC in a latent space. Applying DiffC in a lower-dimensional transform space would also help to reduce its computational cost (Vahdat et al., 2021; Rombach et al., 2021; Gu et al., 2021; Pandey et al., 2022) . The high computational cost of DiffC makes it impractical in its current form. Generating a single image with VDM requires many diffusion steps, each involving the application of a deep neural network. However, speeding up diffusion models is a highly active area of research (e.g., Watson et al., 2021; Vahdat et al., 2021; Jolicoeur-Martineau et al., 2021; Kong and Ping, 2021; Salimans and Ho, 2022; Zhang and Chen, 2022) . For example, Salimans and Ho (2022) were able to reduce the number of diffusion steps from 1000 to around 4 at comparable sample quality. The computational cost of communicating a sample using the approach of Li and El Gamal (2018) grows exponentially with the coding cost. However, reverse channel coding is another active area of research (e.g., Havasi et al., 2019; Agustsson and Theis, 2020; Flamich et al., 2020) and much faster methods already exist for low-dimensional Gaussian distributions (Theis and Yosri, 2022; Flamich et al., 2022) . 
Our work offers strong motivation for further research into more efficient reverse channel coding schemes. As mentioned in Section 3, reverse channel coding may be applied after each diffusion step to send a sample of q(z_t | Z_{t+1}, X), or alternatively to the joint distribution q(z_{T:t} | X). The former approach has the advantage of lower computational cost, since the cost of reverse channel coding grows exponentially with the number of bits communicated at once. Furthermore, the model's score function only needs to be evaluated once per diffusion step to compute a conditional mean, while the latter approach requires many more evaluations (one for each candidate considered by the reverse channel coding scheme). Fig. 10 shows that this approach, which is already much more practical, still significantly outperforms HiFiC. Another interesting avenue to consider is replacing the Gaussian q(z_t | Z_{t+1}, X) with a uniform distribution, which can be simulated very efficiently (e.g., Zamir and Feder, 1996; Agustsson and Theis, 2020). We provided an initial theoretical analysis of DiffC. In particular, we analyzed the Gaussian case and proved that DiffC-A* performs well when either the data distribution is close to Gaussian or when the bitrate is high. Specifically, the rate of DiffC-A* approaches R(D/2) at high bitrates. We further proved that DiffC-F can achieve 3 dB better SNR at high bitrates compared to DiffC-A. Taken together, these results suggest that R(D) may be achievable at high bitrates where current approaches based on nonlinear transform coding can only achieve R(D/2). However, many theoretical questions have been left for future research. For instance, how does the performance of DiffC-A differ from DiffC-A*? And can we extend Theorem 3 to prove optimality of a flow-based reconstruction from noisy data for a broader class of distributions?

REPRODUCIBILITY STATEMENT

Our appendix provides extensive proofs of the claims made in our paper. Every effort has been made to make explicit the assumptions in our claims. The models used in our empirical results (VDM, HiFiC) are based on open source code. We further provide code to reproduce the results of Fig. 2 .

ETHICS STATEMENT

Lossy compression achieving perfect realism produces reconstructions which are indistinguishable from real data and which hide a loss of information. This property is desirable in applications such as video streaming for entertainment purposes but can be problematic in applications such as surveillance, medical imaging, or document compression where the image content influences critical decisions. The availability of better generative compression methods increases the risk of misuse in these areas.

A REVERSE CHANNEL CODING

For completeness, we here reproduce pseudocode by Theis and Yosri (2022) of the sampling scheme first considered by Maddison (2016) and later by Li and El Gamal (2018) for the purpose of reverse channel coding. Similar to rejection sampling, the encoding process accepts a candidate-generating distribution p, a target distribution q, and a bound on the density ratio

    w_min ≤ inf_z p(z)/q(z).    (21)

The encoding process returns an index N* (random due to the exponential noise) such that Z_{N*} follows the distribution q (Algorithm 1). Importantly, the algorithm produces an exact sample in a finite number of steps (Maddison, 2016). Furthermore, the coding cost of N* is bounded by (Li and El Gamal, 2018)

    H[N*] + 1 < I[X, Z] + log(I[X, Z] + 1) + 5    (22)

and this bound can be achieved by entropy encoding N* with a Zipf distribution p_λ(n) ∝ n^{−λ}, which has a single parameter

    λ = 1 + 1/(I[X, Z] + e⁻¹ log e + 1).    (23)

In practice, the coding cost for Gaussians may be significantly lower than this bound. In the encoding process, the function simulate(n, p) returns the nth candidate Z_n ∼ p (in practice, this would be achieved with a pseudo-random number generator, though we could also imagine a large list of previously generated and shared samples). The function exponential(1) produces a random sample from an exponential distribution with rate 1.
Unlike encoding, decoding is fast as it only amounts to selecting the right candidate once N * has been received (Algorithm 2).
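The scheme described above can be sketched in a few lines of Python. This is only an illustrative sketch under our own naming conventions (`encode`, `decode`, `candidate`); the Gaussian example in the usage note below is ours and not from the paper. The early-termination test is valid because every future candidate's score is at least $t \cdot w_{\min}$:

```python
import numpy as np

def encode(q_density, p_density, candidate, w_min, rng, max_iters=100_000):
    # Greedy search over shared candidates z_n ~ p. The selected z_{n*}
    # is an exact sample from q (Maddison, 2016; Li and El Gamal, 2018).
    t, best_score, best_n = 0.0, np.inf, None
    for n in range(1, max_iters + 1):
        t += rng.exponential(1.0)        # exponential(1) arrivals
        if best_score <= t * w_min:      # no later candidate can win
            break
        z = candidate(n)                 # simulate(n, p): shared randomness
        score = t * p_density(z) / q_density(z)
        if score < best_score:
            best_score, best_n = score, n
    return best_n

def decode(n_star, candidate):
    # Decoding only regenerates the n*-th shared candidate.
    return candidate(n_star)
```

For example, with $p = \mathcal{N}(0,1)$ and $q = \mathcal{N}(0.5, 0.5^2)$ one can verify that $\inf_z p(z)/q(z) = 0.5\,e^{-1/6} \approx 0.423$, so $w_{\min} = 0.42$ is a valid bound, and repeated encode/decode runs produce samples whose empirical mean and standard deviation match $q$.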

B NORMAL DISTRIBUTION WITH NON-UNIT VARIANCE

The case of a 1-dimensional Gaussian with varying variance is essentially the same as for a standard normal. In both cases, a Gaussian source is communicated through a Gaussian channel. Let $X \sim \mathcal{N}(0, \lambda)$ and
$$Z = \sqrt{1 - \sigma^2}\,X + \sigma U$$
where as before $U \sim \mathcal{N}(0, 1)$. Let $\hat X \sim P(X \mid Z)$. We define
$$\tilde\sigma^2 = \frac{\sigma^2}{\sigma^2 + \lambda - \lambda\sigma^2}.$$
Then
$$I[X, Z] = h[Z] - h[Z \mid X] = \tfrac{1}{2}\log(\lambda - \sigma^2\lambda + \sigma^2) - \tfrac{1}{2}\log\sigma^2 = -\log\tilde\sigma$$
and
$$D = E[(X - \hat X)^2] = \lambda\,E\big[(\lambda^{-\frac{1}{2}}X - \lambda^{-\frac{1}{2}}\hat X)^2\big] = 2\lambda\tilde\sigma^2 \quad (27)$$
due to Eq. 10, which tells us the squared error of the standard normal $\lambda^{-\frac{1}{2}}X$ as a function of the information contained in $Z$. Taken together, we again have
$$I[X, Z] = \tfrac{1}{2}\log\frac{\lambda}{D/2} = R^*(D/2).$$

C MULTIVARIATE GAUSSIAN

Let $X \sim \mathcal{N}(0, \Sigma)$ and let
$$Z = \sqrt{1 - \sigma^2}\,X + \sigma U \quad (29)$$
where $U \sim \mathcal{N}(0, I)$. Note that both the mutual information and the squared error are invariant under rotations of $X$. We can therefore assume that the covariance is diagonal, $\Sigma = \operatorname{diag}(\lambda_1, \ldots, \lambda_M)$. Defining
$$\tilde\sigma_i^2 = \frac{\sigma^2}{\sigma^2 + \lambda_i - \lambda_i\sigma^2}$$
as in Appendix B, we have
$$D = E[\|X - \hat X\|^2] = \sum_i E[(X_i - \hat X_i)^2] = 2\sum_i \lambda_i\tilde\sigma_i^2$$
and
$$I[X, Z] = \sum_i I[X_i, Z_i] = -\tfrac{1}{2}\sum_i \log\tilde\sigma_i^2.$$
Let $\tilde D_i = \lambda_i\tilde\sigma_i^2$; then we can write
$$D = 2\sum_i \tilde D_i, \qquad I[X, Z] = \tfrac{1}{2}\sum_i \log\frac{\lambda_i}{\tilde D_i}. \quad (34)$$
For fixed distortion $D$, the rate in Eq. 34 as a function of the $\tilde D_i$ is known to be minimized by the so-called reverse water-filling solution given in Eq. 8 (Shannon, 1949; Cover and Thomas, 2006), that is, $D_i = \min(\lambda_i, \theta)$ where $\theta$ must be chosen such that $D = 2\sum_i D_i$. Hence, the $\tilde D_i$ as defined above are generally suboptimal and we must have
$$I[X, Z] \geq \tfrac{1}{2}\sum_i \log\frac{\lambda_i}{D_i} = R^*(D/2).$$

D OPTIMAL NOISE SCHEDULE FOR FLOW-BASED RECONSTRUCTION

Lemma 1. Let $X \sim \mathcal{N}(0, \Sigma)$ with diagonal covariance matrix $\Sigma = \operatorname{diag}(\lambda_1, \ldots, \lambda_M)$ (36) and $\lambda_i > 0$. Further let
$$Z_i = \alpha_i X_i + \sqrt{1 - \alpha_i^2}\sqrt{\lambda_i}\,U_i, \qquad \hat X = Z,$$
where $U \sim \mathcal{N}(0, I)$ and
$$\alpha_i = \Big(\sqrt{\lambda_i^2 + \theta^2} - \theta\Big)/\lambda_i.$$
Then $\hat X$ achieves the minimal rate at distortion level $D = E[\|\hat X - X\|^2]$ among all reconstructions satisfying the realism constraint $\hat X \sim X$.

Proof.
The lowest rate achievable by a code (with access to a source of shared randomness) is (Theis and Wagner, 2021)
$$\inf_{\hat X} I[X, \hat X] \quad\text{subject to}\quad E[\|\hat X - X\|^2] \leq D \quad\text{and}\quad \hat X \sim X.$$
We can rewrite the rate as
$$\inf_{\hat X} I[X, \hat X] = \inf_{\hat X}\, h[X] + h[\hat X] - h[X, \hat X] = 2h[X] - \sup_{\hat X} h[X, \hat X].$$
That is, we need to maximize the differential entropy of $(X, \hat X)$ subject to constraints. For the distortion constraint, we have
$$E[\|\hat X - X\|^2] = E[X^\top X] + E[\hat X^\top \hat X] - 2E[X^\top \hat X] = 2\sum_i \lambda_i - 2\sum_i E[X_i \hat X_i] \leq D.$$
We will first relax the realism constraint to the weaker constraints below and then show that the solution also satisfies the stronger realism constraint:
$$E[\hat X] = 0, \qquad E[\hat X_i^2] = \lambda_i. \quad (42)$$
Consider relaxing the problem even further and jointly optimizing over both $X$ and $\hat X$ with constraints on the first and second moments of both random variables. The joint maximum entropy distribution then takes the form
$$p(x, \hat x) \propto \exp\Big(\sum_i \beta_i x_i + \sum_i \gamma_i \hat x_i + \sum_i \mu_i x_i^2 + \sum_i \nu_i \hat x_i^2 + \sum_i \zeta_i x_i \hat x_i\Big),$$
that is, it is Gaussian with a precision matrix which has zeros everywhere except the diagonal and the off-diagonals corresponding to the interactions between $X_i$ and $\hat X_i$. In other words, the joint precision matrix $S$ consists of four blocks where each block is a diagonal matrix. It is not difficult to see by blockwise inversion that then the covariance matrix $C = S^{-1}$ must have the same structure. Let $C_{X\hat X}$ be the diagonal matrix corresponding to the covariance between $X$ and $\hat X$. We need to maximize
$$h[X, \hat X] + \text{const} \propto \ln|C| \quad (44)$$
$$= \ln\big|C_{XX} C_{\hat X\hat X} - C_{X\hat X} C_{X\hat X}\big| \quad (45)$$
$$= \sum_i \ln\big(\lambda_i^2 - E[X_i\hat X_i]^2\big) \quad (46)$$
subject to
$$\sum_i E[X_i\hat X_i] \geq \sum_i \lambda_i - \frac{D}{2}.$$
Let $c_i = E[X_i\hat X_i]$ and form the Lagrangian
$$\mathcal{L}(c, \eta, \mu) = \frac{1}{2}\sum_i \ln(\lambda_i^2 - c_i^2) + \eta\Big(\sum_i c_i - \sum_i \lambda_i + \frac{D}{2}\Big) + \sum_i \mu_i c_i,$$
where the last term is due to the constraint $c_i \geq 0$. The KKT conditions are
$$\frac{\partial\mathcal{L}}{\partial c_i} = -\frac{c_i}{\lambda_i^2 - c_i^2} + \eta + \mu_i = 0, \quad (49)$$
$$\eta\Big(\sum_i c_i - \sum_i \lambda_i + \frac{D}{2}\Big) = 0, \qquad \mu_i c_i = 0, \quad (50)$$
$$\sum_i c_i - \sum_i \lambda_i + \frac{D}{2} \geq 0, \qquad c_i \geq 0, \quad (51)$$
$$\mu_i \geq 0, \qquad \eta \geq 0, \quad (52)$$
yielding $c_i = 0$ or
$$c_i = \sqrt{\lambda_i^2 + \frac{1}{4\eta^2}} - \frac{1}{2\eta}.$$
If $c_i = 0$ for some $i$, then $\mu_i = -\eta$ (Eq. 49) and therefore $\mu_i = \eta = 0$ by Eq. 52. But then $c_i = 0$ by Eq. 49 for all $i$. By Eq. 51, we must then have $D \geq 2\sum_i \lambda_i$. This implies that whenever the distortion satisfies $D < 2\sum_i \lambda_i$, we must have $c_i > 0$ for all $i$. Defining $\theta = (2\eta)^{-1}$ gives that for $\hat X$ to be optimal, we must have
$$E[X_i\hat X_i] = c_i = \sqrt{\lambda_i^2 + \theta^2} - \theta \quad (55)$$
for some $\theta$ determined by $D$. Summarizing what we have so far, we have shown that (under relaxed realism constraints) the rate is minimized by a random variable $\hat X$ jointly Gaussian with $X$ and a covariance matrix whose entries are zero except those specified by Eqs. 42 and 55. Since the marginal distribution of $\hat X$ is Gaussian with the desired mean and covariance, it also satisfies the stronger realism constraint $\hat X \sim X$. On the other hand, the reconstruction $\hat X = Z$ defined in the statement of the lemma satisfies
$$E[X_i\hat X_i] = E[X_i Z_i] = E\big[\alpha_i X_i^2 + \sqrt{1 - \alpha_i^2}\sqrt{\lambda_i}\,X_i U_i\big] = \alpha_i E[X_i^2] = \alpha_i\lambda_i = \sqrt{\lambda_i^2 + \theta^2} - \theta,$$
that is, it has the desired property. Thus, $\hat X$ minimizes the rate at any given level of distortion.
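A quick Monte Carlo sanity check of Lemma 1 can be run in a few lines. The eigenvalues `lam` and the water-filling parameter `theta` below are arbitrary example values (our assumption, not from the paper); the check verifies that $\hat X = Z$ is marginally distributed like $X$, has the optimal correlations $c_i = \sqrt{\lambda_i^2 + \theta^2} - \theta$, and attains the distortion $D = 2\sum_i(\lambda_i - c_i)$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([4.0, 1.0, 0.25])   # hypothetical eigenvalues of Sigma
theta = 0.5                        # hypothetical water-filling parameter

# alpha_i from Lemma 1 and the predicted optimal correlations c_i
alpha = (np.sqrt(lam**2 + theta**2) - theta) / lam
c = np.sqrt(lam**2 + theta**2) - theta

n = 400_000
X = rng.normal(size=(n, 3)) * np.sqrt(lam)
U = rng.normal(size=(n, 3))
Xhat = alpha * X + np.sqrt((1.0 - alpha**2) * lam) * U   # Xhat = Z

# predicted distortion: E[(Xhat_i - X_i)^2] = 2*lam_i - 2*c_i
D_pred = 2.0 * np.sum(lam - c)
```

Empirically, `Xhat.var(axis=0)` matches `lam` (realism), `(X * Xhat).mean(axis=0)` matches `c`, and the average of `((X - Xhat)**2).sum(axis=1)` matches `D_pred` up to Monte Carlo error.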

E PROOF OF THEOREM 1

Lemma 2. Let $X : \Omega \to \mathbb{R}^M$ be a random variable with finite differential entropy and let $X^*$ be a Gaussian random variable with matching first and second-order moments. Let $R(D)$ be the rate-distortion function of $X$ and $R^*(D)$ be the rate-distortion function of $X^*$. Then
$$R^*(D) \leq R(D) + D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}].$$

Proof. Zamir and Feder (1996) proved the result for $M = 1$. We here extend the proof to $M > 1$. First, observe that
$$D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] = E[-\log p_{X^*}(X)] - h[X] \quad (62)$$
$$= E[-\log p_{X^*}(X^*)] - h[X] \quad (63)$$
$$= h[X^*] - h[X] \quad (64)$$
since $\log p_{X^*}$ is a quadratic form and $X$ and $X^*$ have matching moments. By the Shannon lower bound (Wu, 2016),
$$R(D) \geq h[X] - \frac{M}{2}\log\frac{2\pi e D}{M}. \quad (65)$$
Let $\lambda_1, \ldots, \lambda_M$ be the eigenvalues of the covariance of $X$. Then by Eqs. 64 and 65 we have
$$R(D) \geq h[X^*] - D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] - \frac{M}{2}\log\frac{2\pi e D}{M} \quad (66)$$
$$= \sum_i \frac{1}{2}\log\frac{\lambda_i}{D/M} - D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] \quad (67)$$
$$\geq \sum_i \frac{1}{2}\log\frac{\lambda_i}{D_i} - D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] \quad (68)$$
$$= R^*(D) - D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}],$$
where $D_i = \min(\theta, \lambda_i)$ and $\theta$ is such that $D = \sum_i D_i$. The inequality follows from the optimality of the water-filling solution given by the $D_i$ (Shannon, 1949; Cover and Thomas, 2006). Bringing the KL divergence to the other side of the inequality gives the desired result.

Theorem 1. Let $X : \Omega \to \mathbb{R}^M$ be a random variable with finite differential entropy, zero mean, and covariance $\operatorname{diag}(\lambda_1, \ldots, \lambda_M)$. Let $U \sim \mathcal{N}(0, I)$ and define
$$Z_i = \sqrt{1 - \gamma_i^2}\,X_i + \gamma_i\sqrt{\lambda_i}\,U_i, \qquad \hat X \sim P(X \mid Z),$$
where $\gamma_i^2 = \min(1, \theta/\lambda_i)$ for some $\theta$. Further, let $X^*$ be a Gaussian random variable with the same first and second-order moments as $X$ and let $Z^*$ be defined analogously to $Z$ but in terms of $X^*$. Then if $R$ is the rate-distortion function of $X$ and $R^*$ is the rate-distortion function of $X^*$,
$$I[X, Z] \leq R^*(D/2) - D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}] \quad (71)$$
$$\leq R(D/2) + D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] - D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}] \quad (72)$$
where $D = E[\|\hat X - X\|^2]$.

Proof.
We have
$$D_i = E[(X_i - \hat X_i)^2] \quad (73)$$
$$= E[(X_i - E[X_i \mid Z] + E[X_i \mid Z] - \hat X_i)^2] \quad (74)$$
$$= E[(X_i - E[X_i \mid Z])^2] + E[(E[X_i \mid Z] - \hat X_i)^2] - 2E_Z\big[E[X_i - E[X_i \mid Z] \mid Z]\,E[E[X_i \mid Z] - \hat X_i \mid Z]\big] \quad (75\text{--}77)$$
$$= 2E[(X_i - E[X_i \mid Z])^2] \quad (78)$$
$$\leq 2E\big[(X_i - \sqrt{1 - \gamma_i^2}\,Z_i)^2\big] \quad (79)$$
$$= 2E\big[\big(\gamma_i^2 X_i - \sqrt{1 - \gamma_i^2}\,\gamma_i\sqrt{\lambda_i}\,U_i\big)^2\big] \quad (80)$$
$$= 2\big(\operatorname{Var}[\gamma_i^2 X_i] + \operatorname{Var}[\sqrt{1 - \gamma_i^2}\,\gamma_i\sqrt{\lambda_i}\,U_i]\big) \quad (81)$$
$$= 2\big(\gamma_i^4\lambda_i + (1 - \gamma_i^2)\gamma_i^2\lambda_i\big) = 2\gamma_i^2\lambda_i, \quad (82)$$
where in Eq. 77 we used that $X \perp\!\!\!\perp \hat X \mid Z$ (so the cross term vanishes, its first factor being zero) and in Eq. 78 we used that $(X, Z) \sim (\hat X, Z)$. Eq. 79 follows because the conditional expectation minimizes the squared error among all estimators of $X_i$. For the overall distortion, we therefore have
$$D = E[\|\hat X - X\|^2] = \sum_i D_i \leq 2\sum_i \gamma_i^2\lambda_i = D^*.$$
Note that $D^*$ is the distortion we would have obtained if $X$ were Gaussian (Eq. 14). Define
$$V_i = (1 - \gamma_i^2)^{-\frac{1}{2}}\gamma_i\sqrt{\lambda_i}\,U_i \qquad\text{and}\qquad Y_i = (1 - \gamma_i^2)^{-\frac{1}{2}}Z_i = X_i + V_i,$$
and let $Y^*$ be the Gaussian random variable defined analogously to $Y$ except in terms of $Z^*$ instead of $Z$. To obtain the rate, observe that
$$I[X^*, Z^*] = I[X^*, Y^*] \quad (86)$$
$$= I[X^*, X^* + V] \quad (87)$$
$$= h[X^* + V] - h[X^* + V \mid X^*] \quad (88)$$
$$= h[X^* + V] - h[V] \quad (89)$$
$$= h[X + V] - h[V] + h[X^* + V] - h[X + V] \quad (90)$$
$$= h[X + V] - h[X + V \mid X] + h[X^* + V] - h[X + V] \quad (91)$$
$$= I[X, X + V] - E[\log p_{Y^*}(Y^*)] + E[\log p_Y(Y)] \quad (92)$$
$$= I[X, Y] - E[\log p_{Y^*}(Y^*)] + E[\log p_Y(Y)] \quad (93)$$
$$= I[X, Y] - E[\log p_{Y^*}(Y)] + E[\log p_Y(Y)] \quad (94)$$
$$= I[X, Y] + D_{\mathrm{KL}}[P_Y \,\|\, P_{Y^*}] \quad (95)$$
$$= I[X, Z] + D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}], \quad (96)$$
where the first and last steps follow from the invariance of mutual information and KL divergence under invertible transformations, and Eq. 94 follows because $\log p_{Y^*}$ is a quadratic form and $Y$ and $Y^*$ have matching moments.
Finally,
$$I[X, Z] = I[X^*, Z^*] - D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}] \quad (97)$$
$$= R^*(D^*/2) - D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}] \quad (98)$$
$$\leq R^*(D/2) - D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}] \quad (99)$$
$$\leq R(D/2) + D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] - D_{\mathrm{KL}}[P_Z \,\|\, P_{Z^*}], \quad (100)$$
where the second equality is due to Eq. 14, the first inequality is due to $D \leq D^*$, and the second inequality follows from the Shannon lower bound and Lemma 2.
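The identity $D_{\mathrm{KL}}[P_X \,\|\, P_{X^*}] = h[X^*] - h[X]$ (Eqs. 62–64) is easy to verify numerically. The Laplace source below is our own arbitrary choice for illustration (not from the paper); all logarithms are natural, so entropies are in nats:

```python
import numpy as np

# X ~ Laplace(0, b) has h[X] = 1 + ln(2b); the moment-matched Gaussian
# X* ~ N(0, 2b^2) has h[X*] = 0.5 * ln(2*pi*e * 2*b^2).
b = 1.0
h_X = 1.0 + np.log(2.0 * b)
h_Xstar = 0.5 * np.log(2.0 * np.pi * np.e * 2.0 * b**2)

# Monte Carlo estimate of KL[P_X || P_X*] = E[-log p_X*(X)] - h[X]
rng = np.random.default_rng(0)
x = rng.laplace(0.0, b, size=1_000_000)
neg_log_pstar = 0.5 * np.log(2.0 * np.pi * 2.0 * b**2) + x**2 / (4.0 * b**2)
kl_estimate = neg_log_pstar.mean() - h_X
```

The Monte Carlo estimate `kl_estimate` agrees with the analytic entropy gap `h_Xstar - h_X` (about 0.07 nats for $b = 1$).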

F PROOF OF THEOREM 2

Theorem 2 compares the reconstruction error of $\hat X_A \sim P(X \mid Z_t)$ with $\hat X_F = \hat Z_0$, where $\hat Z_0$ is the solution of
$$dz_t = \Big(-\tfrac{1}{2}\beta_t z_t - \tfrac{1}{2}\beta_t\nabla\ln p_t(z_t)\Big)\,dt$$
given $Z_t = \sqrt{1 - \sigma_t^2}\,X + \sigma_t U$ as initial condition. To derive the following results, it is often more convenient to work with the "variance exploding" diffusion process
$$Y_t = (1 - \sigma_t^2)^{-\frac{1}{2}}Z_t \sim X + (1 - \sigma_t^2)^{-\frac{1}{2}}\sigma_t U = X + \eta_t U$$
instead of the "variance preserving" process $Z_t$ (Song et al., 2021). We use $\tilde p_t$ for the marginal density of $Y_t$ and reserve $p_t$ for the density of $Z_t$. We further define the following quantity for continuously differentiable densities, which can be viewed as a measure of the smoothness of a density:
$$G = E[\|\nabla\ln p(X)\|^2].$$
It can be shown that when $E[\|X\|^2] = M$, we have $G \geq M$ with equality when $X$ is isotropic Gaussian. That is, the isotropic Gaussian is the smoothest distribution (with a continuously differentiable density) as measured by $G$. We further define
$$G_t = E[\|\nabla_z\log p_t(Z_t)\|^2] \qquad\text{and}\qquad \tilde G_t = E[\|\nabla_y\log\tilde p_t(Y_t)\|^2],$$
which are linked by the chain rule, $\tilde G_t = (1 - \sigma_t^2)G_t$. Using a trained diffusion model, we can obtain estimates of $G_t$ for ImageNet 64x64, which are shown in Fig. 5. By Lemma 3 (below), $\nabla_y\ln\tilde p_t(y_t) = E[\nabla\ln\tilde p_0(Y_t - \eta_t U) \mid Y_t = y_t]$, and therefore
$$\tilde G_t = E[\|\nabla_y\ln\tilde p_t(Y_t)\|^2] \quad (110)$$
$$= E\big[\|E[\nabla\ln\tilde p_0(Y_t - \eta_t U) \mid Y_t]\|^2\big] \quad (111)$$
$$\leq E\big[E[\|\nabla\ln\tilde p_0(Y_t - \eta_t U)\|^2 \mid Y_t]\big] \quad (112)$$
$$= E[\|\nabla_x\ln\tilde p_0(X)\|^2] \quad (113)$$
$$= G_0$$
due to Jensen's inequality. We will also need the following known and useful identity, which is a special case of Tweedie's formula (Robbins, 1956). We include a derivation for completeness.

Lemma 4. Let $X$ have a density and let $Y_t$, $\tilde p_t$, and $\eta_t$ be defined as above. Then
$$E[X \mid y_t] = y_t + \eta_t^2\,\nabla\ln\tilde p_t(y_t).$$

Proof.
$$\nabla_y\log\tilde p_t(y_t) = \int p(x \mid y_t)\,\nabla_y\log\tilde p_t(y_t)\,dx \quad (116)$$
$$= \int p(x \mid y_t)\,\nabla_y\log\frac{\tilde p_t(y_t)\,p(x \mid y_t)}{p(x \mid y_t)}\,dx \quad (117)$$
$$= \int p(x \mid y_t)\,\nabla_y\log\big(p(y_t \mid x)\,p(x)\big)\,dx - \int p(x \mid y_t)\,\nabla_y\log p(x \mid y_t)\,dx \quad (118\text{--}119)$$
$$= \int p(x \mid y_t)\,\nabla_y\log p(y_t \mid x)\,dx \quad (120)$$
$$= \int p(x \mid y_t)\,\nabla_y\log\mathcal{N}(y_t;\, x,\, \eta_t^2 I)\,dx \quad (121)$$
$$= \int p(x \mid y_t)\,\frac{1}{\eta_t^2}(x - y_t)\,dx \quad (122)$$
$$= \frac{1}{\eta_t^2}\big(E[X \mid y_t] - y_t\big),$$
where the second integral in Eq. 119 vanishes since $\int p(x \mid y_t)\,\nabla_y\log p(x \mid y_t)\,dx = \nabla_y\int p(x \mid y_t)\,dx = 0$.

The following three lemmas relate the reconstruction errors of $\hat X_A$ and $\hat X_F$ to the smoothness of the source distribution as measured by $G_0$ and $G_t$.

Lemma 5. Let $X$ have a density and let $\hat X_A$, $\eta_t$, and $G_t$ be defined as above. Then
$$E[\|\hat X_A - X\|^2] = 2\eta_t^2 M - 2\eta_t^4\tilde G_t \quad (124)$$
$$= 2\eta_t^2 M - 2\eta_t^4(1 - \sigma_t^2)G_t. \quad (125)$$

Proof.
$$E[\|\hat X_A - X\|^2] = E[\|\hat X_A - Y_t + Y_t - X\|^2] \quad (126)$$
$$= E[\|\hat X_A - Y_t\|^2] + E[\|Y_t - X\|^2] + 2E\big[E[\hat X_A - Y_t \mid Y_t]^\top E[Y_t - X \mid Y_t]\big] \quad (127\text{--}129)$$
$$= E[\|X - Y_t\|^2] + E[\|Y_t - X\|^2] + 2E\big[E[X - Y_t \mid Y_t]^\top E[Y_t - X \mid Y_t]\big] \quad (130\text{--}131)$$
$$= 2E[\|Y_t - X\|^2] - 2E\big[\|E[Y_t - X \mid Y_t]\|^2\big] \quad (132)$$
$$= 2E[\|\eta_t U\|^2] - 2E\big[\|Y_t - E[X \mid Y_t]\|^2\big] \quad (133)$$
$$= 2\eta_t^2 M - 2E\big[\|\eta_t^2\nabla_y\ln\tilde p_t(Y_t)\|^2\big] \quad (134)$$
$$= 2\eta_t^2 M - 2\eta_t^4(1 - \sigma_t^2)G_t. \quad (135)$$

Lemma 6. Let $X$ have a density and let $\hat X_F$, $\eta_t$, and $G_0$ be defined as above. Then
$$E[\|\hat X_F - Y_t\|^2] \leq \tfrac{1}{4}\eta_t^4 G_0.$$

Proof. We have
$$E[\|\hat X_F - Y_t\|^2] = E[\|\tilde F_t^{-1}(Y_t) - Y_t\|^2] = E[\|X - \tilde F_t(X)\|^2],$$
where $\tilde F_t$ is the invertible function which maps $x$ to $y_t$ according to the ODE
$$dy_t = -\alpha_t\nabla\ln\tilde p_t(y_t)\,dt, \qquad y_0 = x,$$
where $\alpha_t$ relates to $\eta_t$ as follows: $\eta_t^2 = 2\int_0^t \alpha_\tau\,d\tau$. Different schedules are equivalent up to reparametrization of the time parameter (Kingma et al., 2021). For now, assume the parametrization $\alpha_t = 1$ (or $\eta_t = \sqrt{2t}$). Integrating the above ODE then yields
$$y_t = \tilde F_t(x) = x - \int_0^t \nabla\ln\tilde p_\tau(y_\tau)\,d\tau.$$
Consider the following Riemann sum approximation of $\tilde F_t$:
$$\tilde F_{t,N}(x) = x - \sum_{n=0}^{N-1}\frac{t}{N}\nabla\ln\tilde p_{t_n}(y_{t_n}),$$
where $t_n = nt/N$ and $y_{t_n} = \tilde F_{t_n}(x)$.
Since the gradient of the log-density is continuous and the integral is over a compact interval, the partial derivatives are bounded inside the interval and the Riemann sum converges, $\tilde F_t(x) = \lim_{N\to\infty}\tilde F_{t,N}(x)$. Thus,
$$E[\|X - \tilde F_t(X)\|^2] = E\Big[\big\|X - \lim_{N\to\infty}\tilde F_{t,N}(X)\big\|^2\Big] \quad (144)$$
$$= E\bigg[\lim_{N\to\infty}\Big\|\sum_{n=0}^{N-1}\frac{t}{N}\nabla\ln\tilde p_{t_n}(Y_{t_n})\Big\|^2\bigg] \quad (145\text{--}146)$$
$$\leq E\bigg[\lim_{N\to\infty}\sum_{n=0}^{N-1}\frac{1}{N}\big\|t\,\nabla\ln\tilde p_{t_n}(Y_{t_n})\big\|^2\bigg] \quad (147)$$
$$= \lim_{N\to\infty}\frac{t^2}{N}\sum_{n=0}^{N-1}E\big[\|\nabla\ln\tilde p_{t_n}(Y_{t_n})\|^2\big] \quad (148)$$
$$= \lim_{N\to\infty}\frac{t^2}{N}\sum_{n=0}^{N-1}\tilde G_{t_n} \quad (149)$$
$$\leq \lim_{N\to\infty}\frac{t^2}{N}\sum_{n=0}^{N-1}G_0 \quad (150)$$
$$= t^2 G_0 = \tfrac{1}{4}\eta_t^4 G_0. \quad (151\text{--}152)$$
Eq. 147 again uses Jensen's inequality. Eq. 148 (swapping limit and expectation) follows from the dominated convergence theorem, since each element of the sequence is bounded by $t^2 G_0$ (Lemma 3).

Lemma 7. Let $X$ have a smooth density and let $\hat X_F$, $\eta_t$, and $G_0$ be defined as above. Then
$$E[\|\hat X_F - X\|^2] \leq \eta_t^2 M + \tfrac{1}{2}\eta_t^4 G_0 + 2\eta_t^4(1 - \sigma_t^2)G_t.$$

Proof.
$$E[\|\hat X_F - X\|^2] = E[\|\hat X_F - E[X \mid Y_t] + E[X \mid Y_t] - X\|^2] \quad (154)$$
$$= E[\|\hat X_F - E[X \mid Y_t]\|^2] + E[\|E[X \mid Y_t] - X\|^2] + 0 \quad (155)$$
$$\leq E[\|\hat X_F - E[X \mid Y_t]\|^2] + E[\|Y_t - X\|^2] \quad (156)$$
$$= E\big[\|\hat X_F - Y_t - \eta_t^2\nabla\ln\tilde p_t(Y_t)\|^2\big] + E[\|\eta_t U\|^2] \quad (157)$$
$$\leq 2E[\|\hat X_F - Y_t\|^2] + 2E\big[\|\eta_t^2\nabla\ln\tilde p_t(Y_t)\|^2\big] + \eta_t^2 M \quad (158\text{--}159)$$
$$= 2E[\|\hat X_F - Y_t\|^2] + 2\eta_t^4(1 - \sigma_t^2)G_t + \eta_t^2 M \quad (160)$$
$$\leq \tfrac{1}{2}\eta_t^4 G_0 + 2\eta_t^4(1 - \sigma_t^2)G_t + \eta_t^2 M,$$
where the first inequality is due to the conditional expectation minimizing the squared error, the second inequality is due to Jensen's inequality, and the last inequality is due to Lemma 6. We are finally in a position to prove Theorem 2.

Theorem 2. Let $X : \Omega \to \mathbb{R}^M$ have a smooth density $p$ with finite $G = E[\|\nabla\ln p(X)\|^2]$. Let $Z_t = \sqrt{1 - \sigma_t^2}\,X + \sigma_t U$ with $U \sim \mathcal{N}(0, I)$. Let $\hat X_A \sim P(X \mid Z_t)$ and let $\hat X_F = \hat Z_0$ be the solution to Eq. 6 with $Z_t$ as initial condition. Then
$$\lim_{\sigma_t\to 0}\frac{E[\|\hat X_F - X\|^2]}{E[\|\hat X_A - X\|^2]} = \frac{1}{2}. \quad (163)$$

Proof.
The limit is to be understood as the one-sided limit from above. We have
$$\lim_{\sigma_t\to 0}\frac{E[\|\hat X_F - X\|^2]}{E[\|\hat X_A - X\|^2]} \leq \lim_{\sigma_t\to 0}\frac{\eta_t^2 M + \tfrac{1}{2}\eta_t^4 G_0 + 2\eta_t^4(1 - \sigma_t^2)G_t}{2\eta_t^2 M - 2\eta_t^4(1 - \sigma_t^2)G_t} \quad (164)$$
$$\leq \lim_{\eta_t\to 0}\frac{\eta_t^2 M + \tfrac{1}{2}\eta_t^4 G_0 + 2\eta_t^4 G_0}{2\eta_t^2 M - 2\eta_t^4 G_0} \quad (165)$$
$$= \lim_{\eta_t\to 0}\frac{2\eta_t M + 2\eta_t^3 G_0 + 8\eta_t^3 G_0}{4\eta_t M - 8\eta_t^3 G_0} \quad (166)$$
$$= \lim_{\eta_t\to 0}\frac{2M + 6\eta_t^2 G_0 + 24\eta_t^2 G_0}{4M - 24\eta_t^2 G_0} \quad (167)$$
$$= \frac{2M}{4M} = \frac{1}{2}, \quad (168)$$
where the first inequality follows from Lemmas 5 and 7, the second inequality is due to Lemma 3, and we applied L'Hôpital's rule twice.
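The one-dimensional Gaussian case can be worked out in closed form and makes the 3 dB gap concrete. This is a sketch under our own assumptions (the closed-form flow map below is specific to Gaussian $X$ and is not an implementation of the paper's models): for $X \sim \mathcal{N}(0,1)$ and $Y = X + \eta U$, the posterior is $\mathcal{N}(Y/(1+\eta^2),\, \eta^2/(1+\eta^2))$ and the probability flow map is the rescaling $\hat X_F = Y/\sqrt{1+\eta^2}$:

```python
import numpy as np

def ancestral_mse(eta):
    # E[(Xhat_A - X)^2] = 2 * posterior variance = 2*eta^2 / (1 + eta^2)
    return 2.0 * eta**2 / (1.0 + eta**2)

def flow_mse(eta):
    # Xhat_F = Y / sqrt(1 + eta^2) gives E[(Xhat_F - X)^2] = 2 - 2/sqrt(1+eta^2)
    return 2.0 - 2.0 / np.sqrt(1.0 + eta**2)

# Monte Carlo check of the flow formula at eta = 0.5
rng = np.random.default_rng(0)
eta = 0.5
X = rng.normal(size=500_000)
Y = X + eta * rng.normal(size=500_000)
mc_flow = np.mean((Y / np.sqrt(1.0 + eta**2) - X) ** 2)
```

As $\eta \to 0$ the ratio `flow_mse(eta) / ancestral_mse(eta)` decreases monotonically toward $1/2$, i.e., a $10\log_{10}(2) \approx 3$ dB gain for the flow-based reconstruction.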

G PROOF OF THEOREM 3

Theorem 3. Let $X = QS$ where $Q$ is an orthogonal matrix and $S : \Omega \to \mathbb{R}^M$ is a random vector with a smooth density and $S_i \perp\!\!\!\perp S_j$ for all $i \neq j$. Define
$$Z_t = \sqrt{1 - \sigma_t^2}\,X + \sigma_t U \quad\text{where}\quad U \sim \mathcal{N}(0, I). \quad (170)$$
If $\hat X_F = \hat Z_0$ is the solution to the ODE in Eq. 6 given $Z_t$ as initial condition, then
$$E[\|\hat X_F - X\|^2] \leq E[\|\hat X - X\|^2] \quad (171)$$
for any $\hat X$ with $\hat X \perp\!\!\!\perp X \mid Z_t$ which achieves perfect realism, $\hat X \sim X$.

Proof. Define the variance-exploding diffusion process as $dY_t = \zeta_t\,dW_t$ with
$$\int_0^t \zeta_\tau^2\,d\tau = (1 - \sigma_t^2)^{-1}\sigma_t^2 = \eta_t^2 \quad (172)$$
so that
$$Y_t = (1 - \sigma_t^2)^{-\frac{1}{2}}Z_t \sim X + (1 - \sigma_t^2)^{-\frac{1}{2}}\sigma_t U = X + \eta_t U. \quad (173)$$
Further define $F_t$ as the function which maps $x$ to the solution of the ODE in Eq. 6 with starting condition $z_0 = x$. Then $F_t$ is invertible (Song et al., 2021) and we can write $\hat X_F = F_t^{-1}(Z_t)$. Further, let $\tilde F_t$ be the corresponding function for the variance-exploding process such that
$$\tilde F_t^{-1}(y) = F_t^{-1}\big(\sqrt{1 - \sigma_t^2}\,y\big), \qquad \hat X_F = \tilde F_t^{-1}(Y_t).$$
For arbitrary $\hat X$ with $\hat X \perp\!\!\!\perp X \mid Y_t$, we have
$$E[\|\hat X - X\|^2] = E[\|\hat X - E[X \mid Y_t] + E[X \mid Y_t] - X\|^2] \quad (175)$$
$$= E[\|\hat X - E[X \mid Y_t]\|^2] + E[\|E[X \mid Y_t] - X\|^2] + 2E_{Y_t}\big[E[\hat X - E[X \mid Y_t] \mid Y_t]^\top E[E[X \mid Y_t] - X \mid Y_t]\big] \quad (176\text{--}179)$$
$$= E[\|\hat X - E[X \mid Y_t]\|^2] + E[\|E[X \mid Y_t] - X\|^2], \quad (180)$$
since the inner expectation $E[E[X \mid Y_t] - X \mid Y_t] = 0$ and the cross term cancels. Define $\hat X_{\mathrm{MSE}} = \psi_t(Y_t) = E[X \mid Y_t]$. The second term does not depend on $\hat X$, so minimizing the distortion amounts to minimizing
$$\inf_{\hat X} E[\|\hat X - \hat X_{\mathrm{MSE}}\|^2],$$
where the infimum is over all random variables with the same marginal distribution as $X$ and which may depend on $\hat X_{\mathrm{MSE}}$ (or, equivalently, on $Y_t$). Assume $M = 1$ so that $X = S$. We first show that $\psi_t$ is then a monotone function of $y_t$.
The solution to this problem is known from transportation theory to be $\hat X^* = \Phi_0^{-1}(\Phi_t(\hat X_{\mathrm{MSE}}))$ (e.g., Kolouri et al., 2019), where $\Phi_0$ is the CDF of $X$, $\Phi_t$ is the CDF of $\hat X_{\mathrm{MSE}}$, and it is assumed that the measure of $X$ is absolutely continuous. To establish monotonicity of $\psi_t$, consider its derivative,
$$\psi_t'(y_t) = \frac{\partial}{\partial y}E[X \mid y_t].$$

Figure 9: Performance relative to an upper bound on the coding cost when progressively communicating information in chunks of $B$ bits using the approach of Li and El Gamal (2018). The coding cost is estimated as $\frac{C_t}{B}(B + \log(B + 1) + 5)$, where $C_t$ is the total amount of information sent (Eq. 5). At 10 bits the PSNR is comparable to our strongest baseline but the FID remains significantly lower.



The bitrate given by an information rate-distortion function may only be achievable asymptotically by encoding many data points jointly. To keep our discussion focused, we ignore any potential overhead incurred by one-shot coding and use mutual information as a proxy for the rate achieved in practice. https://github.com/google-research/vdm



Figure 1: A: A visualization of lossy compression with unconditional diffusion models. B: Bitrates (bits per pixel; black) and PSNR scores (red) of various approaches including JPEG (4:2:0, headerless) applied to images from the validation set of ImageNet 64x64. For more examples see Appendix I.

Figure 3: Top images visualize messages communicated at the estimated bitrate (bits per pixel) shown in black. The bottom row shows reconstructions produced by DiffC-F and corresponding PSNR values are shown in red.

Figure 4: A comparison of DiffC with BPG and the GAN-based neural compression method HiFiC in terms of FID and PSNR on ImageNet 64x64.

Figure 5: While $G_0$ is undefined for images with discretized pixels, we may instead consider the distribution of pixels with imperceptible Gaussian noise added to them. We can estimate the corresponding $G_t$ using the diffusion model; the results are shown in this plot. $G_t$ converges to $M$ as $\sigma_t$ approaches 1.

Figure 8: PSNR values in Section 5 were computed by calculating a PSNR score for each image and averaging. In contrast, this plot shows PSNR values corresponding to the average MSE.

Figure 10: This figure contains additional results for HiFiC trained from scratch for MSE only. We only targeted a single bitrate. The PSNR improves slightly while the FID score gets significantly worse.

Lemma 3. Diffusion increases the smoothness of a distribution: $\tilde G_t \leq G_0$.

Proof. We have
$$\nabla_y\ln\tilde p_t(y_t) = \int p(u \mid y_t)\,\nabla_y\ln\tilde p_0(y_t - \eta_t u)\,du = E\big[\nabla\ln\tilde p_0(X) \mid Y_t = y_t\big],$$
and therefore $\tilde G_t = E\big[\|E[\nabla\ln\tilde p_0(X) \mid Y_t]\|^2\big] \leq E[\|\nabla\ln\tilde p_0(X)\|^2] = G_0$ by Jensen's inequality.

Note that by Tweedie's formula (Lemma 4),
$$\psi_t'(y_t) = 1 + \eta_t^2\,\partial_y^2\ln\tilde p_t(y_t) = \eta_t^{-2}\operatorname{Var}[X \mid y_t],$$
where the expectation of $-\partial_y^2\ln\tilde p_t$ is the Fisher information of $Y_t$. Assume $\psi_t'(y_t) = 0$ for some $y_t$. Then $\operatorname{Var}[X \mid y_t] = 0$. This implies $X$ is almost surely constant given $y_t$, that is, $p(x \mid y_t)$ is a degenerate distribution. But this contradicts our assumption that $p(x)$ is smooth: since $p(y_t \mid x)$ is Gaussian with mean $x$ and therefore smooth as a function of $x$, $p(x \mid y_t) \propto p(x)p(y_t \mid x)$ must also be smooth. Hence, we must have $\psi_t'(y_t) > 0$ everywhere. Since $\psi_t(y_t)$ is strictly monotone, it is also invertible.
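The 1-D transport map $\Phi_0^{-1}(\Phi_t(\cdot))$ is easy to realize with empirical CDFs. Because ranks are invariant under the monotone $\psi_t$, applying the map to $Y_t$ is equivalent to applying it to $\hat X_{\mathrm{MSE}} = \psi_t(Y_t)$. The bimodal source below is an arbitrary illustration of ours, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, n = 0.5, 200_000

# hypothetical bimodal 1-D source, so the transport map is nonlinear
X = np.where(rng.random(n) < 0.5,
             rng.normal(-2.0, 0.5, n),
             rng.normal(2.0, 0.5, n))
Y = X + eta * rng.normal(size=n)   # Y_t = X + eta_t * U

# X* = Phi_0^{-1}(Phi_t(.)) via empirical CDFs (quantile matching);
# by rank invariance, applying it to Y equals applying it to psi_t(Y)
ranks = np.searchsorted(np.sort(Y), Y) / n                  # ~ Phi_t
Xstar = np.quantile(np.sort(X), np.clip(ranks, 0.0, 1.0))   # ~ Phi_0^{-1}
```

By construction `Xstar` has (approximately) the same marginal distribution as `X` while remaining a deterministic, monotone function of `Y`, which is exactly the realism-constrained reconstruction the proof describes.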


with respect to the Lebesgue measure. We have
$$P\big(\psi_t^{-1}(\hat X_{\mathrm{MSE}}) \leq \tilde F_t(x)\big) = P\big(\hat X_{\mathrm{MSE}} \leq \psi_t(\tilde F_t(x))\big), \quad (200)$$
implying that $\hat X_F = \tilde F_t^{-1}(Y_t)$ coincides with the transport solution $\Phi_0^{-1}(\Phi_t(\hat X_{\mathrm{MSE}}))$, and therefore that $\hat X_F$ is optimal. Since $E[\|\hat X_F - X\|^2]$ is invariant under the choice of $Q$, we can assume $Q = I$ without changing the results of our analysis, so that $X = S$ and $(X_i, Y_{ti}) \perp\!\!\!\perp (X_j, Y_{tj})$ for $i \neq j$. Since the score then decomposes over coordinates, the ODE (Eq. 6) can be decomposed into $M$ separate problems, for which we already know the solution is of the form $z_{ti} = \sqrt{1 - \sigma_t^2}\,y_{ti}$, so the one-dimensional result applies coordinate-wise. On the other hand, for any $\hat X$ achieving perfect realism, $\hat X \sim X$ implies $\hat X_i \sim X_i$ but not vice versa, so the constraint on the right-hand side is weaker and the second inequality follows. That is, $\hat X_F$ minimizes the squared error among all reconstructions achieving perfect realism. Eqs. 214 and 215 follow from our proof of the case $M = 1$.

H COMPUTE RESOURCES

Training VDM took about 13 days using 32 TPUv3 cores (https://cloud.google.com/tpu). No hyperparameter searches were performed to tune VDM for this paper. Training one HiFiC model took about 4 days using 2 V100 GPUs, and we trained 10 models targeting 5 different bitrates (with and without pretrained weights). A few additional training runs were performed for HiFiC to tune the architecture (reducing the stride) while targeting a single bitrate.

