WAVEGRAD: ESTIMATING GRADIENTS FOR WAVEFORM GENERATION

Abstract

This paper introduces WaveGrad, a conditional model for waveform generation which estimates gradients of the data density. The model is built on prior work on score matching and diffusion probabilistic models. It starts from a Gaussian white noise signal and iteratively refines the signal via a gradient-based sampler conditioned on the mel-spectrogram. WaveGrad offers a natural way to trade inference speed for sample quality by adjusting the number of refinement steps, and bridges the gap between non-autoregressive and autoregressive models in terms of audio quality. We find that it can generate high fidelity audio samples using as few as six iterations. Experiments reveal that WaveGrad generates high fidelity audio, outperforming adversarial non-autoregressive baselines and matching a strong likelihood-based autoregressive baseline while using fewer sequential operations. Audio samples are available at https://wavegrad.github.io/.

1. INTRODUCTION

Deep generative models have revolutionized speech synthesis (Oord et al., 2016; Sotelo et al., 2017; Wang et al., 2017; Biadsy et al., 2019; Jia et al., 2019; Vasquez & Lewis, 2019). Autoregressive models, in particular, have been popular for raw audio generation thanks to their tractable likelihoods, simple inference procedures, and high fidelity samples (Oord et al., 2016; Mehri et al., 2017; Kalchbrenner et al., 2018; Song et al., 2019; Valin & Skoglund, 2019). However, autoregressive models require a large number of sequential computations to generate an audio sample. This makes them challenging to deploy in real-world applications where faster-than-real-time generation is essential, such as digital voice assistants on smart speakers, even with specialized hardware.

There has been a plethora of research into non-autoregressive models for audio generation. These include normalizing flows such as inverse autoregressive flows (Oord et al., 2018; Ping et al., 2019), generative flows (Prenger et al., 2019; Kim et al., 2019), and continuous normalizing flows (Kim et al., 2020; Wu & Ling, 2020); implicit generative models such as generative adversarial networks (GANs) (Donahue et al., 2018; Engel et al., 2019; Kumar et al., 2019; Yamamoto et al., 2020; Bińkowski et al., 2020; Yang et al., 2020a;b; McCarthy & Ahmed, 2020) and models trained with an energy score (Gritsenko et al., 2020); variational auto-encoder models (Peng et al., 2020); as well as models inspired by digital signal processing (Ai & Ling, 2020; Engel et al., 2020) and by the speech production mechanism (Juvela et al., 2019; Wang et al., 2020). Although such models improve inference speed by requiring fewer sequential operations, they often yield lower quality samples than autoregressive models.

This paper introduces WaveGrad, a conditional generative model of waveform samples that estimates the gradients of the data log-density, as opposed to the density itself.
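To make the idea of sampling from estimated gradients of the log-density concrete, the following toy sketch runs unadjusted Langevin dynamics on a target whose score is known in closed form, a standard normal, where ∇_y log p(y) = −y. In WaveGrad the score is produced by a learned, mel-spectrogram-conditioned network; the `langevin_sample` function and all names here are illustrative, not from the paper.

```python
import numpy as np

def langevin_sample(score_fn, y0, step_size=0.01, n_steps=500, rng=None):
    """Unadjusted Langevin dynamics:
    y <- y + (step_size / 2) * score(y) + sqrt(step_size) * z,  z ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(n_steps):
        z = rng.standard_normal(y.shape)
        y = y + 0.5 * step_size * score_fn(y) + np.sqrt(step_size) * z
    return y

# Toy target: a standard normal, whose score is available in closed form.
score = lambda y: -y  # ∇_y log N(y; 0, I) = -y

# 2,000 independent 1-D chains run in parallel, all initialized at zero.
samples = langevin_sample(score, np.zeros(2000), rng=np.random.default_rng(0))
print(samples.mean(), samples.std())  # ≈ 0.0 and ≈ 1.0
```

With only the score function available (no densities, no normalizing constants), the chains converge to samples from the target distribution, which is exactly the property that score-based generative models exploit.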
WaveGrad is simple to train, and implicitly optimizes a weighted variational lower bound on the log-likelihood. WaveGrad is non-autoregressive, and requires only a constant number of generation steps during inference. Figure 1 visualizes the inference process of WaveGrad. WaveGrad builds on a class of generative models that emerges from learning the gradient of the data log-density, also known as the Stein score function (Hyvärinen, 2005; Vincent, 2011). During inference, one can rely on the gradient estimate of the data log-density and use gradient-based samplers (e.g., Langevin dynamics) to sample from the model (Song & Ermon, 2019). Promising results have been achieved on image synthesis (Song & Ermon, 2019; 2020) and shape generation (Cai et al., 2020). Closely related are diffusion probabilistic models (Sohl-Dickstein et al., 2015), which capture the output distribution through a Markov chain of latent variables. Although these models do not offer tractable likelihoods, one can optimize a (weighted) variational lower bound on the log-likelihood. The training objective can be reparameterized to resemble denoising score matching (Vincent, 2011), and can be interpreted as estimating the gradients of the data log-density. The model is non-autoregressive during inference, requiring only a constant number of generation steps: a Langevin dynamics-like sampler generates the output starting from Gaussian noise.

The key contributions of this paper are summarized as follows:

• WaveGrad combines recent techniques from score matching (Song et al., 2020; Song & Ermon, 2020) and diffusion probabilistic models (Sohl-Dickstein et al., 2015; Ho et al., 2020) to address conditional speech synthesis.

• We build and compare two variants of the WaveGrad model: (1) WaveGrad conditioned on a discrete refinement step index, following Ho et al. (2020); (2) WaveGrad conditioned on a continuous scalar indicating the noise level. We find the continuous variant more effective, in particular because, once trained, the model can use different numbers of refinement steps during inference. The proposed continuous noise schedule enables our model to use fewer inference iterations while maintaining the same quality (e.g., 6 vs. 50).

• We demonstrate that WaveGrad is capable of generating high fidelity audio samples, outperforming adversarial non-autoregressive models (Yamamoto et al., 2020; Kumar et al., 2019; Yang et al., 2020a; Bińkowski et al., 2020) and matching one of the best autoregressive models (Kalchbrenner et al., 2018) in terms of subjective naturalness, using as few as six refinement steps.

2. ESTIMATING GRADIENTS FOR WAVEFORM GENERATION

We begin with a brief review of the Stein score function, Langevin dynamics, and score matching. The Stein score function (Hyvärinen, 2005) is the gradient of the data log-density log p(y) with respect to the datapoint y:

s(y) = ∇_y log p(y).  (1)

Figure 1: A visualization of the WaveGrad inference process. Starting from Gaussian noise (n = 0), gradient-based sampling is applied using as few as 6 iterations to achieve high fidelity audio (n = 6). Left: signal after each step of a gradient-based sampler. Right: zoomed view of a 50 ms segment.
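The continuous-noise-level variant described above can be illustrated with a training-pair sketch. Under the Ho et al. (2020) reparameterization, a clean signal y₀ is corrupted as y = √ᾱ·y₀ + √(1−ᾱ)·ε with ε ~ N(0, I), and the network is trained to recover ε; conditioning on the continuous scalar √ᾱ (rather than a discrete step index) is an assumption consistent with the paper's description of the noise level. All names here are hypothetical, and the network is replaced by a placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

def training_example(y0, alpha_bar_min=1e-6, alpha_bar_max=0.9999):
    """Build one denoising training pair with a continuous noise level.

    Corrupts y0 as sqrt(a) * y0 + sqrt(1 - a) * eps, where a (ᾱ) is drawn
    continuously instead of being tied to a discrete refinement step index.
    The training target is the noise eps itself.
    """
    a = rng.uniform(alpha_bar_min, alpha_bar_max)  # continuous noise level ᾱ
    eps = rng.standard_normal(y0.shape)            # Gaussian noise to recover
    y_noisy = np.sqrt(a) * y0 + np.sqrt(1.0 - a) * eps
    return y_noisy, np.sqrt(a), eps                # inputs: (signal, level); target: eps

y0 = rng.standard_normal(16)                       # stand-in for a waveform segment
y_noisy, level, eps = training_example(y0)

# A model f(y_noisy, level, mel) would be fit by minimizing a distance
# between its output and eps; a zero placeholder stands in for the network.
pred = np.zeros_like(eps)
loss = np.mean(np.abs(pred - eps))                 # L1 objective on the noise estimate
```

Because the level is sampled continuously at training time, inference can later discretize it into any number of refinement steps (e.g., 6 or 50) without retraining, which is the flexibility highlighted in the contributions.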

