VAEBM: A SYMBIOSIS BETWEEN VARIATIONAL AUTOENCODERS AND ENERGY-BASED MODELS

Abstract

Energy-based models (EBMs) have recently been successful in representing complex distributions of small images. However, sampling from them requires expensive Markov chain Monte Carlo (MCMC) iterations that mix slowly in high dimensional pixel space. Unlike EBMs, variational autoencoders (VAEs) generate samples quickly and are equipped with a latent space that enables fast traversal of the data manifold. However, VAEs tend to assign high probability density to regions in data space outside the actual data distribution and often fail at generating sharp images. In this paper, we propose VAEBM, a symbiotic composition of a VAE and an EBM that offers the best of both worlds. VAEBM captures the overall mode structure of the data distribution using a state-of-the-art VAE and it relies on its EBM component to explicitly exclude non-data-like regions from the model and refine the image samples. Moreover, the VAE component in VAEBM allows us to speed up MCMC updates by reparameterizing them in the VAE's latent space. Our experimental results show that VAEBM outperforms state-of-the-art VAEs and EBMs in generative quality on several benchmark image datasets by a large margin. It can generate high-quality images as large as 256×256 pixels with short MCMC chains. We also demonstrate that VAEBM provides complete mode coverage and performs well in out-of-distribution detection.
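As a rough illustration of the latent-space MCMC idea described above (not the paper's actual model or architecture), the sketch below runs Langevin dynamics in the latent space of a toy one-dimensional "decoder", targeting a density shaped by a hand-picked energy function together with a standard-normal prior on the latent. All functions and constants here are hypothetical stand-ins chosen so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def decoder(z):
    # Hypothetical deterministic toy "decoder": x = g(z)
    return 2.0 * z + 0.5

def energy(x):
    # Hypothetical toy energy; low energy (high density) near x = 1
    return 0.5 * (x - 1.0) ** 2

def grad_energy_wrt_z(z, eps=1e-4):
    # Finite-difference gradient of E(g(z)) w.r.t. z (autograd in practice)
    return (energy(decoder(z + eps)) - energy(decoder(z - eps))) / (2 * eps)

def latent_langevin(z0, n_steps=300, step=0.01):
    # Langevin update in latent space:
    #   z <- z - (step / 2) * d/dz [E(g(z)) + z^2 / 2] + sqrt(step) * noise
    # where the z^2/2 term is the negative log-density of a N(0, 1) prior on z.
    z = z0
    for _ in range(n_steps):
        grad = grad_energy_wrt_z(z) + z
        z = z - 0.5 * step * grad + np.sqrt(step) * rng.standard_normal()
    return decoder(z)

# Decode the final latent of each chain into a data-space sample
samples = np.array([latent_langevin(rng.standard_normal()) for _ in range(300)])
print(samples.mean())  # should concentrate near x = 0.9, the mode of this toy target
```

Because the chain moves in the low-dimensional latent space and only decodes at the end, each step is cheap and mixing is faster than running the same dynamics directly in pixel space, which is the intuition behind the reparameterized MCMC updates.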

1. INTRODUCTION

Deep generative learning is a central problem in machine learning. It has found diverse applications, ranging from image (Brock et al., 2018; Karras et al., 2019; Razavi et al., 2019), music (Dhariwal et al., 2020) and speech (Ping et al., 2020; Oord et al., 2016a) generation, distribution alignment across domains (Zhu et al., 2017; Liu et al., 2017; Tzeng et al., 2017) and semi-supervised learning (Kingma et al., 2014; Izmailov et al., 2020), to 3D point cloud generation (Yang et al., 2019), light-transport simulation (Müller et al., 2019), molecular modeling (Sanchez-Lengeling & Aspuru-Guzik, 2018; Noé et al., 2019) and equivariant sampling in theoretical physics (Kanwar et al., 2020). Among competing frameworks, likelihood-based models include variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014), normalizing flows (Rezende & Mohamed, 2015; Dinh et al., 2016), autoregressive models (Oord et al., 2016b), and energy-based models (EBMs) (LeCun et al., 2006; Salakhutdinov et al., 2007). These models are trained by maximizing the data likelihood under the model, and unlike generative adversarial networks (GANs) (Goodfellow et al., 2014), their training is usually stable and they cover the modes of the data distribution more faithfully by construction.

Among likelihood-based models, EBMs model the unnormalized data density by assigning low energy to high-probability regions in the data space (Xie et al., 2016; Du & Mordatch, 2019). EBMs are appealing because they impose almost no restrictions on network architectures (unlike normalizing flows) and are therefore potentially very expressive. They also exhibit better robustness and out-of-distribution generalization (Du & Mordatch, 2019) because, during training, areas with high probability under the model but low probability under the data distribution are penalized explicitly. However, training and sampling EBMs usually requires MCMC, which can suffer from slow mode mixing and is computationally expensive when neural networks represent the energy function.

* Work done during an internship at NVIDIA
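The explicit penalization of regions with high model probability but low data probability can be made concrete through the standard maximum-likelihood gradient for an EBM (a textbook identity, not a result specific to this paper). Writing the model density as

$$p_\theta(\mathbf{x}) = \frac{e^{-E_\theta(\mathbf{x})}}{Z_\theta}, \qquad Z_\theta = \int e^{-E_\theta(\mathbf{x})}\, d\mathbf{x},$$

the gradient of the expected log-likelihood decomposes into two phases:

$$\nabla_\theta\, \mathbb{E}_{p_{\mathrm{data}}(\mathbf{x})}\!\left[\log p_\theta(\mathbf{x})\right] = \mathbb{E}_{p_{\mathrm{data}}(\mathbf{x})}\!\left[-\nabla_\theta E_\theta(\mathbf{x})\right] + \mathbb{E}_{p_\theta(\mathbf{x})}\!\left[\nabla_\theta E_\theta(\mathbf{x})\right].$$

The first term lowers the energy of training data, while the second raises the energy of samples drawn from the model itself. Estimating that second expectation requires samples from $p_\theta$, which is exactly where the expensive MCMC enters.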

