STATISTICAL EFFICIENCY OF SCORE MATCHING: THE VIEW FROM ISOPERIMETRY

Abstract

Deep generative models parametrized up to a normalizing constant (e.g., energy-based models) are difficult to train by maximizing the likelihood of the data because the likelihood and/or gradients thereof cannot be explicitly or efficiently written down. Score matching is a training method whereby, instead of fitting the likelihood log p(x) for the training data, we instead fit the score function ∇_x log p(x), obviating the need to evaluate the partition function. Though this estimator is known to be consistent, it is unclear whether (and when) its statistical efficiency is comparable to that of maximum likelihood, which is known to be (asymptotically) optimal. We initiate this line of inquiry in this paper, and show a tight connection between the statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated, i.e., the Poincaré, log-Sobolev, and isoperimetric constants: quantities which govern the mixing time of Markov processes like Langevin dynamics. Roughly, we show that the score matching estimator is statistically comparable to maximum likelihood when the distribution has a small isoperimetric constant. Conversely, if the distribution has a large isoperimetric constant, even for simple families of distributions like exponential families with rich enough sufficient statistics, score matching will be substantially less efficient than maximum likelihood. We suitably formalize these results both in the finite-sample regime and in the asymptotic regime. Finally, we identify a direct parallel in the discrete setting, where we connect the statistical properties of pseudolikelihood estimation with approximate tensorization of entropy and the Glauber dynamics.

1. INTRODUCTION

Energy-based models (EBMs) are deep generative models parametrized up to a constant of proportionality, namely p(x) ∝ exp(f(x)). The primary training challenge is the fact that evaluating the likelihood (and gradients thereof) requires evaluating the partition function of the model, which is generally computationally intractable, even when using relatively sophisticated MCMC techniques. The seminal paper of Song and Ermon (2019) circumvented this difficulty by instead fitting the score function of the model, that is, ∇_x log p(x). Though it is not obvious how to evaluate this loss from training samples alone, Hyvärinen (2005) showed that this can be done via integration by parts, and that the resulting estimator is consistent (that is, it converges to the correct value in the limit of infinite samples).

The maximum likelihood estimator is the de facto choice for model fitting because of its well-known property of being statistically optimal in the limit where the number of samples goes to infinity (Van der Vaart, 2000). It is unclear how much worse score matching can be; thus, it is unclear how much statistical efficiency we sacrifice for the algorithmic convenience of avoiding partition functions. In the seminal paper (Song and Ermon, 2019), it was conjectured that multimodality, as well as low-dimensional manifold structure, may cause difficulties for score matching. Though the intuition for this is natural: having poor estimates for the score in "low probability" regions of the distribution
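To make the objective concrete, the following is a sketch of the standard score matching formulation and the integration-by-parts identity of Hyvärinen (2005), stated under mild smoothness and decay assumptions on the distributions involved (the precise regularity conditions are as in that paper). For a data distribution p and a model p_θ, the score matching loss is

J(θ) = (1/2) E_{x∼p} [ ‖∇_x log p_θ(x) − ∇_x log p(x)‖² ].

Expanding the square and integrating the cross term by parts shows that, up to an additive constant C independent of θ,

J(θ) = E_{x∼p} [ tr(∇²_x log p_θ(x)) + (1/2) ‖∇_x log p_θ(x)‖² ] + C.

The right-hand side depends only on the model score, so it can be estimated by an empirical average over training samples. In particular, for an EBM p_θ(x) ∝ exp(f_θ(x)), the score is ∇_x log p_θ(x) = ∇_x f_θ(x), so the partition function never needs to be evaluated.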

