UNDERSTANDING OVERPARAMETERIZATION IN GENERATIVE ADVERSARIAL NETWORKS

Abstract

A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involves training overparameterized models in which the number of model parameters exceeds a certain threshold. Indeed, most successful GANs used in practice are trained using overparameterized generator and discriminator networks, both in terms of depth and width. A large body of work in supervised learning has shown the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting and GANs in particular involve non-convex concave min-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs, both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a 1-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators that obey certain assumptions (including deeper generators and random feature discriminators). Our theory utilizes and builds upon a novel connection with the convergence analysis of linear time-varying dynamical systems, which may have broader implications for understanding the convergence behavior of GDA for non-convex concave problems involving overparameterized models. We also empirically study the role of model overparameterization in GANs using several large-scale experiments on the CIFAR-10 and Celeb-A datasets.
Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence behavior of GDA across the board.

1. INTRODUCTION

In recent years, we have witnessed tremendous progress in deep generative modeling, with some state-of-the-art models capable of generating photo-realistic images of objects and scenes (Brock et al., 2019; Karras et al., 2019; Clark et al., 2019). Three prominent classes of deep generative models are GANs (Goodfellow et al., 2014), VAEs (Kingma & Welling, 2014), and normalizing flows (Dinh et al., 2017). Of these, GANs remain a popular choice for data synthesis, especially in the image domain. GANs are based on a two-player min-max game between a generator network that generates samples from a distribution, and a critic (discriminator) network that discriminates the real distribution from the generated one. The networks are optimized using Gradient Descent/Ascent (GDA) to reach a saddle point of the min-max optimization problem. One of the key factors that has contributed to the successful training of GANs is model overparameterization, defined in terms of the model's parameter count. By increasing the complexity of the discriminator and generator networks, both in depth and width, recent papers show that GANs can achieve photo-realistic image and video synthesis (Brock et al., 2019; Clark et al., 2019; Karras et al., 2019). While these works empirically demonstrate some benefits of overparameterization, there is a lack of rigorous study explaining this phenomenon. In this work, we attempt to provide a comprehensive understanding of the role of overparameterization in GANs, both theoretically and empirically. We note that while overparameterization is a key factor in training successful GANs, other factors such as generator and discriminator architectures, regularization functions, and model hyperparameters have to be taken into account as well to improve the performance of GANs.

Recently, there has been a large body of work in supervised learning (e.g.
regression or classification problems) studying the importance of model overparameterization in gradient descent (GD)'s convergence to globally optimal solutions (Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019; Oymak & Soltanolkotabi, 2019; Zou & Gu, 2019; Oymak et al., 2019). A key observation in these works is that, under some conditions, overparameterized models experience lazy training (Chizat et al., 2019), where the optimal model parameters computed by GD remain close to a randomly initialized model. Thus, using a linear approximation of the model in the parameter space, one can show the global convergence of GD in such minimization problems. In contrast, training GANs often involves solving a non-convex concave min-max optimization problem that fundamentally differs from a single minimization problem of classification/regression. The key question is whether overparameterized GANs also experience lazy training, in the sense that overparameterized generator and discriminator networks remain sufficiently close to their initializations. This may then lead to a general theory of global convergence of GDA for such overparameterized non-convex concave min-max problems.

In this paper, we first theoretically study the role of overparameterization for a GAN model with a 1-hidden-layer generator and a linear discriminator. We study two optimization procedures to solve this problem: (i) a conventional GAN training procedure based on GDA, in which the generator and discriminator networks perform simultaneous steps of gradient descent to optimize their respective models; (ii) GD to optimize the generator's parameters against the optimal discriminator. The latter case corresponds to taking a sufficiently large number of gradient ascent steps to update the discriminator's parameters for each GD step of the generator. In both cases, our results show that in an overparameterized regime, the GAN optimization converges to a global solution.
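The simultaneous GDA procedure in (i) can be sketched on a toy saddle-point problem. The bilinear-plus-regularization objective, step size, and initialization below are illustrative assumptions, not the paper's actual GAN loss; the point is only to show both players stepping at once, descent for the minimizer and ascent for the maximizer.

```python
import numpy as np

# A minimal convex-concave saddle problem (illustrative only):
#   f(theta, w) = theta @ w + 0.5 * lam * ||theta||^2 - 0.5 * lam * ||w||^2,
# which has its unique saddle point at (theta, w) = (0, 0).
lam = 1.0  # regularization making f strongly convex in theta, strongly concave in w
eta = 0.1  # shared step size (assumption)

theta = np.array([1.0, -1.0])  # "generator" parameters (the minimizing player)
w = np.array([0.5, 0.5])       # "discriminator" parameters (the maximizing player)

for _ in range(300):
    g_theta = w + lam * theta  # gradient of f with respect to theta
    g_w = theta - lam * w      # gradient of f with respect to w
    # Simultaneous GDA: both players update at once from the same iterate,
    # gradient *descent* for theta and gradient *ascent* for w.
    theta, w = theta - eta * g_theta, w + eta * g_w

# Both parameter vectors contract toward the saddle point at the origin.
```

Note that the regularization terms matter here: on a purely bilinear objective, simultaneous GDA with fixed step size spirals away from the saddle point, which is one reason convergence of GDA is delicate even in simple convex-concave settings.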
To the best of our knowledge, this is the first result showing the global convergence of GDA in such settings. While our results focus on one-hidden-layer generators and linear discriminators, our theory is based on analyzing a general class of min-max optimization problems, which can be used to study a much broader class of generators and discriminators, potentially including deep generators and deep random feature-based discriminators. A key component of our analysis is a novel connection to the exponential stability of non-symmetric time-varying dynamical systems in control theory, which may have broader implications for the theoretical analysis of GAN training. Ideas from control theory have



Figure 1: Overparameterization in GANs. We train DCGAN models by varying the size of the hidden dimension k (the larger k is, the more overparameterized the model; see Fig. 8 for details). Overparameterized GANs enjoy improved training and test FID scores (left panel), generate high-quality samples (middle panel), and exhibit fast and stable convergence (right panel).
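To make the width-based notion of overparameterization concrete, one can count the parameters of a one-hidden-layer generator G(z) = V relu(W z) as its hidden width k grows. The input and output dimensions below are hypothetical placeholders, not the dimensions used in the experiments; the sketch only shows that the parameter count scales linearly with k.

```python
def generator_param_count(k, d_in=128, d_out=3072):
    """Parameter count of G(z) = V @ relu(W @ z), ignoring biases.

    W has shape (k, d_in) and V has shape (d_out, k); the default
    dimensions are hypothetical, chosen only for illustration.
    """
    return k * d_in + d_out * k  # entries of W plus entries of V

# Sweeping the hidden width, as in a Fig. 1-style experiment:
for k in [64, 256, 1024, 4096]:
    print(k, generator_param_count(k))
```

Once k is large enough that this count exceeds the number of training samples times the data dimension, the model is overparameterized in the sense used throughout the paper.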

