UNDERSTANDING OVERPARAMETERIZATION IN GENERATIVE ADVERSARIAL NETWORKS

Abstract

A broad class of unsupervised deep learning methods such as Generative Adversarial Networks (GANs) involve training overparameterized models in which the number of model parameters exceeds a certain threshold. Indeed, most successful GANs used in practice are trained using overparameterized generator and discriminator networks, both in terms of depth and width. A large body of work in supervised learning has shown the importance of model overparameterization for the convergence of gradient descent (GD) to globally optimal solutions. In contrast, the unsupervised setting and GANs in particular involve non-convex concave mini-max optimization problems that are often trained using Gradient Descent/Ascent (GDA). The role and benefits of model overparameterization in the convergence of GDA to a global saddle point in non-convex concave problems are far less understood. In this work, we present a comprehensive analysis of the importance of model overparameterization in GANs, both theoretically and empirically. We theoretically show that in an overparameterized GAN model with a one-hidden-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-convex concave min-max problem. To the best of our knowledge, this is the first result for global convergence of GDA in such settings. Our theory is based on a more general result that holds for a broader class of nonlinear generators and discriminators that obey certain assumptions (including deeper generators and random feature discriminators). Our theory utilizes and builds upon a novel connection with the convergence analysis of linear time-varying dynamical systems, which may have broader implications for understanding the convergence behavior of GDA for non-convex concave problems involving overparameterized models. We also empirically study the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets.
Our experiments show that overparameterization improves the quality of generated samples across various model architectures and datasets. Remarkably, we observe that overparameterization leads to faster and more stable convergence behavior of GDA across the board.

1. INTRODUCTION

In recent years, we have witnessed tremendous progress in deep generative modeling, with some state-of-the-art models capable of generating photo-realistic images of objects and scenes (Brock et al., 2019; Karras et al., 2019; Clark et al., 2019). Three prominent classes of deep generative models are GANs (Goodfellow et al., 2014), VAEs (Kingma & Welling, 2014), and normalizing flows (Dinh et al., 2017). Of these, GANs remain a popular choice for data synthesis, especially in the image domain. GANs are based on a two-player min-max game between a generator network that generates samples from a distribution and a critic (discriminator) network that discriminates the real distribution from the generated one. The networks are optimized using Gradient Descent/Ascent (GDA) to reach a saddle point of the min-max optimization problem. One of the key factors that has contributed to the successful training of GANs is model overparameterization, defined based on the model parameter count. By increasing the complexity of discriminator and generator networks, both in depth and width, recent papers show that GANs can achieve photo-realistic image and video synthesis (Brock et al., 2019; Clark et al., 2019; Karras et al., 2019). While these works empirically demonstrate some benefits of overparameterization, there is a lack of rigorous studies explaining this phenomenon. In this work, we attempt to provide a comprehensive understanding of the role of overparameterization in GANs, both theoretically and empirically. We note that while overparameterization is a key factor in training successful GANs, other factors such as generator and discriminator architectures, regularization functions, and model hyperparameters have to be taken into account as well to improve the performance of GANs. Recently, there has been a large body of work in supervised learning (e.g.
regression or classification problems) studying the importance of model overparameterization in the convergence of gradient descent (GD) to globally optimal solutions (Soltanolkotabi et al., 2018; Allen-Zhu et al., 2019; Du et al., 2019; Oymak & Soltanolkotabi, 2019; Zou & Gu, 2019; Oymak et al., 2019). A key observation in these works is that, under some conditions, overparameterized models experience lazy training (Chizat et al., 2019), where the optimal model parameters computed by GD remain close to a randomly initialized model. Thus, using a linear approximation of the model in the parameter space, one can show the global convergence of GD in such minimization problems. In contrast, training GANs often involves solving a non-convex concave min-max optimization problem that fundamentally differs from a single minimization problem of classification/regression. The key question is whether overparameterized GANs also experience lazy training in the sense that overparameterized generator and discriminator networks remain sufficiently close to their initializations. This may then lead to a general theory of global convergence of GDA for such overparameterized non-convex concave min-max problems. In this paper we first theoretically study the role of overparameterization for a GAN model with a one-hidden-layer generator and a linear discriminator. We study two optimization procedures to solve this problem: (i) a conventional GAN training procedure based on GDA, in which the generator and discriminator networks perform simultaneous steps of gradient descent to optimize their respective models, and (ii) GD on the generator's parameters with the discriminator solved to optimality. The latter case corresponds to taking a sufficiently large number of gradient ascent steps on the discriminator's parameters for each GD step of the generator. In both cases, our results show that in an overparameterized regime, the GAN optimization converges to a global solution.
To the best of our knowledge, this is the first result showing the global convergence of GDA in such settings. While in our results we focus on one-hidden-layer generators and linear discriminators, our theory is based on analyzing a general class of min-max optimization problems, which can be used to study a much broader class of generators and discriminators, potentially including deep generators and deep random feature-based discriminators. A key component of our analysis is a novel connection to the exponential stability of non-symmetric time-varying dynamical systems in control theory, which may have broader implications for the theoretical analysis of GAN training. Ideas from control theory have also been used for understanding and improving the training dynamics of GANs in (Xu et al., 2019; An et al., 2018). Having analyzed overparameterized GANs for relatively simple models, we next provide a comprehensive empirical study of this problem for practical GANs such as DCGAN (Radford et al., 2016) and ResNet GAN (Gulrajani et al., 2017) trained on CIFAR-10 and Celeb-A datasets. For example, the benefit of overparameterization in training DCGANs on CIFAR-10 is illustrated in Figure 1. We have three key observations: (i) as the model becomes more overparameterized (e.g. using wider networks), the training FID scores, which measure the training error, decrease. This phenomenon has been observed in other studies as well (Brock et al., 2019). (ii) Overparameterization does not hurt the test FID scores (i.e. the generalization gap remains small). This improved test-time performance can also be seen qualitatively in the center panel of Figure 1, where overparameterized models produce samples of improved quality. (iii) Remarkably, overparameterized GANs, with many more parameters to optimize over, have significantly improved convergence behavior of GDA, both in terms of rate and stability, compared to small GAN models (see the right panel of Figure 1).

In summary, in this paper

• We provide the first theoretical guarantee of simultaneous GDA's global convergence for an overparameterized GAN with a one-hidden-layer neural network generator and a linear discriminator (Theorem 2.1).

• By establishing connections with linear time-varying dynamical systems, we provide a theoretical framework to analyze simultaneous GDA's global convergence for a general overparameterized GAN (including deeper generators and random feature discriminators), under some general conditions (Theorems 2.3 and A.4).

• We provide a comprehensive empirical study of the role of model overparameterization in GANs using several large-scale experiments on CIFAR-10 and Celeb-A datasets. We observe that overparameterization improves GANs' training error, generalization error, and sample quality, as well as the convergence rate and stability of GDA.

2.1. PROBLEM FORMULATION

Given $n$ data points $x_1, x_2, \ldots, x_n \in \mathbb{R}^m$, the goal of GAN training is to find a generator that can mimic sampling from the same distribution as the training data. More specifically, the goal is to find a generator mapping $G_\theta : \mathbb{R}^d \to \mathbb{R}^m$, parameterized by $\theta \in \mathbb{R}^p$, so that $G_\theta(z_1), G_\theta(z_2), \ldots, G_\theta(z_n)$, with $z_1, z_2, \ldots, z_n$ generated i.i.d. according to $\mathcal{N}(0, I_d)$, has an empirical distribution similar to that of $x_1, x_2, \ldots, x_n$. To measure the discrepancy between the data points and the GAN outputs, one typically uses a discriminator mapping $D_{\widetilde{\theta}} : \mathbb{R}^m \to \mathbb{R}$ parameterized by $\widetilde{\theta} \in \mathbb{R}^{\widetilde{p}}$. The overall training approach takes the form of the following min-max optimization problem, which minimizes the worst-case discrepancy detected by the discriminator:

$$\min_{\theta} \max_{\widetilde{\theta}} \; \frac{1}{n}\sum_{i=1}^{n} D_{\widetilde{\theta}}(x_i) - \frac{1}{n}\sum_{i=1}^{n} D_{\widetilde{\theta}}(G_{\theta}(z_i)) + \mathcal{R}(\widetilde{\theta}). \tag{1}$$

Here, $\mathcal{R}(\widetilde{\theta})$ is a regularizer that typically ensures the discriminator is Lipschitz. This formulation mimics the popular Wasserstein GAN (Arjovsky et al., 2017) (or, more generally, IPM GAN) formulations. This optimization problem is typically solved by running Gradient Descent Ascent (GDA) on the minimization/maximization variables. The generator and discriminator mappings $G$ and $D$ used in practice are often deep neural networks. Thus, the min-max optimization problem above is highly nonlinear and non-convex concave. Saddle point optimization is a classical and fundamental problem in game theory (Von Neumann & Morgenstern, 1953) and control (Gutman, 1979). However, most of the classical results apply to the convex-concave case (Arrow et al., 1958), while the saddle point optimization of GANs is often non-convex concave. If GDA converges to the global (local) saddle points, we say it is globally (locally) stable. For a general min-max optimization problem, however, GDA can be trapped in a loop or even diverge. Except in some special cases (e.g.
(Feizi et al., 2018) for a quadratic GAN formulation or (Lei et al., 2019) for the under-parametrized setup when the generator is a one-layer network), GDA is not globally stable for GANs in general (Nagarajan & Kolter, 2017; Mescheder et al., 2018; Adolphs et al., 2019; Mescheder et al., 2017; Daskalakis et al., 2020) . None of these works, however, study the role of model overparameterization in the global/local convergence (stability) of GDA. In particular, it has been empirically observed (as we also demonstrate in this paper) that when the generator/discriminator contain a large number of parameters (i.e. are sufficiently overparameterized) GDA does indeed find (near) globally optimal solutions. In this section we wish to demystify this phenomenon from a theoretical perspective.
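As a textbook illustration of this failure mode, consider the bilinear game $\min_x \max_y xy$ (a standard example, not taken from this paper): its unique saddle point is the origin, yet simultaneous GDA spirals outward, multiplying the distance to the saddle by $\sqrt{1+\eta^2}$ at every step. A minimal sketch (the step size 0.1 is an arbitrary choice):

```python
# Simultaneous GDA on the bilinear game min_x max_y x*y.
# The unique saddle point is (0, 0), but each simultaneous step multiplies
# the distance to it by sqrt(1 + eta^2) > 1, so the iterates spiral away.
eta = 0.1
x, y = 1.0, 1.0
r0 = (x * x + y * y) ** 0.5
for _ in range(100):
    gx, gy = y, x                      # gradient of x*y w.r.t. x and y
    x, y = x - eta * gx, y + eta * gy  # descent on x, ascent on y
r100 = (x * x + y * y) ** 0.5
print(r0, r100)  # the distance to the saddle point grows
```
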

2.2. DEFINITION OF MODEL OVERPARAMETERIZATION

In this paper, we use overparameterization in the context of model parameter count. Informally speaking, overparameterized models have a large number of parameters; that is, we assume that the number of model parameters is sufficiently large. In the specific problem setups of Section 2, we precisely compute the thresholds that the number of model parameters should exceed in order to observe nice convergence properties of GDA. Note that this definition of overparameterization based on model parameter count is related to, but distinct from, the complexity of the hypothesis class. For instance, in our empirical studies, when we say we overparameterize a neural network, we fix the number of layers in the neural network and increase the hidden dimensions. Our definition does not include the case where the number of layers also increases, which forms a different hypothesis class.
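For the one-hidden-layer generator analyzed in Section 2.3, $G(z) = V \cdot \mathrm{ReLU}(Wz)$, this notion is easy to make concrete: with the depth fixed, the parameter count grows linearly with the hidden dimension $k$. A small sketch (the dimensions below are illustrative choices, not from the paper):

```python
def generator_param_count(d: int, k: int, m: int) -> int:
    """Parameters of G(z) = V relu(W z): W is k x d, V is m x k."""
    return k * d + m * k

d, m = 100, 3072  # illustrative latent / output dimensions
for k in (64, 128, 256):
    print(k, generator_param_count(d, k, m))
# Doubling k doubles the parameter count, while the hypothesis class
# (one-hidden-layer ReLU generators) is unchanged.
```
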

2.3. RESULTS FOR ONE-HIDDEN LAYER GENERATORS AND RANDOM DISCRIMINATORS

In this section, we discuss our main results on the convergence of gradient-based algorithms when training GANs in the overparameterized regime. We focus on the case where the generator takes the form of a single hidden-layer ReLU network with $d$ inputs, $k$ hidden units, and $m$ outputs. Specifically, $G(z) = V \cdot \mathrm{ReLU}(Wz)$, with $W \in \mathbb{R}^{k \times d}$ and $V \in \mathbb{R}^{m \times k}$ denoting the input-to-hidden and hidden-to-output weights. We also consider a linear discriminator of the form $D(x) = d^T x$ with an $\ell_2$ regularizer on the weights, i.e. $\mathcal{R}(d) = -\|d\|_2^2/2$. The overall min-max optimization problem (equation 1) takes the form

$$\min_{W \in \mathbb{R}^{k \times d}} \; \max_{d \in \mathbb{R}^m} \; \mathcal{L}(W, d) := \Big\langle d, \; \frac{1}{n}\sum_{i=1}^{n} \big(x_i - V\,\mathrm{ReLU}(W z_i)\big) \Big\rangle - \frac{\|d\|_2^2}{2}. \tag{2}$$

Note that we initialize $V$ at random and keep it fixed throughout training. The common approach to solving the above optimization problem is to run a Gradient Descent Ascent (GDA) algorithm. At iteration $t$, GDA takes the form

$$d_{t+1} = d_t + \mu \nabla_d \mathcal{L}(W_t, d_t), \qquad W_{t+1} = W_t - \eta \nabla_W \mathcal{L}(W_t, d_t). \tag{3}$$

Next, we establish the global convergence of GDA for an overparameterized model. Note that a global saddle point $(W^*, d^*)$ is defined by $\mathcal{L}(W^*, d) \le \mathcal{L}(W^*, d^*) \le \mathcal{L}(W, d^*)$ for all feasible $W$ and $d$. If these inequalities hold in a local neighborhood, $(W^*, d^*)$ is called a local saddle point.

Theorem 2.1 Let $x_1, x_2, \ldots, x_n \in \mathbb{R}^m$ be $n$ training data points with mean $\bar{x} := \frac{1}{n}\sum_{i=1}^{n} x_i$. Consider the GAN model with a linear discriminator of the form $D(x) = d^T x$ parameterized by $d \in \mathbb{R}^m$, and a one-hidden-layer neural network generator of the form $G(z) = V\phi(Wz)$ parameterized by $W \in \mathbb{R}^{k \times d}$, with $V \in \mathbb{R}^{m \times k}$ a fixed matrix generated at random with i.i.d. $\mathcal{N}(0, \sigma_v^2)$ entries. Also assume the inputs to the generator $\{z_i\}_{i=1}^{n}$ are generated i.i.d. according to $\mathcal{N}(0, \sigma_z^2 I_d)$. Furthermore, assume the generator weights at initialization $W_0 \in \mathbb{R}^{k \times d}$ are generated i.i.d. according to $\mathcal{N}(0, \sigma_w^2)$.
Furthermore, assume the standard deviations above obey $\sigma_v \sigma_w \sigma_z \ge \|\bar{x}\|_2 / \big(m d^{5/2} \log^{3/2} d\big)$. Then, as long as $k \ge C \cdot m d^4 \log^3(d)$ with $C$ a fixed constant, running the GDA updates of equation 3 starting from the random $W_0$ above and $d_0 = 0$, with step sizes obeying $0 < \mu \le 1$ and $\eta = \bar{\eta}\mu / \big(324\, k\, (d + \frac{n-1}{\pi n})\, \sigma_v^2 \sigma_z^2\big)$ with $\bar{\eta} \le 1$, satisfies

$$\Big\| \frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_\tau z_i) - \bar{x} \Big\|_2 \le 5\big(1 - 10^{-5} \cdot \bar{\eta}\mu\big)^{\tau} \, \Big\| \frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_0 z_i) - \bar{x} \Big\|_2. \tag{4}$$

This holds with probability at least $1 - (n+5)e^{-m/1500} - 5k\,e^{-c_1 n} - (2k+2)e^{-d/216} - n e^{-c_2 m d^3 \log^2(d)}$, where $c_1, c_2$ are fixed numerical constants.

To better understand the implications of the above theorem, note that the objective of equation 2 can be simplified by solving the inner maximization in closed form, so that the min-max problem in equation 2 is equivalent to the following single minimization problem:

$$\min_{W} \; \mathcal{L}(W) := \frac{1}{2} \Big\| \frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W z_i) - \bar{x} \Big\|_2^2, \tag{5}$$

which has a global optimum of zero. As a result, equation 4 in Theorem 2.1 guarantees that running simultaneous GDA updates achieves the global optimum. This holds as long as the generator network is sufficiently overparameterized, in the sense that the number of hidden nodes is polynomially large in its output dimension $m$ and input dimension $d$. Interestingly, the rate of convergence guaranteed by this result is geometric, guaranteeing fast GDA convergence to the global optimum. To the extent of our knowledge, this is the first result that establishes the global convergence of simultaneous GDA for an overparameterized GAN model. While the result proved above shows the global convergence of GDA for a GAN with a one-hidden-layer generator and a linear discriminator, for a general GAN model, local saddle points may not even exist and GDA may converge to approximate local saddle points (Berard et al., 2020; Farnia & Ozdaglar, 2020).
For a general min-max problem, (Daskalakis et al., 2020) has recently shown that approximate local saddle points exist under some general conditions on the Lipschitzness of the objective function. Understanding GDA dynamics for a general GAN remains an important open problem; our result in Theorem 2.1 is a first and important step in that direction. We acknowledge that the GAN formulation of equation 2 is much simpler than GANs used in practice. Specifically, since the discriminator is linear, this GAN can be viewed as a moment-matching GAN (Li et al., 2017) pushing the first moments of the input and generative distributions towards each other. Alternatively, this GAN formulation can be viewed as one instance of the Sliced Wasserstein GAN (Deshpande et al., 2018). Although the maximization over the discriminator's parameters is concave, the minimization over the generator's parameters is still non-convex due to the use of a neural-net generator. Thus, the overall optimization problem is a non-trivial non-convex concave min-max problem. From that perspective, our result in Theorem 2.1 partially explains the role of model overparameterization in GDA's convergence for GANs. Given the closed form of equation 5, one may wonder what would happen if we run gradient descent on this minimization objective directly, that is, running gradient descent updates of the form $W_{\tau+1} = W_\tau - \eta\nabla\mathcal{L}(W_\tau)$ with $\mathcal{L}(W)$ given by equation 5. This is equivalent to GDA, but instead of running one gradient ascent iteration per maximization step we run infinitely many. Interestingly, in some successful GAN implementations (Gulrajani et al., 2017), more updates on the discriminator's parameters are often run per generator update. This is the subject of the next result.

Theorem 2.2 Consider the setup of Theorem 2.1. Then, as long as $k \ge C \cdot m d^4 \log^3(d)$ with $C$ a fixed constant, running gradient descent updates of the form $W_{\tau+1} = W_\tau - \eta\nabla\mathcal{L}(W_\tau)$ starting from the random $W_0$ above, with step size $\eta = \bar{\eta} / \big(324\, k\, (d + \frac{n-1}{\pi n})\, \sigma_v^2 \sigma_z^2\big)$ and $\bar{\eta} \le 1$, satisfies

$$\Big\| \frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_\tau z_i) - \bar{x} \Big\|_2 \le \big(1 - 4\times 10^{-6} \cdot \bar{\eta}\big)^{\tau} \, \Big\| \frac{1}{n}\sum_{i=1}^{n} V\,\mathrm{ReLU}(W_0 z_i) - \bar{x} \Big\|_2. \tag{6}$$
This holds with probability at least $1 - (n+5)e^{-m/1500} - 5k\,e^{-c_1 n} - (2k+2)e^{-d/216} - n e^{-c_2 m d^3 \log^2(d)}$, with $c_1, c_2$ fixed numerical constants. This theorem states that if we solve the max part of equation 2 in closed form and run GD on the loss function of equation 5 with enough overparameterization, the loss decreases at a geometric rate to zero. This result again holds when the model is sufficiently overparameterized. The proof of Theorem 2.2 relies on a result from (Oymak & Soltanolkotabi, 2020), which was developed in the framework of supervised learning. Also note that the amount of overparameterization required in Theorems 2.1 and 2.2 is the same.
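To make the dynamics of Theorem 2.1 concrete, the following numpy sketch (our own toy instantiation: the dimensions, variances, step sizes, and iteration count are heuristic choices, not the theorem's constants) runs the simultaneous GDA updates of equation 3 on the objective of equation 2 and tracks the residual between the mean generated sample and the data mean:

```python
import numpy as np

# Toy instance of equation 2: linear discriminator d, one-hidden-layer
# ReLU generator G(z) = V relu(W z) with V fixed at random.
rng = np.random.default_rng(0)
d_in, k, m, n = 5, 400, 5, 50                    # k >> m: overparameterized
V = rng.normal(0.0, 1.0 / np.sqrt(k), (m, k))    # fixed random output layer
W = rng.normal(0.0, 1.0, (k, d_in))              # trainable weights W_0
Z = rng.normal(0.0, 1.0, (n, d_in))              # latent codes z_1, ..., z_n
x_bar = 3.0 + rng.normal(0.0, 0.1, m)            # mean of the (toy) training data
d = np.zeros(m)                                  # discriminator init d_0 = 0
mu, eta = 1.0, 0.02                              # step sizes (heuristic)

def gen_mean(W):
    """(1/n) sum_i V relu(W z_i): mean generated sample."""
    return (np.maximum(Z @ W.T, 0.0) @ V.T).mean(axis=0)

r0 = np.linalg.norm(gen_mean(W) - x_bar)
for _ in range(3000):
    g_bar = gen_mean(W)
    grad_d = (x_bar - g_bar) - d                 # ascent direction for d
    mask = (Z @ W.T > 0.0)                       # ReLU patterns, n x k
    grad_W = -(mask * (V.T @ d)).T @ Z / n       # gradient of L w.r.t. W
    d, W = d + mu * grad_d, W - eta * grad_W     # simultaneous GDA step
rT = np.linalg.norm(gen_mean(W) - x_bar)
print(r0, rT)  # the residual shrinks substantially
```
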

2.4. CAN THE ANALYSIS BE EXTENDED TO MORE GENERAL GANS?

In the previous section, we focused on the implications of our results for a one-hidden-layer generator and a linear discriminator. However, as will become clear in the proofs, our theoretical results are based on analyzing the convergence behavior of GDA on a more general min-max problem of the form

$$\min_{\theta \in \mathbb{R}^p} \max_{d \in \mathbb{R}^m} \; h(\theta, d) := \langle d, f(\theta) - y \rangle - \frac{\|d\|_2^2}{2}, \tag{7}$$

where $f : \mathbb{R}^p \to \mathbb{R}^m$ denotes a general nonlinear mapping.

Theorem 2.3 (Informal version of Theorem A.4) Consider a general nonlinear mapping $f : \mathbb{R}^p \to \mathbb{R}^m$ with the singular values of its Jacobian mapping around initialization obeying certain assumptions (most notably $\sigma_{\min}(\mathcal{J}(\theta_0)) \ge \alpha$). Then, running GDA iterations of the form

$$d_{t+1} = d_t + \mu\nabla_d h(\theta_t, d_t), \qquad \theta_{t+1} = \theta_t - \eta\nabla_\theta h(\theta_t, d_t),$$

with sufficiently small step sizes $\eta$ and $\mu$, obeys

$$\|f(\theta_t) - y\|_2^2 \le \gamma\Big(1 - \frac{\eta\alpha^2}{2}\Big)^{t}\big(\|f(\theta_0) - y\|_2^2 + \|d_0\|_2^2\big).$$

Note that, similar to the previous sections, one can solve the maximization problem in equation 7 in closed form, so that equation 7 is equivalent to the following minimization problem

$$\min_{\theta \in \mathbb{R}^p} \; \mathcal{L}(\theta) := \frac{1}{2}\|f(\theta) - y\|_2^2,$$

with global optimum equal to zero. Theorem 2.3 ensures that GDA converges with a fast geometric rate to this global optimum. This holds as soon as the model $f(\theta)$ is sufficiently overparameterized, which is quantitatively captured via the minimum singular value assumption on the Jacobian at initialization ($\sigma_{\min}(\mathcal{J}(\theta_0)) \ge \alpha$, which can only hold when $m \le p$). This general result can thus be used to provide theoretical guarantees for a much more general class of generators and discriminators. To be more specific, consider a deep GAN model where the generator $G_\theta$ is a deep neural network with parameters $\theta$ and the discriminator is a deep random feature model of the form $D_d(x) = d^T\psi(x)$, parameterized by $d$, with $\psi : \mathbb{R}^d \to \mathbb{R}^m$ a deep neural network with random weights.
Then the min-max training optimization problem of equation 1 with regularizer $\mathcal{R}(d) = -\|d\|_2^2/2$ is a special instance of equation 7 with $f(\theta) := \frac{1}{n}\sum_{i=1}^{n}\psi(G_\theta(z_i))$ and $y := \frac{1}{n}\sum_{i=1}^{n}\psi(x_i)$. Therefore, the above result can in principle be used to rigorously analyze the global convergence of GDA for an overparameterized GAN problem with a deep generator and a deep random feature discriminator model. However, characterizing the precise amount of overparameterization required for such a result to hold requires a precise analysis of the minimum singular value of the Jacobian of $f(\theta)$ at initialization, as well as the other singular-value-related conditions stated in Theorem A.4. We defer such a precise analysis to future work.

Figure 3: MLP overparameterization on MNIST.
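As a sanity check of this general template, the following sketch instantiates equation 7 with a smooth toy map $f(\theta) = M\tanh(\theta)$ (our own choice, not from the paper): with $p \gg m$ the Jacobian at initialization has $\sigma_{\min}$ bounded away from zero, and GDA drives $\|f(\theta_t) - y\|_2$ toward zero geometrically.

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 200, 5                                  # p >> m: overparameterized
M = rng.normal(0.0, 1.0 / np.sqrt(p), (m, p))
f = lambda th: M @ np.tanh(th)
J = lambda th: M * (1.0 - np.tanh(th) ** 2)    # Jacobian of f, shape m x p

theta = rng.normal(0.0, 1.0, p)
y = f(theta) + 0.5 * rng.normal(0.0, 1.0, m)   # target near the initial output
d = np.zeros(m)
mu, eta = 0.5, 0.05                            # heuristic step sizes

r0 = np.linalg.norm(f(theta) - y)
for _ in range(2000):
    r = f(theta) - y
    d_next = (1.0 - mu) * d + mu * r           # gradient ascent on d
    theta = theta - eta * J(theta).T @ d       # gradient descent on theta
    d = d_next                                 # simultaneous update
rT = np.linalg.norm(f(theta) - y)
print(r0, rT)  # geometric decay of the residual
```
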

Numerical Validations:

Next, we numerically study the convergence of the GAN model considered in Theorems 2.1 and 2.2, where the discriminator is a linear network while the generator is a one-hidden-layer neural net. In our experiments, we generate $x_i$'s from an $m$-dimensional Gaussian distribution with mean $\mu$ and an identity covariance matrix. The mean vector $\mu$ is randomly generated. We train two variants of GAN models using (1) GDA (as considered in Thm 2.1) and (2) GD on the generator while solving the discriminator to optimality (as considered in Thm 2.2). In Fig. 2, we plot the converged loss values of GAN models trained using both techniques (1) and (2) as the hidden dimension $k$ of the generator is varied. The MSE loss between the true data mean and the data mean of generated samples is used as our evaluation metric. As this MSE loss approaches 0, the model converges to the global saddle point. We observe that overparameterized GAN models show improved convergence behavior compared to the narrower models. Additionally, the MSE loss converges to 0 for larger values of $k$, which shows that with sufficient overparameterization, GDA converges to a global saddle point.
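A minimal version of this experiment can be scripted. The sketch below (toy dimensions, step sizes, and iteration counts of our own choosing) implements variant (2): the linear discriminator is solved in closed form, leaving GD on the MSE between the true and generated means, and the converged loss is compared across hidden dimensions $k$:

```python
import numpy as np

def train_gd(k, d_in=5, m=5, n=50, iters=4000, eta=0.02, seed=0):
    """GD on L(W) = 0.5 * || (1/n) sum_i V relu(W z_i) - x_bar ||^2,
    i.e. variant (2): the discriminator is solved to optimality."""
    rng = np.random.default_rng(seed)
    V = rng.normal(0.0, 1.0 / np.sqrt(k), (m, k))  # fixed random output layer
    W = rng.normal(0.0, 1.0, (k, d_in))
    Z = rng.normal(0.0, 1.0, (n, d_in))
    x_bar = 2.0 + rng.normal(0.0, 1.0, m)          # random true data mean
    for _ in range(iters):
        H = np.maximum(Z @ W.T, 0.0)               # hidden activations, n x k
        r = (H @ V.T).mean(axis=0) - x_bar         # residual of the means
        grad_W = ((H > 0) * (V.T @ r)).T @ Z / n   # gradient of L w.r.t. W
        W -= eta * grad_W
    r = (np.maximum(Z @ W.T, 0.0) @ V.T).mean(axis=0) - x_bar
    return float(r @ r)                            # MSE between the means

mse = {k: train_gd(k) for k in (2, 16, 512)}
print(mse)  # the MSE shrinks as k grows
```
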

3. EXPERIMENTS

In this section, we demonstrate the benefits of overparameterization in large GAN models. In particular, we train GANs on two benchmark datasets: CIFAR-10 (32 × 32 resolution) and Celeb-A (64 × 64 resolution). We use two commonly used GAN architectures: DCGAN and Resnet-based GAN. For both of these architectures, we train several models, each with a different number of filters in each layer, denoted by k. For simplicity, we refer to k as the hidden dimension. Appendix Fig. 8 illustrates the architectures used in our experiments. Networks with large k are more overparameterized. We use the same value of k for both the generator and discriminator networks. This is in line with the design choice made in most recent GAN models (Radford et al., 2016; Brock et al., 2019), where the sizes of the generator and discriminator models are kept roughly the same. We train each model until convergence and evaluate the performance of converged models using FID scores. FID scores measure the Fréchet distance between feature distributions of real and generated data (Heusel et al., 2017). A small FID score indicates high-quality synthesized samples. Each experiment is conducted for 5 runs, and the mean and variance of FID scores are reported.

Overparameterization yields better generative models: In Fig. 4, we show the plot of FID scores as the hidden dimension (k) is varied for DCGAN and Resnet GAN models. We observe a clear trend where the FID scores are high (i.e. poor) for small values of k, while they improve as models become more overparameterized. Also, the FID scores saturate beyond k = 64 for DCGAN models and k = 128 for Resnet GAN models. Interestingly, these are the standard values used in the existing model architectures (Radford et al., 2016; Gulrajani et al., 2017). This trend is also consistent for MLP GANs trained on the MNIST dataset (Fig. 3). We however notice that the FID score of MLP GANs increases marginally as k increases from 1024 to 2048.
This is potentially due to an increased generalization gap in this regime, which offsets the potential benefits of overparameterization.

Overparameterization leads to improved convergence of GDA: In Fig. 5, we show the plot of FID scores over training iterations for different values of k. We observe that models with larger values of k converge faster and demonstrate more stable behavior. This agrees with our theoretical results that overparameterized models have a fast rate of convergence.

Generalization gap in GANs:

To study the generalization gap, we compute FID scores using (1) the training set of real data, which we call FID train, and (2) a held-out validation set of real data, which we call FID test. In Fig. 4, plots of FID train (in blue) and FID test (in green) are shown as the hidden dimension k is varied. We observe that FID test values are consistently higher than the FID train values. Their gap does not increase with increasing overparameterization. However, as explained in (Gulrajani et al., 2019), the FID score has the issue of assigning low values to memorized samples. To alleviate this issue, (Gulrajani et al., 2019; Arora et al., 2017) proposed Neural Net Divergence (NND) to measure generalization in GANs. In Fig. 6, we plot NND scores while varying the hidden dimensions in DCGAN and Resnet GAN trained on the CIFAR-10 dataset. We observe that increasing the value of k decreases the NND score. Interestingly, the NND score of memorized samples is higher than that of most of the GAN models. This indicates that overparameterized models do not merely memorize training samples and produce better generative models.
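For reference, the Fréchet distance underlying FID has a closed form between two Gaussian fits, $d^2 = \|\mu_1 - \mu_2\|^2 + \mathrm{Tr}\big(\Sigma_1 + \Sigma_2 - 2(\Sigma_1\Sigma_2)^{1/2}\big)$. A numpy-only sketch (in practice FID applies this formula to Inception-feature statistics; here the inputs are arbitrary Gaussian parameters):

```python
import numpy as np

def _sqrt_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(mu1, cov1, mu2, cov2):
    """Squared Frechet distance between N(mu1, cov1) and N(mu2, cov2).
    Uses Tr((cov1 cov2)^(1/2)) = sum of sqrt-eigenvalues of the
    symmetric PSD matrix cov1^(1/2) cov2 cov1^(1/2)."""
    s1 = _sqrt_psd(cov1)
    w = np.linalg.eigvalsh(s1 @ cov2 @ s1)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(cov1 + cov2)
                 - 2.0 * np.sqrt(np.clip(w, 0.0, None)).sum())

d = 4
mu, I = np.zeros(d), np.eye(d)
print(frechet_distance(mu, I, mu, I))         # identical fits -> 0
print(frechet_distance(mu, I, mu + 3.0, I))   # pure mean shift -> 9 * d = 36
print(frechet_distance(mu, I, mu, 4 * I))     # pure covariance change -> d = 4
```
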

4. CONCLUSION

In this paper, we perform a systematic study of the importance of overparameterization in training GANs. We first analyze a GAN model with a one-hidden-layer generator and a linear discriminator optimized using Gradient Descent Ascent (GDA). Under this setup, we prove that with sufficient overparameterization, GDA converges to a global saddle point. Additionally, our results demonstrate that overparameterized models have a fast rate of convergence. We then validate our theoretical findings through extensive experiments on DCGAN and Resnet models trained on CIFAR-10 and Celeb-A datasets. We observe that overparameterized models perform well both in terms of the rate of convergence and the quality of generated samples.

5. ACKNOWLEDGEMENT

M. Sajedi would like to thank Sarah Dean for introducing (Rugh, 1996) . 

A PROOFS

In this section, we prove Theorems 2.1 and 2.2. First, we introduce the notation used throughout the remainder of the paper in Section A.1. Before proving the specialized results for one-hidden-layer generators and linear discriminators (Theorems 2.1 and 2.2), we state and prove a more general result (the formal version of Theorem 2.3) on the convergence of GDA for a general class of min-max problems in Section A.3. Then we state a few preliminary calculations in Section A.4. Next, we state some key lemmas in Section A.5 and defer their proofs to Appendix B. Finally, we prove Theorems 2.1 and 2.2 in Sections A.6 and A.7, respectively.

A.1 NOTATION

We use $C, c, c_1$, etc. to denote positive absolute constants, whose value may change throughout the paper and from line to line. We use $\phi(z) = \mathrm{ReLU}(z) = \max(0, z)$ and its (generalized) derivative $\phi'(z) = \mathbb{1}_{\{z \ge 0\}}$, with $\mathbb{1}$ the indicator function. $\sigma_{\min}(X)$ and $\sigma_{\max}(X) = \|X\|$ denote the minimum and maximum singular values of a matrix $X$. For two arbitrary matrices $A$ and $B$, $A \otimes B$ denotes their Kronecker product. The spectral radius of a matrix $A \in \mathbb{C}^{n \times n}$ is defined as $\rho(A) = \max\{|\lambda_1|, \ldots, |\lambda_n|\}$, where the $\lambda_i$'s are the eigenvalues of $A$. Throughout the proofs we assume $\phi := \mathrm{ReLU}$ to avoid unnecessarily long expressions.

A.2 PROOF SKETCH OF THE MAIN RESULTS

In this section, we provide a brief overview of our proofs. We focus on the main result of this manuscript, which concerns the convergence of GDA (Theorem 2.1). To do this we study the convergence of GDA on the more general min-max problem of the form (see Theorem A.4 for a formal statement)

$$\min_{\theta \in \mathbb{R}^n} \max_{d \in \mathbb{R}^m} \; h(\theta, d) := \langle d, f(\theta) - y \rangle - \frac{\|d\|_2^2}{2}. \tag{10}$$

In this case the GDA iterates take the form

$$d_{t+1} = (1 - \mu)\, d_t + \mu\big(f(\theta_t) - y\big), \qquad \theta_{t+1} = \theta_t - \eta J^T(\theta_t)\, d_t. \tag{11}$$

Our proof of global convergence of GDA on this min-max loss consists of the following steps.

Step 1: Recasting the GDA updates as a linear time-varying system. In the first step we carry out a series of algebraic manipulations to recast the GDA updates (equation 11) in the form

$$\begin{bmatrix} r_{t+1} \\ d_{t+1} \end{bmatrix} = A_t \begin{bmatrix} r_t \\ d_t \end{bmatrix},$$

where $r_t = f(\theta_t) - y$ denotes the residual and $A_t$ denotes a properly defined transition matrix.

Step 2: Approximation by a linear time-invariant system. Next, to analyze the behavior of the time-varying dynamical system above, we approximate it by the time-invariant linear dynamical system

$$\begin{bmatrix} \tilde{r}_{t+1} \\ \tilde{d}_{t+1} \end{bmatrix} = \begin{bmatrix} I & -\eta J(\theta_0) J^T(\theta_0) \\ \mu I & (1 - \mu) I \end{bmatrix} \begin{bmatrix} \tilde{r}_t \\ \tilde{d}_t \end{bmatrix},$$

where $\theta_0$ denotes the initialization. The validity of this approximation is ensured by our assumptions on the Jacobian of the function $f$, which, among others, guarantee that it does not change too much in a sufficiently large neighborhood around the initialization and that the smallest singular value of $J(\theta_0)$ is bounded from below.

Step 3: Analysis of the time-invariant linear dynamical system. To analyze the time-invariant dynamical system above, we utilize and refine intricate arguments from the control theory literature involving the spectral radius of the fixed transition matrix above to obtain

$$\Big\| \begin{bmatrix} \tilde{r}_t \\ \tilde{d}_t \end{bmatrix} \Big\|_2 \lesssim (1 - \eta\alpha^2)^t \, \Big\| \begin{bmatrix} \tilde{r}_0 \\ \tilde{d}_0 \end{bmatrix} \Big\|_2.$$

Step 4: Completing the proof via a perturbation argument. In the last step of our proof we show that the two sequences $[r_t;\, d_t]$ and $[\tilde{r}_t;\, \tilde{d}_t]$ remain close to each other.
This is based on a novel perturbation argument. The latter, combined with Step 3, allows us to conclude

$$\Big\| \begin{bmatrix} r_t \\ d_t \end{bmatrix} \Big\|_2 \lesssim \Big(1 - \frac{\eta\alpha^2}{2}\Big)^t \, \Big\| \begin{bmatrix} r_0 \\ d_0 \end{bmatrix} \Big\|_2,$$

which establishes the global convergence of GDA on equation 10 and hence finishes the proof of Theorem A.4. In order to deduce Theorem 2.1 from Theorem A.4, we need to check that the smallest singular value of the Jacobian at initialization is bounded from below and that the Jacobian does not change too quickly in a large enough neighborhood around initialization. To prove this we leverage recent ideas from the deep learning theory literature revolving around the neural tangent kernel. This allows us to guarantee that these conditions are indeed met if the neural network is sufficiently wide and the initialization scale is chosen large enough. The second main result of this manuscript, Theorem 2.2, can be deduced more directly from recent results on overparameterized learning (see Oymak & Soltanolkotabi (2020)). Hence, we have deferred its proof to Section A.7.
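Steps 2 and 3 are easy to check numerically: build the fixed transition matrix from a random stand-in Jacobian, pick step sizes with $\mu/\eta \ge 4\beta^2$ (the condition used in Lemma A.2), and iterate. The sketch below uses illustrative dimensions of our own choosing:

```python
import numpy as np

# Numerical check of Steps 2-3: the linearized state z_t = [r_t; d_t],
# governed by the fixed block matrix A, decays geometrically to zero.
rng = np.random.default_rng(2)
m, p = 5, 200
J0 = rng.normal(0.0, 1.0 / np.sqrt(p), (m, p))   # stand-in Jacobian at init
svals = np.linalg.svd(J0, compute_uv=False)
alpha, beta = svals[-1], svals[0]                # sigma_min, sigma_max

mu = 0.5
eta = mu / (8.0 * beta**2)                       # so that mu/eta >= 4 beta^2
A = np.block([
    [np.eye(m), -eta * (J0 @ J0.T)],
    [mu * np.eye(m), (1.0 - mu) * np.eye(m)],
])

z = rng.normal(0.0, 1.0, 2 * m)                  # initial state [r_0; d_0]
z0_norm = np.linalg.norm(z)
for _ in range(1000):
    z = A @ z
print(z0_norm, np.linalg.norm(z))                # the state decays toward 0
```
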

A.3 ANALYSIS OF GDA: A CONTROL THEORY PERSPECTIVE

In this section we focus on solving a general min-max optimization problem of the form

$$\min_{\theta \in \mathbb{R}^n} \max_{d \in \mathbb{R}^m} \; h(\theta, d) := \langle d, f(\theta) - y \rangle - \frac{\|d\|_2^2}{2}, \tag{12}$$

where $f : \mathbb{R}^n \to \mathbb{R}^m$ is a general nonlinear mapping. In particular, we focus on analyzing the convergence behavior of Gradient Descent/Ascent (GDA) on the above loss, starting from initial estimates $\theta_0$ and $d_0$. In this case the GDA updates take the following form

$$d_{t+1} = (1 - \mu)\, d_t + \mu\big(f(\theta_t) - y\big), \qquad \theta_{t+1} = \theta_t - \eta J^T(\theta_t)\, d_t. \tag{13}$$

We note that solving the inner maximization problem in equation 12 would yield

$$\min_{\theta \in \mathbb{R}^n} \; \frac{1}{2}\|f(\theta) - y\|_2^2. \tag{14}$$

In this section, our goal is to show that when running the GDA updates of equation 13, the norm of the residual vector defined as $r_t := f(\theta_t) - y$ goes to zero, and hence we reach a global optimum of equation 14 (and in turn equation 12). Our proof builds on ideas from the control theory and dynamical systems literature. For that, we first rewrite equation 13 in a more convenient way. We define the average Jacobian along the path connecting two points $x, y \in \mathbb{R}^n$ as

$$\mathcal{J}(y, x) = \int_0^1 J\big(x + \alpha(y - x)\big)\, d\alpha, \tag{15}$$

where $J(\theta) \in \mathbb{R}^{m \times n}$ is the Jacobian associated with the nonlinear mapping $f$. Next, from the fundamental theorem of calculus it follows that

$$r_{t+1} = f(\theta_{t+1}) - y = f\big(\theta_t - \eta J_t^T d_t\big) - y = f(\theta_t) - \eta \mathcal{J}_{t+1,t} J_t^T d_t - y = r_t - \eta \mathcal{J}_{t+1,t} J_t^T d_t,$$

where we used the shorthands $J_t := J(\theta_t)$ and $\mathcal{J}_{t+1,t} := \mathcal{J}(\theta_{t+1}, \theta_t)$ for exposition purposes. Next, we combine $r_t$ and $d_t$ into a state vector $z_t := [r_t;\, d_t] \in \mathbb{R}^{2m}$. Using this notation, the relationship between consecutive state vectors takes the form

$$z_{t+1} = \underbrace{\begin{bmatrix} I & -\eta \mathcal{J}_{t+1,t} J_t^T \\ \mu I & (1 - \mu) I \end{bmatrix}}_{=: A_t} z_t, \qquad t \ge 0, \tag{16}$$

which resembles a time-varying linear dynamical system with transition matrix $A_t$. Now note that to show convergence of $r_t$ to zero it suffices to show convergence of $z_t$ to zero.
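The average-Jacobian identity used above, $f(\theta_{t+1}) - f(\theta_t) = \mathcal{J}(\theta_{t+1}, \theta_t)(\theta_{t+1} - \theta_t)$, follows from the fundamental theorem of calculus and can be sanity-checked numerically for a smooth map. The sketch below uses the toy choice $f(\theta) = M\tanh(\theta)$ (ours, not from the paper) and a midpoint-rule approximation of the integral:

```python
import numpy as np

rng = np.random.default_rng(3)
m, p = 4, 10
M = rng.normal(size=(m, p))
f = lambda th: M @ np.tanh(th)
J = lambda th: M * (1.0 - np.tanh(th) ** 2)   # Jacobian of f, shape m x p

x = rng.normal(size=p)
y = rng.normal(size=p)

# J_bar(y, x) = integral_0^1 J(x + a (y - x)) da, midpoint rule on a fine grid
N = 2000
mids = (np.arange(N) + 0.5) / N
J_bar = sum(J(x + a * (y - x)) for a in mids) / N

lhs = f(y) - f(x)
rhs = J_bar @ (y - x)
print(np.max(np.abs(lhs - rhs)))              # ~0 up to quadrature error
```
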
To do this we utilize the following notion of uniform exponential stability, which will be crucial in analyzing the solutions of equation 16. (See Rugh (1996) for a comprehensive overview of stability notions for discrete state equations.)

Definition 1 A linear state equation of the form $z_{t+1} = A_t z_t$ is called uniformly exponentially stable if for every $t\ge 0$ we have $\|z_t\|_2 \le \gamma\lambda^t\|z_0\|_2$, where $\gamma\ge 1$ is a finite constant and $0\le\lambda<1$.

By the above definition, to show convergence of the state vector $z_t$ to zero at a geometric rate it suffices to show that the state equation 16 is exponentially stable. To that end, we first analyze the state equation that results from linearizing the nonlinear function $f(\theta)$ around the initialization $\theta_0$. In the next step, we show that the behavior of these two problems is similar, provided we stay close to the initialization (which we also prove). Specifically, we consider the linearized problem
$$\min_{\widetilde\theta\in\mathbb{R}^n}\max_{\widetilde d\in\mathbb{R}^m} h_{\mathrm{lin}}(\widetilde\theta,\widetilde d) := \left\langle \widetilde d,\ f(\theta_0)+J_0(\widetilde\theta-\theta_0)-y\right\rangle - \frac{\|\widetilde d\|_2^2}{2}. \qquad (17)$$
We first analyze GDA on this linearized problem starting from the same initialization as the original problem, i.e. $\widetilde\theta_0=\theta_0$ and $\widetilde d_0=d_0$. The gradient descent update for $\widetilde\theta_t$ takes the form
$$\widetilde\theta_{t+1} = \widetilde\theta_t - \eta J_0^T\widetilde d_t, \qquad (18)$$
and the gradient ascent update for $\widetilde d_t$ takes the form
$$\widetilde d_{t+1} = \widetilde d_t + \mu\left(f(\theta_0)+J_0(\widetilde\theta_t-\theta_0)-y-\widetilde d_t\right) = (1-\mu)\widetilde d_t + \mu\widetilde r_t, \qquad (19)$$
where we used the linear residual defined as $\widetilde r_t := f(\theta_0)+J_0(\widetilde\theta_t-\theta_0)-y$. Moreover, the residual from one iterate to the next can be written as
$$\widetilde r_{t+1} = f(\theta_0)+J_0(\widetilde\theta_{t+1}-\theta_0)-y = f(\theta_0)+J_0(\widetilde\theta_t-\eta J_0^T\widetilde d_t-\theta_0)-y = \widetilde r_t - \eta J_0J_0^T\widetilde d_t. \qquad (20)$$
Again, we define a state vector $\widetilde z_t = \begin{bmatrix}\widetilde r_t\\\widetilde d_t\end{bmatrix}\in\mathbb{R}^{2m}$, and by putting together equations 19 and 20 we arrive at
$$\widetilde z_{t+1} = \begin{bmatrix}I & -\eta J_0J_0^T\\ \mu I & (1-\mu)I\end{bmatrix}\widetilde z_t = A\,\widetilde z_t, \qquad t\ge 0, \qquad (21)$$
which is a linear time-invariant state equation.
As a first step in our proof, we show that the linearized state equation is uniformly exponentially stable. First, recall the following well-known lemma, which characterizes uniform exponential stability in terms of the eigenvalues of $A$.

Lemma A.1 (Rugh, 1996, Theorem 22.11) A linear state equation of the form $z_{t+1}=Az_t$ with $A$ a fixed matrix is uniformly exponentially stable if and only if all eigenvalues of $A$ have magnitude strictly less than one, i.e. $\rho(A)<1$. In this case, for all $t\ge 0$ and all $z$ it holds that $\|A^tz\| \le \gamma\rho(A)^t\|z\|$, where $\gamma\ge 1$ is a finite constant that depends only on $A$.

In the next lemma, we prove that under suitable assumptions on $J_0$ and the step sizes $\mu$ and $\eta$, the state equation 21 indeed fulfills this condition.

Lemma A.2 Assume that $\alpha\le\sigma_{\min}(J_0)\le\sigma_{\max}(J_0)\le\beta$ and consider the matrix
$$A = \begin{bmatrix}I & -\eta J_0J_0^T\\ \mu I & (1-\mu)I\end{bmatrix}.$$
Suppose that $\frac{\mu}{\eta}\ge 4\beta^2$. Then it holds that $\rho(A)\le 1-\eta\alpha^2$.

Proof Suppose that $\lambda$ is an eigenvalue of $A$. Hence, there is an eigenvector $[x,y]^T\ne 0$ such that
$$\begin{bmatrix}I & -\eta J_0J_0^T\\ \mu I & (1-\mu)I\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \lambda\begin{bmatrix}x\\y\end{bmatrix}.$$
By a direct calculation we observe that this yields the equation
$$\eta J_0J_0^T x = \left(-\frac{(1-\lambda)^2}{\mu}+(1-\lambda)\right)x.$$
In particular, $x$ must be an eigenvector of $J_0J_0^T$. Denoting the corresponding eigenvalue by $s$, we obtain the identity
$$\frac{(1-\lambda)^2}{\mu} - (1-\lambda) + \eta s = 0.$$
Hence, we must have
$$\lambda \in \left\{1-\frac{\mu}{2}+\sqrt{\frac{\mu^2}{4}-\mu\eta s},\ \ 1-\frac{\mu}{2}-\sqrt{\frac{\mu^2}{4}-\mu\eta s}\right\}.$$
Note that the square root is indeed well-defined, since $\frac{\mu^2}{4}-\mu\eta s \ge \mu\eta\beta^2-\mu\eta s \ge 0$, where in the first inequality we used the assumption $\frac{\mu}{\eta}\ge 4\beta^2$ and in the second we used $s\le\beta^2$, which is a consequence of our assumption on the singular values of $J_0$. Hence, it follows from the reverse triangle inequality that
$$|\lambda| - \left(1-\frac{\mu}{2}\right) \le \left|\lambda-\left(1-\frac{\mu}{2}\right)\right| = \sqrt{\frac{\mu^2}{4}-\mu\eta s} < \frac{\mu}{2}-\eta s \le \frac{\mu}{2}-\eta\alpha^2,$$
where the second inequality is valid since $\frac{\mu}{2}-\eta s\ge 0$ is implied by $\frac{\mu}{2}\ge 2\eta\beta^2>\eta s$.
In the last inequality we used the fact that $\alpha^2\le s$, which is a consequence of our assumption on the singular values of $J_0$. By rearranging terms, we obtain $|\lambda| < 1-\eta\alpha^2$. Since $\lambda$ was an arbitrary eigenvalue of $A$, the result follows.

Since the last lemma shows that under suitable conditions $\rho(A)<1$, Lemma A.1 yields uniform exponential stability of our state equation. However, this will not be sufficient for our purposes: Lemma A.1 does not specify the constant $\gamma$, and in order to deal with the time-varying dynamical system we will need a precise estimate. The next lemma shows that for the state equation 21 we have, under suitable assumptions, $\gamma\le 5$.

Lemma A.3 Consider the linear, time-invariant system of equations
$$\widetilde z_{t+1} = \begin{bmatrix}I & -\eta J_0J_0^T\\ \mu I & (1-\mu)I\end{bmatrix}\widetilde z_t = A\,\widetilde z_t, \qquad t\ge 0.$$
Furthermore, assume that $\alpha\le\sigma_{\min}(J_0)\le\sigma_{\max}(J_0)\le\beta$ and suppose that the condition $\frac{\mu}{\eta}\ge 8\beta^2$ is satisfied. Then there is a constant $\gamma\le 5$ such that for all $t\ge 0$ it holds that
$$\|\widetilde z_t\|_2 \le \gamma\left(1-\eta\alpha^2\right)^t\|\widetilde z_0\|_2.$$

Proof Denote the SVD of $J_0$ by $W\Sigma V^T$ and note that
$$\begin{bmatrix}I & -\eta J_0J_0^T\\ \mu I & (1-\mu)I\end{bmatrix} = \begin{bmatrix}W&0\\0&W\end{bmatrix}\begin{bmatrix}I & -\eta\Sigma\Sigma^T\\ \mu I & (1-\mu)I\end{bmatrix}\begin{bmatrix}W^T&0\\0&W^T\end{bmatrix}.$$
This means we can write
$$\begin{bmatrix}I & -\eta J_0J_0^T\\ \mu I & (1-\mu)I\end{bmatrix} = \begin{bmatrix}W&0\\0&W\end{bmatrix}P\begin{bmatrix}C_1&&\\&\ddots&\\&&C_m\end{bmatrix}P^T\begin{bmatrix}W^T&0\\0&W^T\end{bmatrix},$$
where $P$ is a permutation matrix and the $2\times 2$ blocks $C_i$ are of the form
$$C_i = \begin{bmatrix}1 & -\eta\sigma_i^2\\ \mu & 1-\mu\end{bmatrix}, \qquad 1\le i\le m,$$
where the $\sigma_i$'s denote the singular values of $J_0$. Using this decomposition we can deduce
$$\|\widetilde z_t\|_2 = \|A^t\widetilde z_0\|_2 \le \|A^t\|\,\|\widetilde z_0\|_2 = \max_{1\le i\le m}\|C_i^t\|\,\|\widetilde z_0\|_2.$$
Now suppose that $V_iD_iV_i^{-1}$ is the eigenvalue decomposition of $C_i$, where the columns of $V_i$ contain the eigenvectors and $D_i$ is a diagonal matrix consisting of the eigenvalues. (Note that it follows from our assumptions on $\mu$ and $\eta$ that the matrix $C_i$ is diagonalizable.) We have
$$\|C_i^t\| = \|V_iD_i^tV_i^{-1}\| \le \|V_i\|\,\|D_i^t\|\,\|V_i^{-1}\| = \kappa_i\cdot\rho(C_i)^t,$$
where we defined $\kappa_i := \|V_i\|\,\|V_i^{-1}\|$.
From Lemma A.2 we know that the assumption $\frac{\mu}{\eta}\ge 4\beta^2$ yields $\rho(A)\le 1-\eta\alpha^2$. Therefore, defining $\gamma := \max_{1\le i\le m}\kappa_i$ and noting $\rho(A)=\max_{1\le i\le m}\rho(C_i)$, we obtain
$$\|\widetilde z_t\|_2 \le \max_{1\le i\le m}\|C_i^t\|\,\|\widetilde z_0\|_2 \le \gamma\left(1-\eta\alpha^2\right)^t\|\widetilde z_0\|_2.$$
In order to finish the proof we need to show that $\gamma\le 5$. For that, note that calculating the eigenvectors of $C_i$ directly reveals that we can choose
$$V_i = \begin{bmatrix}\frac{1+\sqrt{1-\frac{4\eta\sigma_i^2}{\mu}}}{2} & \frac{1-\sqrt{1-\frac{4\eta\sigma_i^2}{\mu}}}{2}\\ 1 & 1\end{bmatrix}.$$
Since $\|V_i\| = \sqrt{\lambda_{\max}(V_iV_i^T)}$ and $\|V_i^{-1}\| = 1/\sqrt{\lambda_{\min}(V_iV_i^T)}$, we calculate $V_iV_i^T$, which yields
$$V_iV_i^T = \begin{bmatrix}1-\frac{2\eta\sigma_i^2}{\mu} & 1\\ 1 & 2\end{bmatrix}.$$
This representation allows us to directly calculate the two eigenvalues of $V_iV_i^T$, which shows that
$$\kappa_i = \sqrt{\frac{\lambda_{\max}(V_iV_i^T)}{\lambda_{\min}(V_iV_i^T)}} = \frac{3-\frac{2\eta\sigma_i^2}{\mu}+\sqrt{\left(1+\frac{2\eta\sigma_i^2}{\mu}\right)^2+4}}{2\sqrt{1-\frac{4\eta\sigma_i^2}{\mu}}} \le \frac{6}{2\sqrt{1-\frac{4\eta\sigma_i^2}{\mu}}} < 5,$$
where the last inequality holds because $\frac{\eta\sigma_i^2}{\mu}\le\frac{\eta\beta^2}{\mu}\le\frac{1}{8}$. Since $\gamma=\max_{1\le i\le m}\kappa_i$, this finishes the proof.

Now that we have shown that the linearized iterates converge to the global optimum, we turn our attention to showing that the nonlinear iterates 16 stay close to their linear counterpart 21. For that, we make the following assumptions.

Assumption 1: The minimum singular value of the Jacobian at initialization is bounded from below, $\sigma_{\min}(J(\theta_0))\ge\alpha$, for a positive constant $\alpha$.

Assumption 2: In a neighborhood of radius $R$ around the initialization, the Jacobian mapping associated with $f$ obeys $\|J(\theta)\|\le\beta$ for all $\theta\in B_R(\theta_0)$, where $B_R(\theta_0) := \{\theta\in\mathbb{R}^p : \|\theta-\theta_0\|_2\le R\}$.

Assumption 3: In a neighborhood of radius $R$ around the initialization, the Jacobian varies by no more than $\epsilon$, in the sense that $\|J(\theta)-J(\theta_0)\|\le\epsilon$ for all $\theta\in B_R(\theta_0)$.

With these assumptions in place, we are ready to state the main theorem.
Theorem A.4 Consider the GDA updates for the min-max optimization problem 12,
$$\begin{bmatrix}d_{t+1}\\\theta_{t+1}\end{bmatrix} = \begin{bmatrix}d_t+\mu\nabla_d h(\theta_t,d_t)\\ \theta_t-\eta\nabla_\theta h(\theta_t,d_t)\end{bmatrix}, \qquad (22)$$
and consider the GDA updates of the linearized problem 21,
$$\begin{bmatrix}\widetilde d_{t+1}\\\widetilde\theta_{t+1}\end{bmatrix} = \begin{bmatrix}\widetilde d_t+\mu\nabla_d h_{\mathrm{lin}}(\widetilde\theta_t,\widetilde d_t)\\ \widetilde\theta_t-\eta\nabla_\theta h_{\mathrm{lin}}(\widetilde\theta_t,\widetilde d_t)\end{bmatrix}. \qquad (23)$$
Set $z_t := \begin{bmatrix}r_t\\d_t\end{bmatrix}$ and $\widetilde z_t := \begin{bmatrix}\widetilde r_t\\\widetilde d_t\end{bmatrix}$, where $r_t := f(\theta_t)-y$ and $\widetilde r_t := f(\theta_0)+J_0(\widetilde\theta_t-\theta_0)-y$ denote the residuals. Assume that the step sizes of the gradient descent/ascent updates satisfy $\frac{\mu}{\eta}\ge 8\beta^2$ as well as $0<\mu\le 1$. Moreover, assume that Assumptions 1-3 hold for the Jacobian $J(\theta)$ of $f(\theta)$ around the initialization $\theta_0\in\mathbb{R}^n$ with parameters $\alpha$, $\beta$, $\epsilon$, and
$$R := 2\gamma\frac{\beta^2}{\alpha^2}\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2 + 18\epsilon\frac{\beta^2\gamma^2}{\alpha^4}\|z_0\|_2, \qquad (24)$$
which satisfy $4\gamma\beta\epsilon\le\alpha^2$. Here, $1\le\gamma\le 5$ is a constant that depends only on $\mu$, $\eta$, and $J_0$, and $J_0^\dagger$ denotes the pseudo-inverse of the Jacobian at initialization $J_0$. Then, assuming the same initialization $\widetilde\theta_0=\theta_0$, $\widetilde d_0=d_0$ (and hence $\widetilde z_0=z_0$), the following holds for all iterations $t\ge 0$.

• $\|z_t\|_2$ converges to 0 at a geometric rate, i.e.
$$\|z_t\|_2 \le \gamma\left(1-\frac{\eta\alpha^2}{2}\right)^t\|z_0\|_2. \qquad (25)$$

• The trajectories of $z_t$ and $\widetilde z_t$ stay close to each other and converge to the same limit, i.e.
$$\|z_t-\widetilde z_t\|_2 \le 2\eta\gamma^2\beta\epsilon\cdot t\left(1-\frac{\eta\alpha^2}{2}\right)^{t-1}\|z_0\|_2 \le \frac{4\gamma^2\beta\epsilon}{15\,e\ln\frac{16}{15}\,\alpha^2}\|z_0\|_2. \qquad (26)$$

• The parameters of the original and linearized problems stay close to each other, i.e.
$$\|\theta_t-\widetilde\theta_t\|_2 \le 9\epsilon\frac{\beta^2\gamma^2}{\alpha^4}\|z_0\|_2. \qquad (27)$$

• The parameters of the original problem stay close to the initialization, i.e.
$$\|\theta_t-\theta_0\|_2 \le \frac{R}{2}. \qquad (28)$$

Theorem A.4 is the main ingredient in the proof of Theorem 2.1. However, as discussed in Section 2.4, we believe that this meta-theorem can be used to handle a much richer class of generators and discriminators.
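The spectral-radius bound of Lemma A.2, which drives the geometric rate in the results above, is easy to sanity-check numerically: build the block transition matrix $A$ of equation 21 for a random Jacobian $J_0$, pick step sizes with $\mu/\eta = 4\beta^2$, and compare $\rho(A)$ against $1-\eta\alpha^2$. The dimensions and seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 40
J0 = rng.standard_normal((m, n)) / np.sqrt(n)  # well-conditioned random Jacobian

svals = np.linalg.svd(J0, compute_uv=False)
alpha, beta = svals.min(), svals.max()         # alpha <= sigma_i(J0) <= beta

mu = 0.8
eta = mu / (4 * beta**2)                       # enforce mu / eta = 4 * beta^2

# Transition matrix A of the linearized state equation (21).
A = np.block([
    [np.eye(m),      -eta * J0 @ J0.T],
    [mu * np.eye(m), (1 - mu) * np.eye(m)],
])
rho = np.max(np.abs(np.linalg.eigvals(A)))

print(rho, 1 - eta * alpha**2)  # Lemma A.2: rho(A) <= 1 - eta * alpha^2
```

For any draw of $J_0$, the printed spectral radius stays below the bound, as the lemma guarantees.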

A.3.1 PROOF OF THEOREM A.4

We prove the statements in the theorem by induction. The base case $t=0$ is trivial. Now assume that equations 25 to 28 hold for iterations $0,\ldots,t-1$. Our goal is to show that they hold for iteration $t$ as well.

Part I: First, we show that $\theta_t\in B_R(\theta_0)$. Note that by the triangle inequality and the induction assumption we have
$$\|\theta_t-\theta_0\|_2 \le \|\theta_t-\theta_{t-1}\|_2+\|\theta_{t-1}-\theta_0\|_2 \le \|\theta_t-\theta_{t-1}\|_2+\frac{R}{2}.$$
Hence, in order to prove the claim it remains to show that $\|\theta_t-\theta_{t-1}\|_2\le\frac{R}{2}$. For that, we compute
$$\frac{1}{\eta}\|\theta_t-\theta_{t-1}\|_2 = \|J^T(\theta_{t-1})d_{t-1}\|_2 \le \|J^T(\theta_{t-1})\widetilde d_{t-1}\|_2+\|J^T(\theta_{t-1})(d_{t-1}-\widetilde d_{t-1})\|_2$$
$$\le \|J_0^T\widetilde d_{t-1}\|_2+\|(J(\theta_{t-1})-J_0)^T\widetilde d_{t-1}\|_2+\|J^T(\theta_{t-1})(d_{t-1}-\widetilde d_{t-1})\|_2$$
$$\overset{(i)}{\le} \gamma\left\|\begin{bmatrix}J_0^T&0\\0&J_0^T\end{bmatrix}z_0\right\|_2+\epsilon\gamma\|z_0\|_2+\frac{4\beta^2\gamma^2\epsilon}{15\,e\ln\frac{16}{15}\,\alpha^2}\|z_0\|_2 \overset{(ii)}{\le} \gamma\beta^2\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2+\frac{3\beta^2\gamma^2\epsilon}{\alpha^2}\|z_0\|_2,$$
where $\gamma\le 5$ is the constant from Lemma A.3. Let us verify the last two inequalities. Inequality (ii) holds because $1\le\gamma$, $1\le\frac{\beta^2}{\alpha^2}$, and
$$\left\|\begin{bmatrix}J_0^T&0\\0&J_0^T\end{bmatrix}z_0\right\|_2 = \left\|\begin{bmatrix}V\Sigma^TW^T&0\\0&V\Sigma^TW^T\end{bmatrix}z_0\right\|_2 = \sqrt{\sum_i\sigma_i^2\left(\langle w_i,r_0\rangle^2+\langle w_i,d_0\rangle^2\right)} \le \beta^2\sqrt{\sum_i\frac{1}{\sigma_i^2}\left(\langle w_i,r_0\rangle^2+\langle w_i,d_0\rangle^2\right)} = \beta^2\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2. \qquad (29)$$
Inequality (i) follows from Assumptions 1-3, from $\|d_{t-1}-\widetilde d_{t-1}\|_2\le\|z_{t-1}-\widetilde z_{t-1}\|_2$ together with induction assumption 26, from $\|\widetilde d_{t-1}\|_2\le\|\widetilde z_{t-1}\|_2\le\gamma\|z_0\|_2$, and from
$$\|J_0^T\widetilde d_{t-1}\|_2 \le \left\|\begin{bmatrix}J_0^T\widetilde r_{t-1}\\J_0^T\widetilde d_{t-1}\end{bmatrix}\right\|_2 = \left\|\begin{bmatrix}I&-\eta J_0^TJ_0\\\mu I&(1-\mu)I\end{bmatrix}^{t-1}\begin{bmatrix}J_0^Tr_0\\J_0^Td_0\end{bmatrix}\right\|_2 \le \gamma\left(1-\eta\alpha^2\right)^{t-1}\left\|\begin{bmatrix}J_0^T&0\\0&J_0^T\end{bmatrix}z_0\right\|_2, \qquad (30)$$
where in the last inequality we applied Lemma A.3. Finally, by using $\eta\le\frac{1}{8\beta^2}$ we arrive at
$$\|\theta_t-\theta_{t-1}\|_2 \le \gamma\eta\beta^2\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2+\frac{3\eta\beta^2\gamma^2\epsilon}{\alpha^2}\|z_0\|_2 \le \frac{\gamma}{8}\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2+\frac{3\gamma^2\epsilon}{8\alpha^2}\|z_0\|_2 \le \frac{R}{2},$$
where the last step is directly due to the definition of $R$ in equation 24, $\gamma\le 5$, and $\alpha\le\beta$. Hence, we have established $\theta_t\in B_R(\theta_0)$.

Part II: In Lemma A.3 we showed that the time-invariant system of state equations $\widetilde z_{t+1}=A\widetilde z_t$ is uniformly exponentially stable, i.e. $\|\widetilde z_t\|_2$ decays to zero at a geometric rate.
Now, using the assumption that the Jacobian remains close to the Jacobian at initialization $J_0$, we aim to show exponential stability of the time-varying system of state equations 16. For that, we compute
$$z_t = A_{t-1}z_{t-1} = \begin{bmatrix}I&-\eta J_{t,t-1}J_{t-1}^T\\\mu I&(1-\mu)I\end{bmatrix}z_{t-1} = \begin{bmatrix}I&-\eta J_0J_0^T\\\mu I&(1-\mu)I\end{bmatrix}z_{t-1}+\eta\begin{bmatrix}\left(J_0J_0^T-J_{t,t-1}J_{t-1}^T\right)d_{t-1}\\0\end{bmatrix} =: Az_{t-1}+\Delta_{t-1}.$$
Now set $\lambda := 1-\eta\alpha^2$.

By induction, we obtain the relation
$$z_t = A^tz_0+\sum_{i=0}^{t-1}A^{t-1-i}\Delta_i.$$
Hence,
$$\|z_t\|_2 = \left\|A^tz_0+\sum_{i=0}^{t-1}A^{t-1-i}\Delta_i\right\|_2 \le \|A^tz_0\|_2+\sum_{i=0}^{t-1}\|A^{t-1-i}\Delta_i\|_2 \le \gamma\lambda^t\|z_0\|_2+\sum_{i=0}^{t-1}\gamma\lambda^{t-1-i}\eta\left\|\left(J_0J_0^T-J_{i+1,i}J_i^T\right)d_i\right\|_2 \le \gamma\lambda^t\|z_0\|_2+\sum_{i=0}^{t-1}\eta\gamma\lambda^{t-1-i}(2\beta\epsilon)\|z_i\|_2. \qquad (31)$$
The second inequality holds because of Lemma A.3. The last inequality holds because, combining Assumptions 1-3 with $\theta_t\in B_R(\theta_0)$ and the induction assumption 28 for $0\le i\le t-1$, we have
$$\left\|J_0J_0^T-J_{i+1,i}J_i^T\right\| = \left\|J_0J_0^T-J_0J_i^T+J_0J_i^T-J_{i+1,i}J_i^T\right\| \le \|J_0\|\,\|J_0-J_i\|+\|J_0-J_{i+1,i}\|\,\|J_i\| \le \beta\epsilon+\beta\epsilon = 2\beta\epsilon. \qquad (32)$$
In order to deal with inequality 31, we rely on the following lemma.

Lemma A.5 (Rugh, 1996, Lemma 24.5) Consider two real sequences $p(t)$ and $\phi(t)$, where $p(t)\ge 0$ for all $t\ge 0$ and
$$\phi(t) \le \begin{cases}\psi, & t=0,\\ \psi+\eta\sum_{i=0}^{t-1}p(i)\phi(i), & t\ge 1,\end{cases}$$
where $\eta$ and $\psi$ are constants with $\eta\ge 0$. Then for all $t\ge 1$ we have
$$\phi(t) \le \psi\prod_{i=0}^{t-1}\left(1+\eta\,p(i)\right).$$
Now we define $\phi_t = \frac{\|z_t\|_2}{\lambda^t}$ and rewrite inequality 31 as
$$\phi_t \le \gamma\phi_0+\sum_{i=0}^{t-1}\frac{2\eta\gamma\beta\epsilon}{\lambda}\phi_i.$$
Hence, Lemma A.5 yields
$$\phi_t \le \gamma\phi_0\prod_{i=0}^{t-1}\left(1+\frac{2\eta\gamma\beta\epsilon}{\lambda}\right) = \gamma\phi_0\left(1+\frac{2\eta\gamma\beta\epsilon}{\lambda}\right)^t \overset{(i)}{\le} \gamma\phi_0\left(1+\frac{\eta\alpha^2}{2\lambda}\right)^t \overset{(ii)}{=} \gamma\phi_0\left(\frac{1-\frac{\eta\alpha^2}{2}}{1-\eta\alpha^2}\right)^t,$$
where (i) follows from $4\gamma\beta\epsilon\le\alpha^2$ and (ii) holds by inserting $\lambda=1-\eta\alpha^2$. Inserting the definitions of $\phi_0$ and $\phi_t$, we obtain
$$\|z_t\|_2 \le \gamma\left(1-\frac{\eta\alpha^2}{2}\right)^t\|z_0\|_2.$$
This completes the proof of Part II.

Part III: In this part, our aim is to show that the error vector $e_t := z_t-\widetilde z_t$ obeys inequality 26. First, note that
$$e_t = z_t-\widetilde z_t \overset{(*)}{=} \left(Az_{t-1}+\Delta_{t-1}\right)-A\widetilde z_{t-1} = Ae_{t-1}+\Delta_{t-1},$$
where in $(*)$ we used the same notation as in Part II for $\Delta_{t-1}$. Using a recursive argument as well as $e_0=0$, we obtain
$$\|e_t\|_2 = \left\|\sum_{i=0}^{t-1}A^{t-1-i}\Delta_i\right\|_2 \le \sum_{i=0}^{t-1}\gamma\left(1-\eta\alpha^2\right)^{t-1-i}\|\Delta_i\|_2 \overset{(i)}{=} \sum_{i=0}^{t-1}\eta\gamma\left(1-\eta\alpha^2\right)^{t-1-i}\left\|\left(J_0J_0^T-J_{i+1,i}J_i^T\right)d_i\right\|_2 \overset{(ii)}{\le} \sum_{i=0}^{t-1}2\eta\beta\epsilon\gamma\left(1-\eta\alpha^2\right)^{t-1-i}\|z_i\|_2.$$
The first inequality follows from the triangle inequality and Lemma A.3. Equality (i) follows from the definition of $\Delta_i$.
Inequality (ii) follows from inequality 32. Setting $c := 2\eta\beta\epsilon$, we continue:
$$\|e_t\|_2 \le \sum_{i=0}^{t-1}c\gamma\left(1-\eta\alpha^2\right)^{t-i-1}\|z_i\|_2 \overset{(iii)}{\le} \sum_{i=0}^{t-1}c\gamma^2\left(1-\eta\alpha^2\right)^{t-i-1}\left(1-\frac{\eta\alpha^2}{2}\right)^i\|z_0\|_2 \overset{(iv)}{\le} \sum_{i=0}^{t-1}c\gamma^2\left(1-\frac{\eta\alpha^2}{2}\right)^{t-1}\|z_0\|_2 = 2\eta\gamma^2\beta\epsilon\cdot t\left(1-\frac{\eta\alpha^2}{2}\right)^{t-1}\|z_0\|_2.$$
Here (iii) holds because of our induction hypothesis 25 and (iv) follows simply from $1-\eta\alpha^2\le 1-\frac{\eta\alpha^2}{2}$. This shows the first part of equation 26 for iteration $t$. Finally, to derive the second part of equation 26, we observe that for all $t\ge 0$ and $0<x\le\frac{1}{16}$ we have $t(1-x)^{t-1}\le\frac{1}{15\,e\ln\frac{16}{15}}\cdot\frac{1}{x}$; applying this with $x=\frac{\eta\alpha^2}{2}\le\frac{1}{16}$ yields the claim.

Part IV: In this part, we aim to show that the parameters of the original and linearized problems are close. For that, we compute
$$\frac{1}{\eta}\|\theta_t-\widetilde\theta_t\|_2 = \left\|\sum_{i=0}^{t-1}\left(\nabla_\theta h(\theta_i,d_i)-\nabla_\theta h_{\mathrm{lin}}(\widetilde\theta_i,\widetilde d_i)\right)\right\|_2 = \left\|\sum_{i=0}^{t-1}\left(J^T(\theta_i)d_i-J_0^T\widetilde d_i\right)\right\|_2$$
$$\le \sum_{i=0}^{t-1}\left\|\left(J^T(\theta_i)-J_0^T\right)\widetilde d_i\right\|_2+\sum_{i=0}^{t-1}\left\|J^T(\theta_i)\left(d_i-\widetilde d_i\right)\right\|_2 \overset{(i)}{\le} \epsilon\sum_{i=0}^{t-1}\|\widetilde z_i\|_2+\beta\sum_{i=0}^{t-1}\|e_i\|_2 \overset{(ii)}{\le} \epsilon\gamma\sum_{i=0}^{t-1}\left(1-\eta\alpha^2\right)^i\|z_0\|_2+2\eta\gamma^2\beta^2\epsilon\sum_{i=0}^{t-1}i\left(1-\frac{\eta\alpha^2}{2}\right)^{i-1}\|z_0\|_2.$$
Here (i) follows from Assumptions 2 and 3, and (ii) holds because of Lemma A.3 and our induction hypothesis 26. Hence, using the formula $\sum_{i=0}^{t}ix^i = \frac{x\left(1+tx^{t+1}-(t+1)x^t\right)}{(x-1)^2}$, we obtain
$$\frac{1}{\eta}\|\theta_t-\widetilde\theta_t\|_2 \le \epsilon\gamma\|z_0\|_2\left(\frac{1-\left(1-\eta\alpha^2\right)^t}{\eta\alpha^2}+2\eta\beta^2\gamma\,\frac{1-t\left(1-\frac{\eta\alpha^2}{2}\right)^{t-1}+(t-1)\left(1-\frac{\eta\alpha^2}{2}\right)^t}{\left(\frac{\eta\alpha^2}{2}\right)^2}\right) \le \epsilon\gamma\|z_0\|_2\left(\frac{1}{\eta\alpha^2}+\frac{2\eta\beta^2\gamma}{\left(\frac{\eta\alpha^2}{2}\right)^2}\right) \overset{(iii)}{\le} \epsilon\gamma\|z_0\|_2\left(\frac{\beta^2\gamma}{\eta\alpha^4}+\frac{8\beta^2\gamma}{\eta\alpha^4}\right) = \frac{9\epsilon\beta^2\gamma^2}{\eta\alpha^4}\|z_0\|_2,$$
where (iii) holds due to $1\le\gamma$ and $1\le\frac{\beta^2}{\alpha^2}$. Hence, we have established inequality 27 for iteration $t$.

Part V: In this part, we prove equation 28 for iteration $t$. First, it follows from the triangle inequality that
$$\|\theta_t-\theta_0\|_2 \le \|\widetilde\theta_t-\theta_0\|_2+\|\theta_t-\widetilde\theta_t\|_2 \le \|\widetilde\theta_t-\theta_0\|_2+\frac{9\epsilon\beta^2\gamma^2}{\alpha^4}\|z_0\|_2,$$
where in the second inequality we used Part IV.
Now we bound $\|\widetilde\theta_t-\theta_0\|_2$ from above as follows:
$$\|\widetilde\theta_t-\theta_0\|_2 = \eta\left\|\sum_{i=0}^{t-1}J_0^T\widetilde d_i\right\|_2 \le \eta\sum_{i=0}^{t-1}\|J_0^T\widetilde d_i\|_2 \overset{(i)}{\le} \eta\gamma\sum_{i=0}^{t-1}\left(1-\eta\alpha^2\right)^i\left\|\begin{bmatrix}J_0^T&0\\0&J_0^T\end{bmatrix}z_0\right\|_2 = \eta\gamma\,\frac{1-\left(1-\eta\alpha^2\right)^t}{\eta\alpha^2}\left\|\begin{bmatrix}J_0^T&0\\0&J_0^T\end{bmatrix}z_0\right\|_2 \overset{(ii)}{\le} \gamma\frac{\beta^2}{\alpha^2}\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2,$$
where (i) holds by equation 30 and (ii) holds by equation 29. Hence, it follows from the definition of $R$ (equation 24) that
$$\|\theta_t-\theta_0\|_2 \le \gamma\frac{\beta^2}{\alpha^2}\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2+\frac{9\epsilon\beta^2\gamma^2}{\alpha^4}\|z_0\|_2 = \frac{R}{2}.$$
This completes the proof.
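The discrete Gronwall-type inequality of Lemma A.5, which converts the perturbed recursion of Part II into a product bound, can be checked numerically by driving the recursion at equality (the worst case permitted by the hypothesis) and comparing against the product bound. The sequences and constants below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
psi, eta = 1.0, 0.1
p = rng.uniform(0.0, 1.0, size=50)  # nonnegative sequence p(t)

# Worst case: phi satisfies the hypothesis of Lemma A.5 with equality.
phi = [psi]
for t in range(1, 51):
    phi.append(psi + eta * sum(p[i] * phi[i] for i in range(t)))

# Lemma A.5: phi(t) <= psi * prod_{i<t} (1 + eta * p(i)).
bounds = [psi * np.prod(1.0 + eta * p[:t]) for t in range(51)]
assert all(ph <= b + 1e-9 for ph, b in zip(phi, bounds))
print("Gronwall bound holds; it is attained by this worst-case phi.")
```

A short calculation (mirroring the lemma's proof) shows the bound is met with equality for this worst-case sequence, which the assertion confirms up to floating-point tolerance.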

A.4 PRELIMINARIES FOR PROOFS OF RESULTS WITH ONE-HIDDEN LAYER GENERATOR AND LINEAR DISCRIMINATOR

In this section, we gather some preliminary results that will be useful in proving the main results, i.e. Theorems 2.1 and 2.2. We begin by noting that Theorem 2.1 is an instance of Theorem A.4 with $f(W) = \frac{1}{n}\sum_{i=1}^nV\phi(Wz_i)$, which can be rewritten as
$$f(W) = V\begin{bmatrix}\frac{1}{n}\sum_{i=1}^n\phi(w_1^Tz_i)\\ \vdots\\ \frac{1}{n}\sum_{i=1}^n\phi(w_k^Tz_i)\end{bmatrix}.$$
Furthermore, the Jacobian of this mapping $f(W)$ takes the form
$$J(W) = \frac{1}{n}\sum_{i=1}^n\left(V\,\mathrm{diag}\left(\phi'(Wz_i)\right)\right)\otimes z_i^T.$$
To characterize the spectral properties of this Jacobian it is convenient to write down the expression for $J(W)J(W)^T$, which has the compact form
$$J(W)J(W)^T \overset{(i)}{=} \frac{1}{n^2}\sum_{i,j=1}^n\left(\left(V\,\mathrm{diag}(\phi'(Wz_i))\right)\otimes z_i^T\right)\left(\left(\mathrm{diag}(\phi'(Wz_j))V^T\right)\otimes z_j\right) \overset{(ii)}{=} \frac{1}{n^2}\sum_{i,j=1}^nV\,\mathrm{diag}(\phi'(Wz_i))\,\mathrm{diag}(\phi'(Wz_j))V^T\left(z_i^Tz_j\right)$$
$$= \frac{1}{n^2}V\,\mathrm{diag}_{\ell=1,\ldots,k}\left(\left\|\sum_{i=1}^nz_i\phi'(w_\ell^Tz_i)\right\|_2^2\right)V^T = \frac{1}{n^2}V D^2 V^T,$$
where $D$ is a diagonal matrix with entries $D_{\ell\ell} = \left\|\sum_{i=1}^nz_i\phi'(w_\ell^Tz_i)\right\|_2 = \left\|Z^T\phi'(Zw_\ell)\right\|_2$, and $Z\in\mathbb{R}^{n\times d}$ contains the $z_i$'s in its rows. Note that we used simple properties of the Kronecker product in (i) and (ii), namely $(A\otimes B)^T = A^T\otimes B^T$ and $(A\otimes B)(C\otimes D) = (AC)\otimes(BD)$.

The next lemma establishes concentration of the diagonal entries of the matrix $D^2$ around their mean, which will be used in later lemmas regarding the spectrum of the Jacobian mapping. The proof is deferred to Appendix B.1.

Lemma A.6 Suppose $w\in\mathbb{R}^d$ is a fixed vector, $z_1,z_2,\ldots,z_n\in\mathbb{R}^d$ are distributed as $\mathcal{N}(0,\sigma_z^2I_d)$ and constitute the rows of $Z\in\mathbb{R}^{n\times d}$. Then for any $0\le\delta\le\frac{3}{2}$, the random variable $D = \|Z^T\phi'(Zw)\|_2$ satisfies
$$(1-\delta)\,\mathbb{E}[D^2] \le D^2 \le (1+\delta)\,\mathbb{E}[D^2]$$
with probability at least $1-2\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right)$, where $c_1$ is a positive constant. Moreover, we have
$$\mathbb{E}[D^2] = \sigma_z^2\left(\frac{nd}{2}+\frac{n(n-1)}{2\pi}\right).$$
Furthermore, using the above equation we have
$$\mathbb{E}\left[J(W)J(W)^T\right] = \sigma_z^2\,\frac{d+\frac{n-1}{\pi}}{2n}\,VV^T.$$
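The compact form $J(W)J(W)^T = \frac{1}{n^2}VD^2V^T$ can be verified numerically against the Kronecker-product expression for $J(W)$ with $\phi=\mathrm{ReLU}$ (so $\phi'$ is the 0/1 step function). The small dimensions and seed below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
m, k, d, n = 3, 4, 5, 6
V = rng.standard_normal((m, k))
W = rng.standard_normal((k, d))
Z = rng.standard_normal((n, d))   # rows are the latent samples z_i

ind = (Z @ W.T > 0).astype(float)  # phi'(w_l^T z_i) for phi = ReLU, shape (n, k)

# J(W) = (1/n) sum_i (V diag(phi'(W z_i))) kron z_i^T, shape (m, k*d)
J = sum(np.kron(V @ np.diag(ind[i]), Z[i][None, :]) for i in range(n)) / n

# D_ll = ||Z^T phi'(Z w_l)||_2, so D^2 has diagonal entries ||Z^T ind[:, l]||_2^2
D2 = np.array([np.linalg.norm(Z.T @ ind[:, l]) ** 2 for l in range(k)])

assert np.allclose(J @ J.T, V @ np.diag(D2) @ V.T / n**2)
print("J J^T equals (1/n^2) V D^2 V^T")
```

The assertion holds exactly (up to floating point) because the identity is purely algebraic, independent of the random draws.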

A.5 LEMMAS REGARDING THE INITIAL MISFIT AND THE SPECTRUM OF THE JACOBIAN

In this section, we state some lemmas regarding the spectrum of the Jacobian mapping and the initial misfit; their proofs are deferred to Appendix B. First, we state a result on the minimum singular value of the Jacobian mapping at initialization.

Lemma A.7 (Minimum singular value of the Jacobian at initialization) Consider our GAN model with a linear discriminator and a one-hidden-layer generator of the form $z\mapsto V\phi(Wz)$, where we have $n$ independent data points $z_1,z_2,\ldots,z_n\in\mathbb{R}^d$ distributed as $\mathcal{N}(0,\sigma_z^2I_d)$ and aggregated as the rows of a matrix $Z\in\mathbb{R}^{n\times d}$, and $V\in\mathbb{R}^{m\times k}$ has i.i.d. $\mathcal{N}(0,\sigma_v^2)$ entries. We also assume that $W_0\in\mathbb{R}^{k\times d}$ has i.i.d. $\mathcal{N}(0,\sigma_w^2)$ entries and that all entries of $W_0$, $V$, and $Z$ are independent. Then the Jacobian matrix at the initialization point obeys
$$\sigma_{\min}(J(W_0)) \ge \left(\sqrt{(1-\delta)^2k-(1+\delta)^2}-\sqrt{m}(1+\eta)(1+\delta)\right)\sigma_v\sigma_z\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}, \qquad 0\le\delta\le\frac{3}{2},$$
with probability at least $1-3e^{-\frac{\eta^2m}{8}}-2k\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right)$, where $c_1$ is a positive constant.

The next lemma helps us bound the spectral norm of the Jacobian at initialization, which will be used later to derive upper bounds on the Jacobian at every point near the initialization.

Lemma A.8 (Spectral norm of the Jacobian at initialization) In the setup of the previous lemma, the operator norm of the Jacobian matrix at the initialization point $W_0\in\mathbb{R}^{k\times d}$ satisfies
$$\|J(W_0)\| \le (1+\delta)\,\sigma_v\sigma_z\left(\sqrt{k}+2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}, \qquad 0\le\delta\le\frac{3}{2},$$
with probability at least $1-e^{-\frac{m}{2}}-k\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right)$, with $c_1$ a positive constant.

The next lemma is adapted from Van Veen et al. (2018) and allows us to bound the variation of the Jacobian matrix around the initialization.

Lemma A.9 (Single-sample Jacobian perturbation) Let $V\in\mathbb{R}^{m\times k}$ be a matrix with i.i.d. $\mathcal{N}(0,\sigma_v^2)$ entries, let $W\in\mathbb{R}^{k\times d}$, and define the Jacobian mapping $J(W;z) = \left(V\,\mathrm{diag}(\phi'(Wz))\right)\otimes z^T$. Then, taking $W_0$ to be a random matrix with i.i.d. $\mathcal{N}(0,\sigma_w^2)$ entries, we have
$$\|J(W;z)-J(W_0;z)\| \le \sigma_v\|z\|_2\left(2\sqrt{m}+6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)}\right)$$
for all $W\in\mathbb{R}^{k\times d}$ obeying $\|W-W_0\|\le R$, with probability at least $1-e^{-\frac{m}{2}}-e^{-\frac{1}{6}\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}$.

Our final key lemma bounds the initial misfit $f(W_0)-y := \frac{1}{n}\sum_{i=1}^nV\phi(W_0z_i)-x$.

Lemma A.10 (Initial misfit) Consider our GAN model with a linear discriminator and a one-hidden-layer generator of the form $z\mapsto V\phi(Wz)$, where we have $n$ independent data points $z_1,z_2,\ldots,z_n\in\mathbb{R}^d$ distributed as $\mathcal{N}(0,\sigma_z^2I_d)$ and aggregated as the rows of a matrix $Z\in\mathbb{R}^{n\times d}$, and $V\in\mathbb{R}^{m\times k}$ has i.i.d. $\mathcal{N}(0,\sigma_v^2)$ entries. We also assume that the initial $W_0\in\mathbb{R}^{k\times d}$ has i.i.d. $\mathcal{N}(0,\sigma_w^2)$ entries. Then the event
$$\left\|\frac{1}{n}\sum_{i=1}^nV\phi(W_0z_i)-x\right\|_2 \le (1+\delta)\frac{1}{\sqrt{2\pi}}\sigma_v\sigma_w\sigma_z\sqrt{kdm}+\|x\|_2, \qquad 0\le\delta\le 3,$$
holds with probability at least $1-k\,e^{-c_2n(\delta/27)^2}-e^{-\frac{(\delta/9)^2m}{2}}-e^{-\frac{(\delta/3)^2kd}{2}}$, with $c_2$ a fixed constant.

A.6 PROOF OF THEOREM 2.1

In this section, we prove Theorem 2.1 by using our general meta-theorem, Theorem A.4. To do this we need to check that Assumptions 1-3 are satisfied with high probability. Specifically, in our case the parameter $\theta$ is the matrix $W$ and the nonlinear mapping $f$ is given by $f(W)=\frac{1}{n}\sum_{i=1}^nV\phi(Wz_i)$. We note that in our setting $d_0=0$ and thus $\|z_0\|_2=\|r_0\|_2$, which simplifies our analysis. To prove Assumption 1, note that by setting $\delta=\frac{1}{2}$ and $\eta=\frac{1}{3}$ in Lemma A.7, we have
$$\sigma_{\min}(J(W_0)) \ge \sigma_v\sigma_z\left(\frac{1}{2}\sqrt{k-9}-2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}} =: \alpha.$$

This holds with probability at least

This holds with probability at least
$$1-3e^{-\frac{m}{72}}-4k\,e^{-c\cdot n}-2k\,e^{-\frac{d}{216}},$$
concluding the proof of Assumption 1. Next, by setting $\delta=\frac{1}{2}$ in Lemma A.8 we have
$$\|J(W_0)\| \le \zeta := \frac{3}{2}\sigma_v\sigma_z\left(\sqrt{k}+2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}$$
with probability at least $1-e^{-\frac{m}{2}}-2k\,e^{-c\cdot n}-k\,e^{-\frac{d}{216}}$. Now, to bound the spectral norm of the Jacobian at any $W$ with $\|W-W_0\|\le R$ (the value of $R$ is defined in the proof of Assumption 3 below), we use the triangle inequality to get $\|J(W)\|\le\|J(W_0)\|+\|J(W)-J(W_0)\|$. This last inequality, together with Assumption 3 (which we prove below), yields
$$\|J(W)\| \le \|J(W_0)\|+\epsilon \le \|J(W_0)\|+\frac{\alpha^2}{4\gamma\beta} \le \|J(W_0)\|+\frac{\|J(W_0)\|^2}{4\beta}.$$
Therefore, by choosing $\beta=2\zeta$ we arrive at
$$\|J(W)\| \le \|J(W_0)\|+\frac{\|J(W_0)\|^2}{4\beta} = \|J(W_0)\|+\frac{\|J(W_0)\|^2}{8\zeta} \le \|J(W_0)\|+\frac{\|J(W_0)\|^2}{8\|J(W_0)\|} \le 2\|J(W_0)\| \le 2\zeta = \beta,$$
establishing that Assumption 2 holds with
$$\beta = 3\sigma_v\sigma_z\left(\sqrt{k}+2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}$$
with probability at least $1-e^{-\frac{m}{2}}-2k\,e^{-c\cdot n}-k\,e^{-\frac{d}{216}}$. Finally, to show that Assumption 3 holds, we use the single-sample Jacobian perturbation result of Lemma A.9 combined with the triangle inequality to conclude that
$$\|J(W)-J(W_0)\| \le \frac{1}{n}\sum_{i=1}^n\|J(W;z_i)-J(W_0;z_i)\| \le \frac{\sigma_v}{n}\sum_{i=1}^n\|z_i\|_2\left(2\sqrt{m}+6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)}\right)$$
$$\overset{(i)}{\le} \frac{5}{4}\sigma_v\sigma_z\sqrt{d}\left(2\sqrt{m}+6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)}\right), \qquad (35)$$
where (i) combines the Cauchy-Schwarz inequality $\frac{1}{n}\sum_{i=1}^n\|z_i\|_2\le\frac{\|Z\|_F}{\sqrt{n}}$ with the fact that for a Gaussian matrix $Z\in\mathbb{R}^{n\times d}$ with $\mathcal{N}(0,\sigma_z^2)$ entries,
$$\mathbb{P}\left\{\|Z\|_F\le\frac{5}{4}\sigma_z\sqrt{nd}\right\} \ge \mathbb{P}\left\{\|Z\|_F^2\le\frac{3}{2}\sigma_z^2nd\right\} \ge 1-e^{-\frac{nd}{24}}.$$
Now we set $\epsilon=\frac{\alpha^2}{4\gamma\beta}$ and show that Assumption 3 holds with this choice of $\epsilon$ and with a radius $R$ whose value is defined later in the proof.
First, note that
$$\epsilon = \frac{\alpha^2}{4\gamma\beta} = \frac{\sigma_v^2\sigma_z^2\left(\frac{1}{2}\sqrt{k-9}-2\sqrt{m}\right)^2\frac{d+\frac{n-1}{\pi}}{2n}}{12\gamma\,\sigma_v\sigma_z\left(\sqrt{k}+2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}} \overset{(i)}{\ge} \sigma_v\sigma_z\,\frac{\left(\frac{1}{8}\sqrt{k}\right)^2\sqrt{\frac{1}{4\pi}}}{60\cdot 3\sqrt{k}} \ge \frac{\sigma_v\sigma_z\sqrt{k}}{42000},$$
where (i) holds by assuming $k\ge C\cdot m$ with $C$ a large positive constant. Combining the last inequality with equation 35, we observe that a sufficient condition for Assumption 3 to hold is
$$\frac{5}{4}\sigma_v\sigma_z\sqrt{d}\left(2\sqrt{m}+6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)}\right) \le \frac{\sigma_v\sigma_z\sqrt{k}}{42000},$$
which is equivalent to
$$105000\sqrt{md}+52500\sqrt{d}\cdot 6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)} \le \sqrt{k}.$$
Now, the first term on the left-hand side is upper bounded by $\frac{1}{2}\sqrt{k}$ if $k\ge(210000)^2md$, and for the second term we need
$$105000\sqrt{d}\cdot 6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)} \le \sqrt{k},$$
which, defining $x = 3d\left(\frac{2R\sqrt{k}}{\sigma_w}\right)^{\frac{2}{3}}$, is equivalent to $x\log\frac{d}{x}\le\frac{1}{2\cdot 105000^2}$. This last inequality holds for $x\le\frac{c}{\log d}$ with $c<1$ a sufficiently small positive constant, which translates into
$$R \le c\,\frac{\sigma_w\sqrt{k}}{(d\log d)^{\frac{3}{2}}}. \qquad (36)$$
So far we have shown that Assumption 3 holds with $\epsilon=\frac{\alpha^2}{4\gamma\beta}$ and with radius $\bar R := c\,\frac{\sigma_w\sqrt{k}}{(d\log d)^{3/2}}$, and hence for every radius up to $\bar R$. It therefore remains to use the definition of $R$ in equation 24 to show that $R\le\bar R$:
$$\frac{R}{2} = \gamma\frac{\beta^2}{\alpha^2}\left\|\begin{bmatrix}J_0^\dagger&0\\0&J_0^\dagger\end{bmatrix}z_0\right\|_2+9\epsilon\frac{\beta^2\gamma^2}{\alpha^4}\|z_0\|_2 \overset{(i)}{\le} \gamma\frac{\beta^2}{\alpha^3}\|r_0\|_2+9\cdot\frac{\alpha^2}{4\gamma\beta}\cdot\frac{\beta^2\gamma^2}{\alpha^4}\|r_0\|_2 = \gamma\|r_0\|_2\left(\frac{\beta^2}{\alpha^3}+\frac{9\beta}{4\alpha^2}\right) \overset{(ii)}{\le} 20\frac{\beta^2}{\alpha^3}\|r_0\|_2$$
$$= 20\,\frac{\left(3\sigma_v\sigma_z(\sqrt{k}+2\sqrt{m})\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}\right)^2}{\left(\sigma_v\sigma_z\left(\frac{1}{2}\sqrt{k-9}-2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}\right)^3}\|r_0\|_2 \overset{(iii)}{\le} C\cdot\frac{1}{\sigma_v\sigma_z\sqrt{k}}\left(\frac{2}{3}\sigma_v\sigma_w\sigma_z\sqrt{k\,d\,m}+\|x\|_2\right),$$
where (i) holds because $\|J_0^\dagger\|\le\frac{1}{\alpha}$ and $4\gamma\beta\epsilon=\alpha^2$, (ii) holds as $1\le\frac{\beta}{\alpha}$ and by substituting $\gamma=5$ from Lemma A.3, and (iii) follows from $k\ge C\cdot m$ and from using $\delta=\frac{1}{3}$ in Lemma A.10. Now, a sufficient condition for $R\le\bar R$ (i.e. for equation 36) to hold is that
$$\frac{1}{\sigma_v\sigma_z\sqrt{k}}\left(\frac{2}{3}\sigma_v\sigma_w\sigma_z\sqrt{k\,d\,m}+\|x\|_2\right) \le c\,\frac{\sigma_w\sqrt{k}}{(d\log d)^{\frac{3}{2}}},$$
which is equivalent to
$$\frac{2}{3}\sigma_v\sigma_w\sigma_z(d\log d)^{\frac{3}{2}}\sqrt{k\,d\,m}+(d\log d)^{\frac{3}{2}}\|x\|_2 \le c\cdot k\,\sigma_v\sigma_w\sigma_z.$$
This shows that Assumption 3 holds with probability at least
$$1-ne^{-\frac{m}{2}}-ne^{-c\,md^3\log(d)^2}-k\,e^{-c\cdot n},$$
completing the proof of Theorem 2.1.

A.7 PROOF OF THEOREM 2.2

Consider a nonlinear least-squares optimization problem of the form $\min_{\theta\in\mathbb{R}^p}\frac{1}{2}\|f(\theta)-y\|_2^2$ with $f:\mathbb{R}^p\to\mathbb{R}^m$ and $y\in\mathbb{R}^m$. Suppose the Jacobian mapping associated with $f$ satisfies the following three assumptions.

Assumption 1 We assume $\sigma_{\min}(J(\theta_0))\ge 2\alpha$ for a fixed point $\theta_0\in\mathbb{R}^p$.

Assumption 2 Let $\|\cdot\|$ be a norm dominated by the Frobenius norm, i.e. $\|\theta\|\le\|\theta\|_F$ holds for all $\theta\in\mathbb{R}^p$. Fix a point $\theta_0$ and a number $R>0$. For any $\theta$ satisfying $\|\theta-\theta_0\|\le R$, we have $\|J(\theta)-J(\theta_0)\|\le\frac{\alpha}{3}$.

Assumption 3 We assume that for all $\theta\in\mathbb{R}^p$ obeying $\|\theta-\theta_0\|\le R$, we have $\|J(\theta)\|\le\beta$.

With these assumptions in place, we are now ready to state the following result from Oymak & Soltanolkotabi (2020):

Theorem A.11 Given $\theta_0\in\mathbb{R}^p$, suppose Assumptions 1, 2, and 3 hold with
$$R = \frac{3\|f(\theta_0)-y\|_2}{\alpha}. \qquad (37)$$
Then, using a learning rate $\eta\le\frac{1}{3\beta^2}$, all gradient descent updates obey
$$\|f(\theta_\tau)-y\|_2 \le \left(1-\eta\alpha^2\right)^\tau\|f(\theta_0)-y\|_2.$$

We are going to apply this theorem in our case, where the parameter is $W$, the nonlinear mapping is $f(W)=\frac{1}{n}\sum_{i=1}^nV\phi(Wz_i)$ with $\phi=\mathrm{ReLU}$, and the norm $\|\cdot\|$ set to the operator norm. Similar to the previous part, by using Lemma A.7 we conclude that with probability at least $1-3e^{-\frac{m}{72}}-4k\,e^{-c\cdot n}-2k\,e^{-\frac{d}{216}}$, Assumption 1 is satisfied with
$$2\alpha := \sigma_v\sigma_z\left(\frac{1}{2}\sqrt{k-9}-2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}.$$
Next, we show that Assumption 2 is valid for $\alpha$ as defined in the previous line and for a radius $\bar R$ defined below. First, note that $\frac{\alpha}{3}\ge c\cdot\sigma_v\sigma_z\sqrt{k}$, where the inequality holds by assuming $k\ge C\cdot m$ with $C$ being a sufficiently large positive constant. Now, by using equation 35, Assumption 2 holds if
$$\frac{5}{4}\sigma_v\sigma_z\sqrt{d}\left(2\sqrt{m}+6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)}\right) \le c\cdot\sigma_v\sigma_z\sqrt{k},$$
which is equivalent to
$$C\sqrt{md}+C\sqrt{d}\cdot 6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)} \le \sqrt{k}.$$
The first term on the left-hand side of the inequality above is upper bounded by $\frac{1}{2}\sqrt{k}$ if $k\ge C\cdot md$.
For upper bounding the second term it is sufficient to show that
$$C\sqrt{d}\cdot 6\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}\sqrt{\log\left(\frac{k}{3\left(\frac{2kR}{\sigma_w}\right)^{\frac{2}{3}}}\right)} \le \sqrt{k},$$
which, defining $x = 3d\left(\frac{2R\sqrt{k}}{\sigma_w}\right)^{\frac{2}{3}}$, is equivalent to $x\log\frac{d}{x}\le C$. This holds provided that $R\le c\,\frac{\sigma_w\sqrt{k}}{(d\log d)^{3/2}}$. Hence, up to this point we have shown that Assumption 2 holds with radius $\bar R := c\,\frac{\sigma_w\sqrt{k}}{(d\log d)^{3/2}}$, and this implies that it holds for all values of $R$ less than $\bar R$. Therefore, we work with the definition of $R$ in equation 37 to show that $R\le\bar R$ as follows:
$$R = \frac{3\|f(\theta_0)-y\|_2}{\alpha} \overset{(i)}{\le} \frac{3}{\alpha}\left(\frac{2}{3}\sigma_v\sigma_w\sigma_z\sqrt{k\,d\,m}+\|x\|_2\right) = \frac{4\sigma_v\sigma_w\sigma_z\sqrt{k\,m\,d}+6\|x\|_2}{\sigma_v\sigma_z\left(\frac{1}{2}\sqrt{k-9}-2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}},$$
where in (i) we used Lemma A.10 with $\delta=\frac{1}{3}$. Hence, for showing $R\le\bar R$ it suffices to show that
$$\frac{4\sigma_v\sigma_w\sigma_z\sqrt{k\,m\,d}+6\|x\|_2}{\sigma_v\sigma_z\left(\frac{1}{2}\sqrt{k-9}-2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}} \le c\,\frac{\sigma_w\sqrt{k}}{(d\log d)^{3/2}},$$
which, by assuming $k\ge C\cdot m$, simplifies to
$$\sigma_v\sigma_w\sigma_z(d\log d)^{\frac{3}{2}}\sqrt{k\,d\,m}+(d\log d)^{\frac{3}{2}}\|x\|_2 \le C\cdot k\,\sigma_v\sigma_w\sigma_z.$$
This last inequality holds if $k\ge C\cdot md^4\log(d)^3$ and by setting $\sigma_v\sigma_w\sigma_z\ge c\,\frac{(d\log d)^{3/2}\|x\|_2}{k}$. Therefore, Assumption 2 holds for the radius $R$ defined in equation 37 with probability at least
$$1-ne^{-\frac{m}{2}}-ne^{-c\,md^3\log(d)^2}-k\,e^{-c\cdot n}-e^{-\frac{m}{1500}}-e^{-\frac{kd}{162}}.$$
Finally, to show that Assumption 3 holds, we note that for all $W$ satisfying $\|W-W_0\|\le R$, with $R$ as defined in equation 37, it holds that
$$\|J(W)\| \le \|J(W_0)\|+\|J(W)-J(W_0)\| \le \|J(W_0)\|+\frac{\alpha}{3} \le \|J(W_0)\|+\frac{\sigma_{\min}(J(W_0))}{6} \le 2\|J(W_0)\| \le 3\sigma_v\sigma_z\left(\sqrt{k}+2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}},$$
where the last inequality holds by using Lemma A.8, hence establishing that Assumption 3 holds with $\beta=3\sigma_v\sigma_z\left(\sqrt{k}+2\sqrt{m}\right)\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}$ with probability at least $1-e^{-\frac{m}{2}}-2k\,e^{-c\cdot n}-k\,e^{-\frac{d}{216}}$, finishing the proof of Theorem 2.2.

B PROOFS OF THE AUXILIARY LEMMAS

In this section, we first provide a proof of Lemma A.6 and then go over the proofs of the key lemmas stated in Section A.5.

B.1 PROOF OF LEMMA A.6

Recall that
$$J(W)J(W)^T = \frac{1}{n^2}\sum_{i,j=1}^nV\,\mathrm{diag}(\phi'(Wz_i))\,\mathrm{diag}(\phi'(Wz_j))V^T\left(z_i^Tz_j\right) = \frac{1}{n^2}V\,\mathrm{diag}_{\ell=1,\ldots,k}\left(\left\|\sum_{i=1}^nz_i\phi'(w_\ell^Tz_i)\right\|_2^2\right)V^T = \frac{1}{n^2}VD^2V^T,$$
where $D$ is a diagonal matrix with entries $D_{\ell\ell} = \left\|\sum_{i=1}^nz_i\phi'(w_\ell^Tz_i)\right\|_2 = \|Z^T\phi'(Zw_\ell)\|_2$, and $Z\in\mathbb{R}^{n\times d}$ contains the $z_i$'s in its rows. In order to proceed, we analyze the entries of the diagonal matrix $D^2$. We observe that
$$\left\|Z^T\phi'(Zw)\right\|_2^2 = \underbrace{\left\|\left(I-\frac{ww^T}{\|w\|^2}\right)Z^T\phi'(Zw)\right\|_2^2}_{A}+\underbrace{\left\|\frac{ww^T}{\|w\|^2}Z^T\phi'(Zw)\right\|_2^2}_{B}.$$
First, we compute the expectation of $A$. We observe that
$$A = \left\|\sum_{i=1}^n\left(I-\frac{ww^T}{\|w\|^2}\right)z_i\,\phi'(w^Tz_i)\right\|_2^2.$$
Conditioned on $w$, $\left(I-\frac{ww^T}{\|w\|^2}\right)z_i$ is distributed as $\mathcal{N}\left(0,\sigma_z^2\left(I-\frac{ww^T}{\|w\|^2}\right)\right)$ and $w^Tz_i$ has distribution $\mathcal{N}\left(0,\sigma_z^2\|w\|^2\right)$. Moreover, these two random variables are independent, because $w$ is in the null space of $I-\frac{ww^T}{\|w\|^2}$. Since the cross terms vanish by this independence and $\mathbb{E}[\phi'(w^Tz_i)^2]=\frac{1}{2}$, this observation yields
$$\mathbb{E}[A] = \sum_{i,j=1}^n\mathbb{E}\left[\left\langle\left(I-\tfrac{ww^T}{\|w\|^2}\right)z_i,\left(I-\tfrac{ww^T}{\|w\|^2}\right)z_j\right\rangle\phi'(w^Tz_i)\phi'(w^Tz_j)\right] = \sum_{i=1}^n\mathbb{E}\left[\left\|\left(I-\tfrac{ww^T}{\|w\|^2}\right)z_i\right\|_2^2\right]\cdot\frac{1}{2} = \frac{n(d-1)}{2}\sigma_z^2.$$
Moreover, a standard concentration argument shows that $(1-\delta)\mathbb{E}[A]\le A\le(1+\delta)\mathbb{E}[A]$ holds with probability at least $1-2e^{-\frac{n\delta^2}{18}}-2e^{-\frac{d\delta^2}{54}}$. In order to analyze $B$, we first note that
$$B = \left\|\frac{ww^T}{\|w\|^2}Z^T\phi'(Zw)\right\|_2^2 = \left(\frac{w^T}{\|w\|}Z^T\phi'(Zw)\right)^2 = \left\langle Z\frac{w}{\|w\|},\phi'(Zw)\right\rangle^2 = \langle g,\phi'(\|w\|g)\rangle^2 = \left(\sum_{i=1}^ng_i\,\mathbb{1}_{\{g_i\ge 0\}}\right)^2 = \left(\sum_{i=1}^n\mathrm{ReLU}(g_i)\right)^2,$$
where $g_i = \frac{z_i^Tw}{\|w\|}\sim\mathcal{N}(0,\sigma_z^2)$. It follows that
$$\mathbb{E}[B] = \sum_{i=1}^n\mathbb{E}\left[\mathrm{ReLU}^2(g_i)\right]+\sum_{i\ne j}\mathbb{E}\left[\mathrm{ReLU}(g_i)\right]\mathbb{E}\left[\mathrm{ReLU}(g_j)\right] = \sigma_z^2\left(\frac{n}{2}+\frac{n(n-1)}{2\pi}\right),$$
which results in
$$\mathbb{E}\left[D_{\ell\ell}^2\right] = \mathbb{E}[A]+\mathbb{E}[B] = \sigma_z^2\left(\frac{nd}{2}+\frac{n(n-1)}{2\pi}\right), \qquad 1\le\ell\le k.$$
Next, in order to show that $B$ concentrates around its mean, we note that $\mathrm{ReLU}(g_i)$ is a sub-Gaussian random variable with $\psi_2$-norm $C\sigma_z$, where $C$ is a fixed constant. Therefore $X = \sum_{i=1}^n\mathrm{ReLU}(g_i)$ is sub-Gaussian with $\psi_2$-norm $C\sqrt{n}\sigma_z$. By the sub-exponential tail bound for $X^2-\mathbb{E}[X^2]$ we obtain
$$\mathbb{P}\left\{|B-\mathbb{E}[B]|\ge t\right\} \le 2e^{-c'\frac{t}{n\sigma_z^2}}.$$
Finally, by putting these results together and using the union bound, we have
$$\mathbb{P}\left\{\left|D_{\ell\ell}^2-\mathbb{E}[D_{\ell\ell}^2]\right|\ge\delta\,\mathbb{E}[D_{\ell\ell}^2]\right\} \le 2\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right), \qquad 0\le\delta\le\frac{3}{2},$$
finishing the proof of Lemma A.6.
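The closed-form mean $\mathbb{E}[D^2]=\sigma_z^2\left(\frac{nd}{2}+\frac{n(n-1)}{2\pi}\right)$ from Lemma A.6 can be sanity-checked by Monte Carlo simulation. Below, $\sigma_z=1$ and the dimensions, trial count, and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, trials = 20, 10, 20000
w = rng.standard_normal(d)                       # fixed direction w

Zs = rng.standard_normal((trials, n, d))         # trials x (n x d) Gaussian Z
ind = (np.einsum('tnd,d->tn', Zs, w) >= 0).astype(float)  # phi'(Z w), entrywise
S = np.einsum('tnd,tn->td', Zs, ind)             # Z^T phi'(Z w) for each trial
D2 = (S ** 2).sum(axis=1)                        # D^2 = ||Z^T phi'(Z w)||_2^2

theory = n * d / 2 + n * (n - 1) / (2 * np.pi)   # Lemma A.6 with sigma_z = 1
print(D2.mean(), theory)
```

With 20000 trials the empirical mean lands within a fraction of a percent of the predicted value, well inside the concentration regime the lemma describes.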

B.2 PROOF OF LEMMA A.7

Our main tool for bounding the minimum singular value of the Jacobian mapping is a lemma from Soltanolkotabi (2019) (equation 41), which lower bounds the minimum singular value of a product $VD$, with $V$ Gaussian and $D$ diagonal, in terms of $\|d\|_2$ and $\|d\|_\infty$, where $d$ denotes the vector of diagonal entries of $D$. To apply it, we control $\|d\|_\infty$ and $\|d\|_2$ using Lemma A.6. On the one hand,
$$\mathbb{P}\left\{\|d\|_\infty\ge(1+\delta)\sqrt{\mathbb{E}[d_i^2]}\right\} \le k\,\mathbb{P}\left\{d_i^2\ge(1+\delta)^2\mathbb{E}[d_i^2]\right\} \le k\,\mathbb{P}\left\{d_i^2\ge(1+\delta)\mathbb{E}[d_i^2]\right\} \le k\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right), \qquad (42)$$
as well as
$$\mathbb{P}\left\{\|d\|_2\le(1-\delta)\sqrt{k}\sqrt{\mathbb{E}[d_i^2]}\right\} \le \mathbb{P}\left\{\exists i:\ d_i^2\le(1-\delta)^2\mathbb{E}[d_i^2]\right\} \le k\,\mathbb{P}\left\{d_i^2\le(1-\delta)\mathbb{E}[d_i^2]\right\} \le k\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right). \qquad (43)$$
Finally, by replacing $\eta$ with $\eta\|d\|_\infty\sqrt{m}$ in equation 41, combined with equations 42 and 43, for a random $W_0$ with i.i.d. $\mathcal{N}(0,\sigma_w^2)$ entries we have
$$\sigma_{\min}(J(W_0)) = \frac{1}{n}\sigma_{\min}(VD) \ge \frac{\sigma_v}{n}\left(\sqrt{(1-\delta)^2k-(1+\delta)^2}-\sqrt{m}(1+\eta)(1+\delta)\right)\sqrt{\mathbb{E}[d_i^2]}$$
$$= \left(\sqrt{(1-\delta)^2k-(1+\delta)^2}-\sqrt{m}(1+\eta)(1+\delta)\right)\sigma_v\sigma_z\sqrt{\frac{d+\frac{n-1}{\pi}}{2n}}, \qquad 0\le\delta\le\frac{3}{2},$$
with probability at least $1-3e^{-\frac{\eta^2m}{8}}-2k\left(e^{-\frac{n\delta^2}{18}}+e^{-\frac{d\delta^2}{54}}+e^{-c_1n\delta}\right)$. This completes the proof of Lemma A.7.

B.3 PROOF OF LEMMA A.8

Recall that
$$
J(W)J(W)^T \;=\; \frac{1}{n^2}\, V\,\underset{\ell=1,\dots,k}{\mathrm{diag}}\Big(\Big\|\sum_{i=1}^{n} z_i\,\phi'(w_\ell^T z_i)\Big\|_2^2\Big)\, V^T \;=\; \frac{1}{n^2}\, V D^2 V^T,
$$
which implies that $\|J(W_0)\| = \frac{1}{n}\|V D\| \le \frac{1}{n}\|V\|\,\|D\|$. For a matrix $V \in \mathbb{R}^{m\times k}$ with i.i.d. $N(0,\sigma_v^2)$ entries, the event $\|V\| \le \sigma_v\big(\sqrt{k} + 2\sqrt{m}\big)$ holds with probability at least $1 - e^{-\frac{m}{2}}$. Regarding the matrix $D$, by repeating the argument of equation 42 the event
$$
\|D\| \;=\; \max_{1\le i\le k} D_{ii} \;\le\; (1+\delta)\sqrt{E[D_{ii}^2]} \;=\; (1+\delta)\,\sigma_z\sqrt{\frac{nd}{2} + \frac{n(n-1)}{2\pi}}, \qquad 0 \le \delta \le \frac{3}{2},
$$
holds with probability at least $1 - k\Big(e^{-\frac{n\delta^2}{18}} + e^{-\frac{d\delta^2}{54}} + e^{-c_1 n\delta}\Big)$. Putting these together yields that the event
$$
\|J(W_0)\| \;\le\; (1+\delta)\,\sigma_v\sigma_z\big(\sqrt{k} + 2\sqrt{m}\big)\sqrt{\frac{d + \frac{n-1}{\pi}}{2n}}, \qquad 0 \le \delta \le \frac{3}{2},
$$
holds with probability at least $1 - e^{-\frac{m}{2}} - k\Big(e^{-\frac{n\delta^2}{18}} + e^{-\frac{d\delta^2}{54}} + e^{-c_1 n\delta}\Big)$, finishing the proof of Lemma A.8.
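The step $\|J(W_0)\| \le \frac{1}{n}\|V\|\,\|D\|$ is just submultiplicativity of the spectral norm, with $\|D\| = \max_i D_{ii}$ for a diagonal $D$. A minimal pure-Python check (ours, illustrative only; spectral norms estimated by power iteration) is:

```python
import math
import random

def transpose(A):
    return [list(col) for col in zip(*A)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def spectral_norm(A, iters=200):
    # Largest singular value of A via power iteration on A^T A.
    At = transpose(A)
    x = [1.0] * len(A[0])
    lam = 1.0
    for _ in range(iters):
        y = matvec(At, matvec(A, x))
        lam = math.sqrt(sum(v * v for v in y))  # converges to sigma_max^2
        x = [v / lam for v in y]
    return math.sqrt(lam)

rng = random.Random(1)
m, k = 5, 8
V = [[rng.gauss(0.0, 1.0) for _ in range(k)] for _ in range(m)]
dvals = [abs(rng.gauss(0.0, 1.0)) + 0.1 for _ in range(k)]  # diagonal of D
VD = [[V[i][j] * dvals[j] for j in range(k)] for i in range(m)]

lhs = spectral_norm(VD)              # ||V D||
rhs = spectral_norm(V) * max(dvals)  # ||V|| * ||D||, with ||D|| = max_i D_ii
```

For any draw of `V` and `dvals`, `lhs` never exceeds `rhs` (up to power-iteration tolerance).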

B.4 PROOF OF LEMMA A.10

First, note that if $W$ has i.i.d. $N(0,\sigma_w^2)$ entries and $V$, $W$, $Z$ are all independent, then $\|f(W)\|_2 = \frac{1}{n}\big\|V\phi(WZ^T)\mathbf{1}_{n\times 1}\big\|_2$ has the same distribution as $\|v\|_2\|a\|_2$, where $v \sim N(0,\sigma_v^2 I_m)$ and $a = \frac{1}{n}\phi(WZ^T)\mathbf{1}_{n\times 1}$ has independent sub-Gaussian entries, so its $\ell_2$-norm is concentrated. Note that conditioned on $W$, $a_i = \frac{1}{n}\sum_{j=1}^{n}\mathrm{ReLU}(z_j^T w_i)$ is sub-Gaussian with $\|a_i\|_{\psi_2} = \frac{C\|w_i\|_2\sigma_z}{\sqrt{n}}$, and it is concentrated around $E a_i = \frac{1}{\sqrt{2\pi}}\|w_i\|_2\sigma_z$. This gives
$$
P\big(a_i \le (1+\delta)E a_i\big) \;\ge\; 1 - e^{-\frac{c\delta^2 (E a_i)^2}{\|a_i\|_{\psi_2}^2}} \;=\; 1 - e^{-cn\delta^2},
$$
which implies that
$$
P\big(a_i^2 \le (1+3\delta)(E a_i)^2\big) \;\ge\; P\big(a_i^2 \le (1+\delta)^2 (E a_i)^2\big) \;\ge\; 1 - e^{-cn\delta^2}, \qquad 0 \le \delta \le 1.
$$
Due to the union bound we get
$$
P\Big(\|a\|_2^2 \ge (1+\delta)\sum_{i=1}^{k}(E a_i)^2\Big) \;\le\; \sum_{i=1}^{k} P\big(a_i^2 \ge (1+\delta)(E a_i)^2\big) \;\le\; k\, e^{-cn(\delta/3)^2}, \qquad 0 \le \delta \le 3.
$$
By substituting $\sum_{i=1}^{k}(E a_i)^2 = \frac{1}{2\pi}\sigma_z^2\|W\|_F^2$, this shows
$$
P\Big(\|a\|_2 \le (1+\delta)\tfrac{1}{\sqrt{2\pi}}\sigma_z\|W\|_F\Big) \;\ge\; P\Big(\|a\|_2^2 \le (1+\delta)\tfrac{1}{2\pi}\sigma_z^2\|W\|_F^2\Big) \;\ge\; 1 - k\,e^{-cn(\delta/3)^2}, \qquad 0 \le \delta \le 3.
$$
We also have the following result for $v \sim N(0,\sigma_v^2 I_m)$:
$$
P\big(\|v\|_2 \le (1+\delta)\sigma_v\sqrt{m}\big) \;\ge\; 1 - e^{-\frac{\delta^2 m}{2}}.
$$
By combining the above results we obtain
$$
P\Big(\|a\|_2\|v\|_2 \le (1+\delta)\tfrac{1}{\sqrt{2\pi}}\sigma_v\sigma_z\sqrt{m}\,\|W\|_F\Big) \;\ge\; P\Big(\|a\|_2\|v\|_2 \le (1+\delta/3)^2\tfrac{1}{\sqrt{2\pi}}\sigma_v\sigma_z\sqrt{m}\,\|W\|_F\Big) \;\ge\; 1 - k\,e^{-cn(\delta/9)^2} - e^{-\frac{(\delta/3)^2 m}{2}}, \qquad 0 \le \delta \le 3.
$$
Furthermore, we can bound $\|W\|_F$ by the tail inequality
$$
P\big(\|W\|_F \le (1+\delta)\sigma_w\sqrt{kd}\big) \;\ge\; 1 - e^{-\frac{\delta^2 kd}{2}}.
$$
Hence, by combining the last two results we have
$$
P\Big(\|a\|_2\|v\|_2 \le (1+\delta)\tfrac{1}{\sqrt{2\pi}}\sigma_v\sigma_z\sigma_w\sqrt{k\,d\,m}\Big) \;\ge\; P\Big(\|a\|_2\|v\|_2 \le (1+\delta/3)^2\tfrac{1}{\sqrt{2\pi}}\sigma_v\sigma_z\sigma_w\sqrt{k\,d\,m}\Big) \;\ge\; 1 - k\,e^{-cn(\delta/27)^2} - e^{-\frac{(\delta/9)^2 m}{2}} - e^{-\frac{(\delta/3)^2 kd}{2}}, \qquad 0 \le \delta \le 3.
$$
Therefore, due to the triangle inequality the event
$$
\|f(W_0) - x\|_2 \;\le\; (1+\delta)\tfrac{1}{\sqrt{2\pi}}\sigma_v\sigma_w\sigma_z\sqrt{k\,d\,m} + \|x\|_2, \qquad 0 \le \delta \le 3,
$$
holds with probability at least $1 - k\,e^{-c_2 n(\delta/27)^2} - e^{-\frac{(\delta/9)^2 m}{2}} - e^{-\frac{(\delta/3)^2 kd}{2}}$ for some positive constant $c_2$, completing the proof of Lemma A.10.
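The mean $E a_i = \frac{1}{\sqrt{2\pi}}\|w_i\|_2\sigma_z$ used above reduces to the scalar identity $E[\mathrm{ReLU}(g)] = \sigma/\sqrt{2\pi}$ for $g \sim N(0,\sigma^2)$, which is easy to confirm by simulation (a sketch of ours, not from the paper's code):

```python
import math
import random

def mc_relu_mean(sigma, trials, seed=0):
    # Monte Carlo estimate of E[ReLU(g)] for g ~ N(0, sigma^2).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        g = rng.gauss(0.0, sigma)
        if g > 0:
            total += g
    return total / trials

sigma = 2.0
predicted = sigma / math.sqrt(2 * math.pi)  # half-normal mean identity
estimate = mc_relu_mean(sigma, trials=200_000)
```

With 200,000 samples the relative error of the estimate is well below one percent, matching $\sigma/\sqrt{2\pi} \approx 0.3989\,\sigma$.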

C ADDITIONAL EXPERIMENTS

Effect of single-component overparameterization: In Section 3 of the main paper, we performed experiments in the setting where the sizes of the generator and discriminator are held roughly the same (both discriminator and generator use the same value of k). In this part, we analyze single-component overparameterization, where we study the effect of overparameterization when one of the components (generator / discriminator) has varying k, while the other component uses the standard value of k (64 for DCGAN and 128 for Resnet GAN). The FID variation under single-component overparameterization is shown in Fig. 7. We observe similar trends as in the previous case, where increasing overparameterization leads to improved FID scores. Interestingly, increasing the value of k beyond the default value used in the other component leads to a slight drop in performance. Hence, choosing comparable sizes for the discriminator and generator models is recommended.

D EXPERIMENTAL DETAILS

The model architectures we use in the experiments are shown in Figure 8. In both DCGAN and Resnet-based GANs, the parameter k controls the number of convolutional filters in each layer; the larger the value of k, the more overparameterized the model. Optimization: Both DCGAN and Resnet-based GAN models are optimized using commonly used hyper-parameters: Adam with learning rate 0.0001 and betas (0.5, 0.999) for DCGAN; a gradient penalty of 10 and 5 critic iterations per generator iteration for both DCGAN and Resnet-based GAN models. Models are trained for 300,000 iterations with a batch size of 64.
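For reference, the hyper-parameters above can be gathered in a single configuration; the following sketch uses our own illustrative key names, not identifiers from any released implementation:

```python
# Training hyper-parameters stated in the text, collected in one place.
# Key names are illustrative only, not taken from any released code.
TRAIN_CONFIG = {
    "optimizer": "adam",
    "learning_rate": 1e-4,        # Adam learning rate (DCGAN)
    "adam_betas": (0.5, 0.999),   # Adam (beta1, beta2)
    "gradient_penalty": 10.0,     # penalty coefficient
    "critic_iters_per_gen": 5,    # discriminator steps per generator step
    "total_iterations": 300_000,
    "batch_size": 64,
}

def total_critic_updates(cfg):
    # Total number of discriminator updates implied by the schedule.
    return cfg["total_iterations"] * cfg["critic_iters_per_gen"]

n_critic_updates = total_critic_updates(TRAIN_CONFIG)
```

Under this schedule the discriminator receives 1,500,000 updates over a 300,000-iteration run.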

E NEAREST NEIGHBOR VISUALIZATION

In this section, we visualize the nearest neighbors of samples generated by GAN models trained with different levels of overparameterization. More specifically, we trained DCGAN models with k = 8 and k = 128, synthesized random samples from the trained models, and queried their nearest neighbors in the training set. The obtained samples are shown in Figure 10. We observe that overparameterized models generate samples with high diversity.



In general, the number of observed and generated samples can be different. However, in practical GAN implementations, the batch sizes of observed and generated samples are usually the same; thus, for simplicity, we make this assumption in our setup.

The zero initialization of d is merely done for simplicity. A similar result can be derived for an arbitrary initialization of the discriminator's parameters with minor modifications. See Theorem 2.3 for such a result.

We note that, technically, the dynamical system in equation 16 is not linear. However, we still use "exponential stability," with some abuse of notation, to refer to the property that $\|z_t\|_2 \le \gamma\lambda^t\|z_0\|_2$ holds. As we will see in the forthcoming paragraphs, our formal analysis is via a novel perturbation analysis of a linear dynamical system, and therefore keeping this terminology is justified.

…and we conclude that it holds for any radius less than $R$ as well.

…means that the random variable takes values 0 and 1 each with probability 1/2.



Figure 1: Overparameterization in GANs. We train DCGAN models by varying the size of the hidden dimension k (the larger the k, the more overparameterized the models are; see Fig. 8 for details). Overparameterized GANs enjoy improved training and test FID scores (left panel), generate high-quality samples (middle panel), and have fast and stable convergence (right panel).

Figure 2: Convergence plot of a GAN model with a linear discriminator and a 1-hidden-layer generator as the hidden dimension (k) increases. "Final MSE" is the mean-squared error between the true data mean and the mean of the generated distribution. Overparameterized models show improved convergence.

Figure 4: Overparameterization results: We plot the FID scores (lower is better) of DCGAN and Resnet DCGAN as the hidden dimension k is varied. Results on CIFAR-10 and Celeb-A are shown in the left and right panels, respectively. Overparameterization gives better FID scores.

Figure 6: Generalization in GANs: We plot the NND scores as the hidden dimension k is varied for DCGAN (shown in (a)) and Resnet (shown in (b)) models.

Finally, this inequality is satisfied by assuming $k \ge C \cdot m d^4 \log^3(d)$ and setting $\sigma_v \sigma_w \sigma_z \ge$

Lemma (Soltanolkotabi, 2019). Let $d \in \mathbb{R}^k$ be a fixed vector with nonzero entries and let $D = \mathrm{diag}(d)$. Also, let $A \in \mathbb{R}^{k\times m}$ have i.i.d. $N(0,1)$ entries and let $T \subseteq \mathbb{R}^m$. Define $b_k(d) = E\|Dg\|_2$, where $g \sim N(0, I_k)$, and define $\sigma(T) := \max_{v\in T}\|v\|_2$. Then for all $u \in T$ we have
$$
\big|\, \|DAu\|_2 - b_k(d)\|u\|_2 \,\big| \;\le\; \|d\|_\infty\, \omega(T) + \eta
$$
with probability at least $1 - 6e^{-\frac{\eta^2}{8\|d\|_\infty^2\sigma^2(T)}}$.

Figure 10: Nearest neighbor visualization. We visualize the nearest neighbor samples in the training set for generations from a DCGAN model trained on the CIFAR-10 dataset. The left panel shows a DCGAN trained with k = 8, while the right panel shows one trained with k = 128. We observe that overparameterized models generate samples with high diversity.

Linear system theory. Prentice-Hall, Inc., 1996.

John Von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton University Press, 1953.

Kun Xu, Chongxuan Li, Huanshu Wei, Jun Zhu, and Bo Zhang. Understanding and stabilizing GANs' training dynamics with control theory. arXiv preprint arXiv:1909.13188, 2019.

Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural networks. In Advances in Neural Information Processing Systems 32, pp. 2055-2064, 2019.

Now this last inequality holds if we have x ≤ c log(d) for a sufficiently small constant c, which by rearranging terms amounts to showing

