ADVERSARIAL PROBLEMS FOR GENERATIVE NETWORKS

Abstract

We are interested in the design of generative networks. The training of these mathematical structures is mostly performed with the help of adversarial (min-max) optimization problems. We propose a simple methodology for constructing such problems that, at the same time, assures consistency of the corresponding solution. We give characteristic examples developed with our method, some of which can be recognized from other applications while others are introduced here for the first time. We compare the various possibilities by applying them to well-known datasets, using neural networks of different configurations and sizes.

1. INTRODUCTION

The problem we are interested in can be summarized as follows: we are given two collections of training data {z_j} and {x_i}. The samples in the first set follow the origin probability density h(z) and those in the second the target density f(x). The target density f(x) is considered unknown, while h(z) can either be known, with the possibility of producing samples z_j whenever necessary, or unknown, in which case we have a second fixed training set {z_j}. Our goal is to design a deterministic transformation G(z) so that the data {y_j} produced by applying the transformation y = G(z) to {z_j} follow the target density f(y). Of course, one may wonder whether the proposed problem enjoys any solution, namely, whether there indeed exists a transformation G(z) capable of transforming z into y, with the former following the origin density h(z) and the latter the target density f(y). The problem of transforming random vectors has been analyzed in Box & Cox (1964), where existence is shown under general conditions. Computing the actual transformation is, however, a completely different challenge, with one of the possible solutions relying on adversarial approaches applied to neural networks. The most well-known usage of this result is the possibility to generate synthetic data that follow the unknown target density f(x). In this case h(z) is selected to be simple (e.g. i.i.d. standard Gaussian or i.i.d. uniform) so that generating realizations from h(z) is straightforward. As mentioned, the adversarial approach can be applied even if the origin density h(z) is unknown, provided that we have a dataset {z_j} with data following the origin density. It was Goodfellow et al. (2014) that first introduced the idea of adversarial (min-max) optimization and demonstrated that it results in the determination of the desired transformation G(z) (consistency). Alternative adversarial approaches were subsequently suggested by Martin Arjovsky & Bottou (2017); Bińkowski et al. (2018) and shown to also deliver the correct transformation G(z).

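To make the setup concrete, the following sketch (ours, not from the paper) illustrates the transformation problem in a case where G(z) is known in closed form. Assuming a uniform origin density h(z) on (0, 1) and a unit-rate exponential target f(y) = e^{-y}, the inverse-CDF transform plays the role of G; in the adversarial setting G would instead be a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Origin density h(z): i.i.d. uniform on (0, 1).
z = rng.uniform(size=100_000)

# Inverse-CDF transform mapping uniform samples to the target density
# f(y) = exp(-y), y >= 0 (unit-rate exponential).  Here the target is
# simple enough that G is known in closed form.
def G(z):
    return -np.log(1.0 - z)

y = G(z)

# The transformed samples follow f: both the mean and the variance of
# a unit-rate exponential equal 1, so the sample statistics should be
# close to 1.
print(y.mean(), y.var())
```

The same recipe (compose the origin CDF with the target inverse CDF) proves existence in one dimension; the adversarial formulations discussed next are a way to compute G when f is only accessible through samples.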
We must mention the work of Nowozin et al. (2016), in which a class of min-max optimizations, the f-GANs, was defined to design generator/discriminator pairs. Then, Liu et al. (2017) defined the class of adversarial divergences as objective functions, which further combined f-GANs, MMD-GAN (Li et al., 2017), WGAN, WGAN-GP (Gulrajani et al., 2017), and entropic regularized optimal transport problems. They also investigated under what conditions the discriminator's class has the effect of matching generalized moments. Next, the work of Song & Ermon (2019) connected f-GANs and Wasserstein GANs (WGANs) (Martin Arjovsky & Bottou, 2017), and later Birrell et al. (2020) generalized these results by introducing the (f, Γ)-divergences, which made it possible to bridge f-divergences and integral probability metrics. Our class of generative adversarial problems establishes a one-to-one correspondence with f-GANs under the ideal (non data-driven) setup. However, we believe that our approach enjoys certain signif-
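As a small numerical illustration of the variational objectives underlying f-GANs (Nowozin et al., 2016), the sketch below evaluates the KL-divergence lower bound E_p[T] − E_q[e^{T−1}] ≤ KL(p‖q) by Monte Carlo for two Gaussians; the bound is tight at the optimal discriminator T*(x) = 1 + log(p(x)/q(x)). The variable names and the suboptimal discriminator are our illustrative choices, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 1.0                                 # mean shift; KL(p || q) = m**2 / 2 = 0.5
xp = rng.normal(m, 1.0, 200_000)        # samples from p = N(m, 1)
xq = rng.normal(0.0, 1.0, 200_000)      # samples from q = N(0, 1)

def objective(T):
    # f-GAN variational objective for the KL divergence:
    # E_p[T(x)] - E_q[exp(T(x) - 1)] <= KL(p || q).
    return T(xp).mean() - np.exp(T(xq) - 1.0).mean()

# Optimal discriminator: T*(x) = 1 + log(p(x)/q(x)) = 1 + m*x - m**2/2.
T_star = lambda x: 1.0 + m * x - m**2 / 2.0
# An arbitrary suboptimal discriminator for comparison.
T_bad = lambda x: 0.5 * x

print(objective(T_star))  # close to 0.5, the true KL(p || q)
print(objective(T_bad))   # strictly smaller: the bound is not tight here
```

Maximizing this objective over T while minimizing over the generator that produces the q-samples is precisely the min-max structure referred to throughout this section.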

