Euler-Lagrange Analysis of Generative Adversarial Networks
Siddarth Asokan, Chandra Sekhar Seelamantula; 24(126):1–100, 2023.
Abstract
This study considers Generative Adversarial Networks (GANs) and tackles the underlying functional optimization problem from a variational perspective. Specifically, we emphasize the importance of adhering to the Euler-Lagrange conditions when optimizing the generator and discriminator functions, especially when the regularizers involve derivatives of these functions. Considering Wasserstein GANs (WGANs) with a gradient-norm penalty, we show that the optimal discriminator is the solution to a Poisson differential equation. The optimal discriminator can therefore be determined analytically, obviating the need to train a discriminator network. To illustrate this, we solve the Poisson differential equation using a Fourier-series approximation. Experiments on synthetic Gaussian data show that the proposed approach converges better than baseline WGAN variants that enforce weight clipping, gradient penalties, or Lipschitz penalties on the discriminator, particularly in low dimensions. We also analyze the truncation error of the Fourier-series approximation and the estimation error of the Fourier coefficients in the high-dimensional setting. Finally, we apply the approach to real-world images in the context of latent-space prior matching in Wasserstein autoencoders, and report comparisons on the MNIST, SVHN, CelebA, CIFAR-10, and Ukiyo-E benchmark datasets. The proposed approach achieves comparable reconstruction error and Fréchet inception distance with faster convergence and up to a two-fold improvement in image sharpness.
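The abstract states the central result, that the optimal gradient-penalized WGAN discriminator solves a Poisson equation, without writing the equation out. As a hedged reconstruction (the penalty weight \(\lambda\) and the assumption that the penalty is enforced over the entire ambient domain are ours, not quoted from the paper), the first-order Euler-Lagrange condition of the penalized WGAN objective takes the form

$$\nabla^2 D^*(x) \;=\; \frac{1}{2\lambda}\bigl(p_g(x) - p_d(x)\bigr),$$

where \(p_d\) and \(p_g\) denote the data and generator densities. On a periodic domain, such an equation can be inverted mode-by-mode in a Fourier basis, which is what makes a closed-form discriminator possible. The following minimal Python sketch illustrates the idea in one dimension; the grid size, domain length, Gaussian parameters, and value of \(\lambda\) are illustrative assumptions, and the paper's actual Fourier-series estimator (and its high-dimensional analysis) may differ:

```python
# Minimal 1-D sketch: a closed-form "optimal discriminator" via a Fourier-series
# solution of the Poisson equation  D''(x) = (p_g(x) - p_d(x)) / (2 * lam).
# All numerical choices below are illustrative, not taken from the paper.
import numpy as np

def gaussian(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

L, n, lam = 20.0, 1024, 1.0                   # domain length, grid size, penalty weight
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
p_d = gaussian(x, -2.0, 1.0)                  # stand-in "data" density
p_g = gaussian(x, 2.0, 1.0)                   # stand-in "generator" density

# Fourier coefficients of the source term (p_g - p_d) / (2 * lam).
src_hat = np.fft.fft((p_g - p_d) / (2.0 * lam))
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers

# Invert the Laplacian spectrally: -k^2 * D_hat = src_hat for k != 0.
# The k = 0 mode is set to zero, which fixes the free additive constant in D
# (legitimate here because the source term integrates to zero).
D_hat = np.zeros_like(src_hat)
D_hat[1:] = -src_hat[1:] / k[1:] ** 2

D_star = np.fft.ifft(D_hat).real              # the discriminator, with no training loop
```

In this view, the discriminator update reduces to a spectral division rather than a gradient-descent loop, which is consistent with the faster convergence the abstract reports on low-dimensional Gaussian data.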