Generative Adversarial Networks (GANs) have rapidly gained popularity since their inception in 2014. They are powerful tools for generating realistic and diverse data in various domains, including computer vision. GANs consist of a discriminative network and a generative network that play a minimax game, and they have revolutionized the field of generative modeling. In 2018, GANs were recognized as one of the ten breakthrough technologies of the year by the MIT Technology Review.
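The minimax game referred to above is standardly written as follows (using the notation of the original 2014 formulation, where \(G\) is the generator, \(D\) the discriminator, \(p_{\text{data}}\) the data distribution, and \(p_z\) the prior over latent noise):

```latex
\min_G \max_D V(D, G)
  = \mathbb{E}_{x \sim p_{\text{data}}(x)}\bigl[\log D(x)\bigr]
  + \mathbb{E}_{z \sim p_z(z)}\bigl[\log\bigl(1 - D(G(z))\bigr)\bigr]
```

Intuitively, the discriminator \(D\) is trained to distinguish real samples \(x\) from generated samples \(G(z)\), while the generator \(G\) is trained to fool it.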
Since then, numerous advancements have been proposed, resulting in a wide range of GAN variants such as conditional GAN, Wasserstein GAN, CycleGAN, and StyleGAN. This survey provides a comprehensive overview of GANs, covering their underlying architecture, validation metrics, and application areas. It also explores recent theoretical developments, highlighting the connection between GANs and the Jensen-Shannon divergence, as well as the optimality characteristics of the GAN framework.
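The connection to the Jensen-Shannon divergence mentioned above follows from the optimal discriminator. For a fixed generator with model distribution \(p_g\), the discriminator maximizing the minimax objective is known to be

```latex
D^*_G(x) = \frac{p_{\text{data}}(x)}{p_{\text{data}}(x) + p_g(x)},
\qquad
C(G) = \max_D V(D, G) = -\log 4 + 2 \cdot \mathrm{JSD}\bigl(p_{\text{data}} \,\|\, p_g\bigr)
```

so minimizing \(C(G)\) over the generator minimizes the Jensen-Shannon divergence between the data and model distributions, with the global optimum \(C(G) = -\log 4\) attained exactly when \(p_g = p_{\text{data}}\). This is the optimality characterization established in the original GAN paper.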
The efficiency and model architectures of GAN variants are evaluated, along with the challenges encountered in training them and proposed solutions. The survey also discusses the integration of GANs with newly developed deep learning frameworks such as Transformers, Physics-Informed Neural Networks, Large Language Models, and Diffusion Models.
Finally, the survey addresses several open issues in the field and outlines future research directions.