LOGAN: Latent Optimisation for Generative Adversarial Networks

2 Dec 2019  ·  Yan Wu, Jeff Donahue, David Balduzzi, Karen Simonyan, Timothy Lillicrap

Training generative adversarial networks requires balancing delicate adversarial dynamics. Even with careful tuning, training may diverge or end up in a bad equilibrium with dropped modes. In this work, we improve CS-GAN with natural gradient-based latent optimisation and show that it improves adversarial dynamics by enhancing the interaction between the discriminator and the generator. Our experiments demonstrate that latent optimisation can significantly improve GAN training, obtaining state-of-the-art performance on the ImageNet (128×128) dataset. Our model achieves an Inception Score (IS) of 148 and a Fréchet Inception Distance (FID) of 3.4, improvements of 17% and 32% in IS and FID respectively over the baseline BigGAN-deep model with the same architecture and number of parameters.
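
The abstract describes optimising the latent vector z against the discriminator before the usual adversarial updates. Below is a minimal sketch of that idea, assuming a PyTorch setup; the toy networks, the hyperparameters alpha and beta, and the damped scaling used here in place of the paper's full natural-gradient computation are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch of latent optimisation for one GAN step: nudge z towards a
# higher discriminator score before computing the generator/discriminator
# losses. Network shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn

latent_dim = 128

# Toy stand-ins for the paper's BigGAN-deep generator and discriminator.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

def latent_optimise(z, alpha=0.9, beta=5.0):
    """One latent-optimisation step. The damped scaling g / (beta + ||g||^2)
    is a simple stand-in for the paper's natural-gradient update, used here
    only to keep the sketch self-contained (an assumption)."""
    z = z.clone().requires_grad_(True)
    score = D(G(z)).sum()               # discriminator's score on generated samples
    g, = torch.autograd.grad(score, z)  # gradient of the score w.r.t. z
    damping = beta + g.pow(2).sum(dim=1, keepdim=True)
    z_prime = z + alpha * g / damping   # damped step towards higher score
    return z_prime.detach()

z = torch.randn(16, latent_dim)
z_opt = latent_optimise(z)
# z_opt then feeds both the generator and discriminator losses in the
# usual adversarial training step.
```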
