Improved Training of Wasserstein GANs

Generative Adversarial Networks (GANs) are powerful generative models, but they suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but it can still generate only low-quality samples or fail to converge in some settings. We find that these problems are often due to the weight clipping used in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high-quality generations on CIFAR-10 and LSUN Bedrooms.
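
The gradient penalty described above is straightforward to implement. Below is a minimal PyTorch sketch of the penalty term, not the authors' reference code: `critic`, `real`, and `fake` are placeholder names, image-shaped (N, C, H, W) batches are assumed, and λ = 10 is the default coefficient reported in the paper.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """WGAN-GP term: lambda * (||grad_xhat D(xhat)||_2 - 1)^2 at random interpolates."""
    batch_size = real.size(0)
    # One uniform mixing coefficient per example, broadcast over image dims.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake.detach()).requires_grad_(True)
    scores = critic(x_hat)
    # Gradient of the critic's scores with respect to the interpolated inputs.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=x_hat,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is differentiable
    )[0].view(batch_size, -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```

The critic loss adds this term to the Wasserstein estimate (mean critic score on fakes minus mean score on reals); sampling x̂ along straight lines between real and generated points is the paper's choice of where to enforce the Lipschitz constraint.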

Published at NeurIPS 2017.

Datasets

CIFAR-10, LSUN Bedrooms, CAT 256x256

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Image Generation | CAT 256x256 | WGAN-GP | FID | 155.46 | #3 |
| Image Generation | CIFAR-10 | WGAN-GP | Inception score | 7.86 | #58 |
| Image Generation | CIFAR-10 | WGAN-GP | FID | 29.3 | #134 |
| Image Generation | CIFAR-10 | WGAN-GP (DINOv2) | FD | 1088.56 | #12 |
| Conditional Image Generation | CIFAR-10 | WGAN-GP | Inception score | 8.67 | #9 |

Methods

WGAN-GP, Gradient Penalty