Training Generative Adversarial Networks with Limited Data

Training generative adversarial networks (GANs) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes...

NeurIPS 2020
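The adaptive discriminator augmentation (ADA) mechanism from the abstract can be illustrated with a minimal PyTorch sketch. The idea is to apply differentiable augmentations, with probability p, to every image the discriminator sees, and to adjust p on the fly based on how strongly the discriminator is overfitting. The names `augment` and `AdaptiveAugment`, the single flip transform, the target value 0.6, and the step size below are illustrative assumptions, not the authors' released implementation; the paper uses a much larger pipeline of geometric and color transforms.

```python
import torch

def augment(images: torch.Tensor, p: float) -> torch.Tensor:
    # Differentiable stand-in augmentation: horizontal flip applied to
    # each image independently with probability p (NCHW layout assumed).
    mask = (torch.rand(images.size(0), 1, 1, 1, device=images.device) < p).float()
    return mask * torch.flip(images, dims=[3]) + (1.0 - mask) * images

class AdaptiveAugment:
    # Keeps the augmentation probability p near the point where the
    # overfitting heuristic r_t = E[sign(D(real))] hits a fixed target.
    def __init__(self, target: float = 0.6, step: float = 0.01):
        self.p = 0.0          # start with no augmentation
        self.target = target  # r_t above this => discriminator is overfitting
        self.step = step      # adjustment per update (illustrative value)

    def update(self, d_real_logits: torch.Tensor) -> float:
        r_t = torch.sign(d_real_logits).mean().item()
        # Raise p when D is too confident on real images, lower it otherwise.
        delta = self.step if r_t > self.target else -self.step
        self.p = min(max(self.p + delta, 0.0), 1.0)
        return self.p
```

In a training loop sketched this way, both the real batch and the generated batch would be passed through `augment` with the same current p before reaching the discriminator, and `update` would be called periodically on the discriminator's real-image logits; because the augmentation is applied only at the discriminator's input, it does not leak into the generated images themselves.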

Results from the Paper


Ranked #1 on Image Generation on CIFAR-10 (Inception score metric)
TASK                           DATASET            MODEL                   METRIC NAME       METRIC VALUE   GLOBAL RANK
Image Generation               CIFAR-10           StyleGAN2-ADA           Inception score   10.02          #1
Image Generation               CIFAR-10           StyleGAN2-ADA + Tuning  FID               2.92           #5
Conditional Image Generation   CIFAR-10           StyleGAN2-ADA           Inception score   10.14          #2
Conditional Image Generation   CIFAR-10           StyleGAN2-ADA           FID               14.73          #8
Image Generation               FFHQ 1024 x 1024   StyleGAN2-ADA + bCR     FID               3.55           #1
Image Generation               FFHQ 256 x 256     StyleGAN2-ADA + bCR     FID               3.62           #2

Methods used in the Paper


METHOD                                   TYPE
Softmax                                  Output Functions
Dot-Product Attention                    Attention Mechanisms
Dense Connections                        Feedforward Networks
ReLU                                     Activation Functions
Adam                                     Stochastic Optimization
SAGAN Self-Attention Module              Attention Modules
Batch Normalization                      Normalization
Feedforward Network                      Feedforward Networks
Non-Local Operation                      Image Feature Extractors
1x1 Convolution                          Convolutions
Residual Connection                      Skip Connections
Linear Layer                             Feedforward Networks
Non-Local Block                          Image Model Blocks
Residual Block                           Skip Connection Blocks
Truncation Trick                         Latent Variable Sampling
Conditional Batch Normalization          Normalization
GAN Hinge Loss                           Loss Functions
Early Stopping                           Regularization
Spectral Normalization                   Normalization
SAGAN                                    Generative Adversarial Networks
Projection Discriminator                 Discriminators
Off-Diagonal Orthogonal Regularization   Regularization
BigGAN                                   Generative Models
RandAugment                              Image Data Augmentation
Path Length Regularization               Regularization
Weight Demodulation                      Normalization
Convolution                              Convolutions
Leaky ReLU                               Activation Functions
R1 Regularization                        Regularization
StyleGAN2                                Generative Models