Generative Models

BigGAN-deep is a 4x deeper version of BigGAN. The main difference is a slightly redesigned residual block: the $z$ vector is concatenated with the conditional vector rather than being split into chunks, and the residual blocks use bottlenecks. BigGAN-deep also uses a different strategy than BigGAN for preserving identity throughout the skip connections. In G, where the number of channels needs to be reduced, BigGAN-deep simply retains the first group of channels and drops the rest to produce the required number of channels. In D, where the number of channels should be increased, BigGAN-deep passes the input channels through unperturbed and concatenates them with the remaining channels produced by a 1 × 1 convolution. As far as the network configuration is concerned, the discriminator is an exact reflection of the generator.
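
The skip-connection strategy can be illustrated with a minimal PyTorch sketch. The helper names (`g_skip`, `DSkip`) are illustrative assumptions, not taken from the reference implementation; the sketch only shows the channel-drop (generator) and 1 × 1-concat (discriminator) idea described above.

```python
import torch
import torch.nn as nn

def g_skip(x: torch.Tensor, out_channels: int) -> torch.Tensor:
    # Generator skip: keep the first `out_channels` channels, drop the rest.
    return x[:, :out_channels]

class DSkip(nn.Module):
    # Discriminator skip: pass the input channels through unperturbed and
    # concatenate extra channels produced by a 1x1 convolution.
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.extra = nn.Conv2d(in_channels, out_channels - in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cat([x, self.extra(x)], dim=1)

# Example shapes:
# g_skip(torch.randn(2, 256, 8, 8), 128).shape      -> (2, 128, 8, 8)
# DSkip(128, 256)(torch.randn(2, 128, 8, 8)).shape  -> (2, 256, 8, 8)
```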

There are two blocks at each resolution (BigGAN uses one), and as a result BigGAN-deep is four times deeper than BigGAN. Despite their increased depth, the BigGAN-deep models have significantly fewer parameters, mainly due to the bottleneck structure of their residual blocks.
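
To see why the bottleneck keeps the parameter count down, here is a simplified sketch of a bottleneck residual block, assuming a width reduction of 4; the actual BigGAN-deep blocks additionally use (conditional) BatchNorm, spectral normalization, and up/down-sampling, which are omitted here.

```python
import torch
import torch.nn as nn

class BottleneckResBlock(nn.Module):
    """Simplified bottleneck residual block: the 3x3 convolutions run at a
    reduced width (channels // reduction), which is what keeps the parameter
    count down despite the extra depth."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.body = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(channels, hidden, kernel_size=1),           # reduce width
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=1),           # restore width
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

# Rough count at width 256: two full-width 3x3 convs cost ~2 * 256*256*9 ≈ 1.2M
# weights, while the bottleneck's 3x3 convs at width 64 cost ~2 * 64*64*9 ≈ 74k.
```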

Source: Large Scale GAN Training for High Fidelity Natural Image Synthesis

Tasks


Task                          Papers   Share
Image Generation                   4   19.05%
Conditional Image Generation       3   14.29%
Bias Detection                     2    9.52%
Clustering                         2    9.52%
Density Estimation                 1    4.76%
Metric Learning                    1    4.76%
Relational Reasoning               1    4.76%
Self-Supervised Learning           1    4.76%
Fairness                           1    4.76%
