Learning Efficient GANs using Differentiable Masks and co-Attention Distillation

Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computational and storage costs impede their deployment on mobile devices. Prevalent methods for CNN compression cannot be directly applied to GANs due to the complicated generator architecture and the unstable adversarial training...
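The title's "differentiable masks" refer to soft, learnable channel gates that let pruning decisions be trained by gradient descent rather than made by hard thresholding. A minimal NumPy sketch of the general idea (the function names, shapes, and sigmoid gating here are illustrative assumptions, not the paper's exact scheme):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_channel_mask(features, mask_logits):
    """Scale each channel of a (C, H, W) feature map by a soft gate in (0, 1).

    Because sigmoid is smooth, the gates receive gradients during training;
    channels whose gate is driven toward 0 can be removed afterward to obtain
    a smaller generator. Illustrative sketch only.
    """
    mask = sigmoid(mask_logits)            # (C,) soft gates in (0, 1)
    return features * mask[:, None, None]  # broadcast the gate over H and W

# Toy usage: three channels, with the second gated nearly off.
feats = np.ones((3, 2, 2))
logits = np.array([4.0, -4.0, 4.0])
out = soft_channel_mask(feats, logits)
```

After training, channels with near-zero gates would be pruned outright, shrinking the network that is actually deployed.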


Methods used in the Paper


METHOD                        TYPE
Dropout                       Regularization
Concatenated Skip Connection  Skip Connections
Residual Connection           Skip Connections
Instance Normalization        Normalization
Tanh Activation               Activation Functions
Convolution                   Convolutions
Sigmoid Activation            Activation Functions
ReLU                          Activation Functions
PatchGAN                      Discriminators
Batch Normalization           Normalization
Residual Block                Skip Connection Blocks
Pix2Pix                       Generative Models
Cycle Consistency Loss        Loss Functions
GAN Least Squares Loss        Loss Functions
Leaky ReLU                    Activation Functions
CycleGAN                      Generative Models
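Two of the listed loss functions have standard closed forms and are simple to state. A minimal NumPy sketch of both, in their commonly used forms (illustrative, not taken from the paper's code):

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    """CycleGAN-style cycle loss: mean L1 distance between an input x and its
    reconstruction after a round trip through both generators, F(G(x))."""
    return np.mean(np.abs(x_reconstructed - x))

def lsgan_loss(d_out, target):
    """GAN least squares loss: mean squared distance between discriminator
    outputs and the target label (1.0 for real, 0.0 for fake)."""
    return np.mean((d_out - target) ** 2)

# Toy usage: a perfect reconstruction gives zero cycle loss, and a
# discriminator that outputs exactly the target gives zero LSGAN loss.
x = np.random.rand(3, 8, 8)
cycle_zero = cycle_consistency_loss(x, x.copy())
lsgan_zero = lsgan_loss(np.ones(5), 1.0)
```

The least-squares form replaces the usual cross-entropy GAN objective, which tends to stabilize training by penalizing samples proportionally to how far they sit from the decision boundary.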