Unsupervised Domain Adaptation using Generative Models and Self-ensembling

2 Dec 2018 · Eman T. Hassan, Xin Chen, David Crandall

Transferring knowledge across different datasets is an important approach to successfully training deep models when the target dataset is small or only a few labeled instances are available. In this paper, we aim to develop a model that generalizes across multiple domain shifts, so that it can adapt from a single source to multiple targets...
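
The title pairs generative models with self-ensembling for the adaptation step. As a point of reference, below is a minimal sketch of the standard self-ensembling (mean teacher) idea: a teacher network tracks an exponential moving average of the student's weights, and a consistency loss aligns student and teacher predictions on unlabeled target-domain samples. The function names, hyperparameters, and PyTorch setup are illustrative assumptions, not the paper's exact formulation.

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student: torch.nn.Module) -> torch.nn.Module:
    """Clone the student; the teacher is never updated by gradients."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99) -> None:
    """Per parameter: teacher <- alpha * teacher + (1 - alpha) * student."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def consistency_loss(student: torch.nn.Module, teacher: torch.nn.Module, target_x: torch.Tensor) -> torch.Tensor:
    """Mean squared error between student and teacher class probabilities
    on unlabeled target-domain inputs."""
    p_student = F.softmax(student(target_x), dim=1)
    p_teacher = F.softmax(teacher(target_x), dim=1)
    return F.mse_loss(p_student, p_teacher)
```

In a typical training loop, the consistency loss on target data is added to the supervised loss on source data, and `ema_update` is called after each optimizer step.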


Methods used in the Paper


METHOD | TYPE
Batch Normalization | Normalization
Residual Connection | Skip Connections
PatchGAN | Discriminators
ReLU | Activation Functions
Tanh Activation | Activation Functions
Residual Block | Skip Connection Blocks
Instance Normalization | Normalization
Convolution | Convolutions
Leaky ReLU | Activation Functions
Sigmoid Activation | Activation Functions
GAN Least Squares Loss | Loss Functions
Cycle Consistency Loss | Loss Functions
CycleGAN | Generative Models
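
The table lists the GAN least squares loss and the cycle consistency loss used by CycleGAN. Below is a minimal sketch of these two standard losses, assuming generators G: X→Y and F: Y→X and a PatchGAN discriminator that outputs a map of real/fake scores. It is an illustrative reimplementation of the common formulation, not code from the paper, and the weighting `lambda_cyc` is an assumed default.

```python
import torch
import torch.nn.functional as F

def lsgan_generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    """Least-squares GAN loss for the generator: push D(G(x)) toward 1."""
    return F.mse_loss(d_fake, torch.ones_like(d_fake))

def lsgan_discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    """Least-squares GAN loss for the discriminator: real outputs -> 1, fake outputs -> 0."""
    return 0.5 * (F.mse_loss(d_real, torch.ones_like(d_real))
                  + F.mse_loss(d_fake, torch.zeros_like(d_fake)))

def cycle_consistency_loss(x: torch.Tensor, x_reconstructed: torch.Tensor, lambda_cyc: float = 10.0) -> torch.Tensor:
    """L1 penalty ||F(G(x)) - x||_1 weighted by lambda_cyc (assumed weight)."""
    return lambda_cyc * F.l1_loss(x_reconstructed, x)
```

The least-squares objective replaces the usual cross-entropy GAN loss to stabilize training, while the cycle term constrains each generator so that translating to the other domain and back reconstructs the input.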