Multi-Source Domain Adaptation and Semi-Supervised Domain Adaptation with Focus on Visual Domain Adaptation Challenge 2019

8 Oct 2019 · Yingwei Pan, Yehao Li, Qi Cai, Yang Chen, Ting Yao

This notebook paper presents an overview and comparative analysis of our systems designed for the following two tasks in the Visual Domain Adaptation Challenge (VisDA-2019): multi-source domain adaptation and semi-supervised domain adaptation. Multi-Source Domain Adaptation: We investigate both pixel-level and feature-level adaptation for the multi-source domain adaptation task, i.e., directly hallucinating labeled target samples via CycleGAN and learning domain-invariant feature representations through self-learning...
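The abstract pairs pixel-level adaptation (CycleGAN-style hallucination of labeled target samples) with feature-level adaptation via self-learning, i.e., pseudo-labeling confident target predictions and feeding them back into training. Below is a minimal sketch of that pseudo-labeling step, assuming a standard PyTorch classifier and an unlabeled target data loader; the model, loader, and confidence threshold are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the self-learning (pseudo-labeling) step: a model trained on
# labeled source data assigns pseudo-labels to confident unlabeled target samples,
# which can then be mixed into the next training round.
import torch
import torch.nn.functional as F

def pseudo_label_target(model, target_loader, confidence_threshold=0.9, device="cpu"):
    """Collect (image, pseudo-label) pairs for target samples the model is confident on."""
    model.eval()
    pseudo_images, pseudo_labels = [], []
    with torch.no_grad():
        for images in target_loader:                # unlabeled target batches
            images = images.to(device)
            probs = F.softmax(model(images), dim=1)
            conf, preds = probs.max(dim=1)
            keep = conf >= confidence_threshold     # keep only confident predictions
            pseudo_images.append(images[keep].cpu())
            pseudo_labels.append(preds[keep].cpu())
    return torch.cat(pseudo_images), torch.cat(pseudo_labels)
```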




Methods used in the Paper


METHOD                    TYPE
Batch Normalization       Normalization
Residual Connection       Skip Connections
PatchGAN                  Discriminators
ReLU                      Activation Functions
Tanh Activation           Activation Functions
Residual Block            Skip Connection Blocks
Instance Normalization    Normalization
Convolution               Convolutions
Leaky ReLU                Activation Functions
Sigmoid Activation        Activation Functions
GAN Least Squares Loss    Loss Functions
Cycle Consistency Loss    Loss Functions
CycleGAN                  Generative Models
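The methods listed above are the standard CycleGAN building blocks. As a reference for how the GAN Least Squares Loss and Cycle Consistency Loss entries fit together, here is a minimal sketch of one direction of the generator objective; the generators G_st / G_ts, the target-domain discriminator D_t, and the weight lambda_cyc are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the source->target half of a CycleGAN-style generator objective:
# least-squares adversarial loss against the target discriminator plus an L1
# cycle-consistency loss between the reconstructed and original source image.
import torch
import torch.nn as nn

mse = nn.MSELoss()    # GAN least-squares loss
l1 = nn.L1Loss()      # cycle-consistency loss
lambda_cyc = 10.0     # cycle-consistency weight commonly used with CycleGAN

def generator_loss(G_st, G_ts, D_t, real_s):
    fake_t = G_st(real_s)           # hallucinated "target-style" source image
    rec_s = G_ts(fake_t)            # cycle back to the source domain
    pred = D_t(fake_t)
    adv = mse(pred, torch.ones_like(pred))   # try to fool the target discriminator
    cyc = l1(rec_s, real_s)                  # s -> t -> s reconstruction error
    return adv + lambda_cyc * cyc
```

The symmetric target->source direction adds the analogous adversarial and cycle terms with the roles of the two generators and discriminators swapped.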