Synthetic-to-Real Translation

55 papers with code • 4 benchmarks • 5 datasets

Synthetic-to-real translation is the task of domain adaptation from synthetic (or virtual) data to real data.

(Image credit: CyCADA)


Most implemented papers

Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach

lianqing11/pycda ICCV 2019

We propose a new approach, called self-motivated pyramid curriculum domain adaptation (PyCDA), to facilitate the adaptation of semantic segmentation neural networks from synthetic source domains to real target domains.

MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling

engrjavediqbal/MLSL 30 Sep 2019

This helps the latent space learn representations even when very few pixels belong to a given category (small objects, for example) compared to the rest of the image.

Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation

RogerZhangzz/CAG_UDA NeurIPS 2019

Although there has been progress in matching the marginal distributions between two domains, the classifier favors source-domain features and makes incorrect predictions on the target domain due to category-agnostic feature alignment.

Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision

feipan664/IntraDA CVPR 2020

Finally, to decrease the intra-domain gap, we propose to employ a self-supervised adaptation technique from the easy to the hard split.
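The easy/hard split described above can be sketched as ranking target images by the mean entropy of their softmax predictions and taking the lowest-entropy fraction as the "easy" subset. This is an illustrative numpy sketch (function names are hypothetical, not from the IntraDA code):

```python
import numpy as np

def mean_entropy(probs, eps=1e-8):
    """Mean per-pixel entropy of a softmax map of shape (C, H, W)."""
    return float(-(probs * np.log(probs + eps)).sum(axis=0).mean())

def split_easy_hard(prob_maps, ratio=0.5):
    """Rank target images by mean prediction entropy; the lowest-entropy
    fraction `ratio` forms the easy split, the remainder the hard split."""
    scores = [mean_entropy(p) for p in prob_maps]
    order = np.argsort(scores)          # ascending entropy (most confident first)
    cut = int(len(order) * ratio)
    return order[:cut].tolist(), order[cut:].tolist()
```

The easy split then serves as a pseudo-labelled "source" for a second, intra-domain adaptation stage toward the hard split.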

DACS: Domain Adaptation via Cross-domain Mixed Sampling

vikolss/DACS 17 Jul 2020

In this paper we address the problem of unsupervised domain adaptation (UDA), which trains on labelled data from one domain (the source domain) while simultaneously learning from unlabelled data in the domain of interest (the target domain).
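DACS's cross-domain mixed sampling can be sketched ClassMix-style: the pixels of half the classes in a source label map are pasted onto a target image, and the mixed label combines source ground truth inside the mask with target pseudo-labels outside it. A minimal numpy sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def dacs_mix(src_img, src_lbl, tgt_img, tgt_pseudo, rng=None):
    """Paste the pixels of half the source classes onto the target image.

    Labels come from source ground truth inside the mask and from the
    target's pseudo-labels outside it. Shapes: images (H, W, 3),
    labels (H, W)."""
    rng = rng or np.random.default_rng()
    classes = np.unique(src_lbl)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(src_lbl, chosen)                       # (H, W) boolean
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl, mask
```

Training on such mixed samples gives the network supervision that straddles both domains in a single image.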

Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation

JDAI-CV/FADA ECCV 2020

To fully exploit the supervision in the source domain, we propose a fine-grained adversarial learning strategy for class-level feature alignment while preserving the internal structure of semantics across domains.

Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation

MNaseerSubhani/LSE ECCV 2020

Specifically, we show that a semantic segmentation model produces higher-entropy output when presented with scaled-up patches from the target domain than when presented with original-size images.

Permuted AdaIN: Reducing the Bias Towards Global Statistics in Image Classification

onuriel/PermutedAdaIN CVPR 2021

In the setting of robustness, our method improves on both ImageNet-C and CIFAR-100-C for multiple architectures.
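The core operation of Permuted AdaIN can be sketched as re-normalizing each sample's feature maps with the channel-wise statistics of another sample in the batch, which suppresses reliance on global image statistics. A minimal numpy sketch (in the paper this is applied inside the network, with some probability, during training):

```python
import numpy as np

def permuted_adain(feats, perm, eps=1e-5):
    """Swap channel statistics between batch samples.

    feats: activations of shape (N, C, H, W); perm: index array of
    length N giving, for each sample, whose statistics to adopt."""
    mean = feats.mean(axis=(2, 3), keepdims=True)
    std = feats.std(axis=(2, 3), keepdims=True) + eps
    normed = (feats - mean) / std          # instance-normalize each sample
    return normed * std[perm] + mean[perm] # re-style with the permuted stats
```

With `perm` the identity, this reduces to (approximately) the input; with a random permutation, each sample keeps its spatial content but carries another sample's style statistics.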

Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation

kgl-prml/Pixel-Level-Cycle-Association NeurIPS 2020

The conventional solution to this task is to minimize the discrepancy between source and target to enable effective knowledge transfer.

MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised Domain Adaptation in Semantic Segmentation

cyang-cityu/MetaCorrection CVPR 2021

Existing self-training based UDA approaches assign pseudo labels for target data and treat them as ground truth labels to fully leverage unlabeled target data for model adaptation.
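The pseudo-labeling step common to these self-training approaches can be sketched as thresholding softmax confidence and masking uncertain pixels with an ignore index (an illustrative sketch of plain self-training; MetaCorrection's contribution is the meta-learned loss correction applied on top of such noisy labels):

```python
import numpy as np

IGNORE = 255  # pixels below the confidence threshold are excluded from the loss

def pseudo_labels(probs, threshold=0.9):
    """Turn a softmax map (C, H, W) into hard pseudo-labels,
    assigning IGNORE to pixels whose top confidence is below threshold."""
    conf = probs.max(axis=0)
    labels = probs.argmax(axis=0)
    labels[conf < threshold] = IGNORE
    return labels
```

Treating these labels as ground truth is exactly the source of the label noise that loss-correction methods try to model.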