Synthetic-to-Real Translation
55 papers with code • 4 benchmarks • 5 datasets
Synthetic-to-real translation is the task of domain adaptation from synthetic (or virtual) data to real data.
(Image credit: CyCADA)
Most implemented papers
Constructing Self-motivated Pyramid Curriculums for Cross-Domain Semantic Segmentation: A Non-Adversarial Approach
We propose a new approach, called self-motivated pyramid curriculum domain adaptation (PyCDA), to facilitate the adaptation of semantic segmentation neural networks from synthetic source domains to real target domains.
MLSL: Multi-Level Self-Supervised Learning for Domain Adaptation with Spatially Independent and Semantically Consistent Labeling
This helps the latent space learn a representation even when very few pixels belong to a given category (small objects, for example) compared to the rest of the image.
Category Anchor-Guided Unsupervised Domain Adaptation for Semantic Segmentation
Although there has been progress in matching the marginal distributions between the two domains, the classifier favors source-domain features and makes incorrect predictions on the target domain due to category-agnostic feature alignment.
Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision
Finally, to decrease the intra-domain gap, we propose to employ a self-supervised adaptation technique from the easy to the hard split.
DACS: Domain Adaptation via Cross-domain Mixed Sampling
In this paper we address the problem of unsupervised domain adaptation (UDA), which trains on labelled data from one domain (the source domain) while simultaneously learning from unlabelled data in the domain of interest (the target domain).
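DACS's cross-domain mixing follows a ClassMix-style recipe: pixels belonging to roughly half of the source classes are pasted, together with their labels, onto a target image whose remaining pixels keep their pseudo labels. A minimal NumPy sketch of that mixing step (the function name and array layouts are illustrative assumptions, not the authors' code):

```python
import numpy as np

def classmix(src_img, src_lbl, tgt_img, tgt_pseudo, rng):
    """Cross-domain mixed sampling (sketch): paste the pixels of half
    of the source classes, and their labels, onto a target image.

    src_img/tgt_img: (H, W, 3) images; src_lbl/tgt_pseudo: (H, W) label maps.
    """
    classes = np.unique(src_lbl)
    chosen = rng.choice(classes, size=max(1, len(classes) // 2), replace=False)
    mask = np.isin(src_lbl, chosen)                    # (H, W) boolean
    mixed_img = np.where(mask[..., None], src_img, tgt_img)
    mixed_lbl = np.where(mask, src_lbl, tgt_pseudo)
    return mixed_img, mixed_lbl
```

The mixed image/label pair is then used as an additional training sample, so the network sees target-domain context with trustworthy (source) labels on part of the image.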
Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation
To fully exploit the supervision in the source domain, we propose a fine-grained adversarial learning strategy for class-level feature alignment while preserving the internal structure of semantics across domains.
Learning from Scale-Invariant Examples for Domain Adaptation in Semantic Segmentation
Specifically, we show that a semantic segmentation model produces higher-entropy output when presented with scaled-up patches from the target domain than when presented with original-size images.
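The entropy measurement behind this observation is the standard per-pixel predictive entropy of the softmax output, averaged over the image. A short sketch (the function name and the `(C, H, W)` logit layout are assumptions for illustration):

```python
import numpy as np

def mean_pixel_entropy(logits):
    """Mean per-pixel predictive entropy of a segmentation output.

    logits: (C, H, W) raw class scores; returns a scalar in nats.
    """
    z = logits - logits.max(axis=0, keepdims=True)     # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=0, keepdims=True)
    ent = -(p * np.log(p + 1e-12)).sum(axis=0)         # (H, W) entropy map
    return ent.mean()
```

Comparing this value on original-size target images versus scaled-up target patches would reproduce the paper's high-entropy observation.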
Permuted AdaIN: Reducing the Bias Towards Global Statistics in Image Classification
In the robustness setting, our method improves on both ImageNet-C and CIFAR-100-C for multiple architectures.
Pixel-Level Cycle Association: A New Perspective for Domain Adaptive Semantic Segmentation
The conventional solution to this task is to minimize the discrepancy between source and target to enable effective knowledge transfer.
MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised Domain Adaptation in Semantic Segmentation
Existing self-training-based UDA approaches assign pseudo labels to target data and treat them as ground-truth labels to fully leverage the unlabeled target data for model adaptation.
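Pseudo-label assignment in such self-training pipelines is typically a confidence-thresholded arg-max over the model's softmax output, with low-confidence pixels excluded from the loss. A hedged sketch (the `0.9` threshold and the ignore index `255` are common semantic-segmentation conventions, not values taken from the paper):

```python
import numpy as np

IGNORE = 255  # conventional ignore index in semantic segmentation losses

def pseudo_labels(probs, threshold=0.9):
    """Self-training sketch: keep the arg-max class where the model is
    confident, mark everything else as ignore.

    probs: (C, H, W) softmax probabilities; returns an (H, W) label map.
    """
    conf = probs.max(axis=0)            # per-pixel confidence
    labels = probs.argmax(axis=0)       # per-pixel predicted class
    labels[conf < threshold] = IGNORE   # drop uncertain pixels from training
    return labels
```

MetaCorrection's contribution is precisely about the noise in these labels: instead of trusting them as ground truth, it learns a meta loss correction for them.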