49 papers with code • 2 benchmarks • 2 datasets
Unsupervised image-to-image translation is the task of performing image-to-image translation without ground-truth image pairs.
Image-to-image translation itself is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image; in the supervised setting this mapping is learned from a training set of aligned image pairs, whereas the unsupervised setting must do without such pairs.
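Because no aligned pairs are available, most approaches constrain the mapping indirectly. Below is a minimal sketch of one common constraint, a CycleGAN-style cycle-consistency loss; the tiny generator and discriminator modules, the module names (G_ab, G_ba, D_b), and the weight lambda_cyc are placeholders for illustration, not the networks of any specific paper listed here.

```python
import torch
import torch.nn as nn

# Placeholder generators and discriminator; real models are much deeper CNNs.
G_ab = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # maps domain A -> B
G_ba = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))   # maps domain B -> A
D_b  = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))   # scores realism in domain B

adv_loss = nn.MSELoss()   # least-squares GAN objective
cyc_loss = nn.L1Loss()    # cycle-consistency term
lambda_cyc = 10.0         # assumed weight of the cycle term

def generator_loss(real_a):
    """Adversarial + cycle loss for an unpaired image batch from domain A.
    The symmetric B -> A -> B terms are omitted for brevity."""
    fake_b = G_ab(real_a)          # translate A -> B
    rec_a = G_ba(fake_b)           # translate back B -> A
    score = D_b(fake_b)
    adv = adv_loss(score, torch.ones_like(score))   # fake_b should look real in B
    cyc = cyc_loss(rec_a, real_a)                   # round trip should recover the input
    return adv + lambda_cyc * cyc

# Usage with random tensors standing in for an unpaired image batch.
loss = generator_loss(torch.randn(2, 3, 64, 64))
loss.backward()
```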
Ranked #1 on Image-to-Image Translation on photo2vangogh (Fréchet Inception Distance metric)
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
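A minimal sketch of that recombination step is given below; the encoder, style MLP, and decoder modules are hypothetical stand-ins, and in the actual models the style code typically modulates normalization layers throughout a much larger decoder.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the content encoder, style MLP, and decoder.
content_enc = nn.Conv2d(3, 8, 3, padding=1)   # image -> spatial content code
style_to_affine = nn.Linear(8, 2 * 8)         # style code -> per-channel scale and shift
decoder = nn.Conv2d(8, 3, 3, padding=1)       # modulated content code -> image
style_dim = 8

def translate_to_b(image_a):
    """Recombine the image's content code with a random style code of domain B."""
    content = content_enc(image_a)                    # domain-shared content code
    style = torch.randn(image_a.size(0), style_dim)   # sampled from domain B's style space
    scale, shift = style_to_affine(style).chunk(2, dim=1)
    # AdaIN-style modulation: the style code rescales and shifts content channels.
    modulated = content * scale[:, :, None, None] + shift[:, :, None, None]
    return decoder(modulated)

fake_b = translate_to_b(torch.randn(1, 3, 64, 64))   # a translated image in domain B
```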
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner.
Ranked #1 on Image-to-Image Translation on photo2portrait
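The learnable-normalization idea can be illustrated with a small module that blends instance normalization and layer normalization through a learned per-channel ratio; the module below is a sketch in that spirit, not the paper's exact layer (there, the scale and shift are also predicted from attention features).

```python
import torch
import torch.nn as nn

class LearnableNorm(nn.Module):
    """Blend instance norm and layer norm with a learned per-channel ratio rho."""

    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, channels, 1, 1), 0.5))  # mixing ratio
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))       # learned scale
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))       # learned shift

    def forward(self, x):
        # Instance-norm statistics: per sample, per channel, over spatial dims.
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer-norm statistics: per sample, over channel and spatial dims.
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        # rho decides, per channel, how much each normalization contributes.
        rho = self.rho.clamp(0.0, 1.0)
        return self.gamma * (rho * x_in + (1 - rho) * x_ln) + self.beta

out = LearnableNorm(8)(torch.randn(2, 8, 32, 32))
```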
Unsupervised image-to-image translation intends to learn a mapping of an image in a given domain to an analogous image in a different domain, without explicit supervision of the mapping.
Domain adaptation is critical for success in new, unseen environments.
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on a large amount of labeled data from the source domain and a large amount of unlabeled data from the target domain (no labeled target-domain data is necessary), as sketched below the ranking line.
Ranked #1 on Domain Adaptation on UCF-to-Olympic
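A widely used realization of this labeled-source / unlabeled-target setup is domain-adversarial training with a gradient reversal layer: a domain classifier learns to tell source features from target features while the shared feature extractor learns to fool it. The sketch below uses toy linear modules and assumed dimensions rather than the architecture of any particular paper.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Linear(32, 16)    # shared feature extractor (toy stand-in)
label_clf = nn.Linear(16, 10)   # label predictor, trained on source labels only
domain_clf = nn.Linear(16, 2)   # domain classifier: source vs. target
ce = nn.CrossEntropyLoss()

def step(src_x, src_y, tgt_x, lam=0.1):
    """One training step: supervised loss on labeled source data plus an
    adversarial domain loss on both source and unlabeled target data."""
    f_src, f_tgt = features(src_x), features(tgt_x)
    cls_loss = ce(label_clf(f_src), src_y)          # uses source labels only
    f_all = torch.cat([f_src, f_tgt])
    d_labels = torch.cat([torch.zeros(len(src_x), dtype=torch.long),
                          torch.ones(len(tgt_x), dtype=torch.long)])
    # Gradient reversal makes the feature extractor maximize the domain loss
    # (domain-confusing features) while the domain classifier minimizes it.
    dom_loss = ce(domain_clf(GradReverse.apply(f_all, lam)), d_labels)
    return cls_loss + dom_loss

loss = step(torch.randn(4, 32), torch.randint(0, 10, (4,)), torch.randn(4, 32))
loss.backward()
```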