Unsupervised image-to-image translation is the task of translating images between two domains without ground-truth image pairings.
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
#2 best model for Multimodal Unsupervised Image-To-Image Translation on EPFL NIR-VIS
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
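The content/style recombination described above can be sketched in a few lines. This is a toy illustration, not the actual model: `encode_content`, `sample_style`, and `decode` are hypothetical stand-ins for trained networks, and the "image" is a small array.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for trained networks: the content encoder
# keeps spatial structure, while style codes are low-dimensional
# vectors drawn from the target domain's style space.
def encode_content(image):
    # Placeholder: identity features serve as the "content code".
    return image

def sample_style(style_dim=8):
    # Style codes sampled from a unit Gaussian prior, as in
    # multimodal translation methods of this kind.
    return rng.standard_normal(style_dim)

def decode(content_code, style_code):
    # Placeholder decoder: modulate content by a scalar derived from
    # the style code (real decoders use AdaIN-style modulation layers).
    scale = 1.0 + 0.1 * style_code.mean()
    return content_code * scale

source_image = np.ones((4, 4))          # toy "image" from domain A
content = encode_content(source_image)  # domain-invariant content code
style = sample_style()                  # random style from domain B
translated = decode(content, style)     # translation into domain B
print(translated.shape)
```

Sampling a different style code for the same content code yields a different output, which is what makes the translation multimodal.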
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
#2 best model for Multimodal Unsupervised Image-To-Image Translation on Cats-and-Dogs
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner.
Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, particularly in the aforementioned challenging cases.
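The learnable normalization referred to above interpolates between instance-norm and layer-norm statistics with a learned ratio. A minimal NumPy sketch of that idea, assuming a `(C, H, W)` feature map and scalar `gamma`, `beta`, `rho` for simplicity:

```python
import numpy as np

def adaptive_norm(x, gamma, beta, rho, eps=1e-5):
    """Blend instance-norm and layer-norm statistics with a learned
    ratio rho, then apply affine parameters gamma and beta.
    x: feature map of shape (C, H, W)."""
    # Instance norm: per-channel statistics over spatial dimensions.
    mu_in = x.mean(axis=(1, 2), keepdims=True)
    var_in = x.var(axis=(1, 2), keepdims=True)
    x_in = (x - mu_in) / np.sqrt(var_in + eps)
    # Layer norm: statistics over all channels and spatial positions.
    mu_ln = x.mean(keepdims=True)
    var_ln = x.var(keepdims=True)
    x_ln = (x - mu_ln) / np.sqrt(var_ln + eps)
    # Learned interpolation ratio, constrained to [0, 1].
    rho = np.clip(rho, 0.0, 1.0)
    x_hat = rho * x_in + (1.0 - rho) * x_ln
    return gamma * x_hat + beta

features = np.random.default_rng(1).standard_normal((2, 4, 4))
out = adaptive_norm(features, gamma=1.0, beta=0.0, rho=0.5)
print(out.shape)
```

In the actual method, `gamma` and `beta` are produced per-channel by the network and `rho` is learned per layer; the scalars here are only to keep the sketch short.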
Domain adaptation is critical for success in new, unseen environments.
Here, we propose a new approach to domain adaptation in deep architectures that can be trained on large amounts of labeled data from the source domain and large amounts of unlabeled data from the target domain (no labeled target-domain data is necessary).
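The training setup described above combines a supervised loss on the labeled source batch with an unsupervised alignment loss on the unlabeled target batch. The sketch below is a generic illustration of that recipe (a simple feature-moment-matching loss is my own stand-in, not the specific mechanism of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def extract_features(x, w):
    # Shared feature extractor applied to both domains.
    return np.tanh(x @ w)

def task_loss(f, y, v):
    # Supervised loss on labeled source data (squared error here).
    return np.mean((f @ v - y) ** 2)

def alignment_loss(f_src, f_tgt):
    # Unsupervised loss on unlabeled target data: match the first
    # moments of source and target feature distributions.
    return np.sum((f_src.mean(axis=0) - f_tgt.mean(axis=0)) ** 2)

# Toy batches: labeled source, unlabeled target.
w = rng.standard_normal((3, 4))          # feature extractor weights
v = rng.standard_normal(4)               # task head weights
x_src = rng.standard_normal((8, 3))
y_src = rng.standard_normal(8)           # source labels
x_tgt = rng.standard_normal((8, 3))      # no labels for the target

f_src = extract_features(x_src, w)
f_tgt = extract_features(x_tgt, w)
total = task_loss(f_src, y_src, v) + alignment_loss(f_src, f_tgt)
print(total)
```

A real implementation would minimize `total` by gradient descent over `w` and `v`; the point here is only the loss structure: labels enter through the source term, the target contributes unlabeled data only.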
SOTA for Domain Adaptation on UCF-to-Olympic