Unsupervised image-to-image translation is the task of translating images between domains without ground-truth paired examples.
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs.
#2 best model for Image-to-Image Translation on Cityscapes Photo-to-Labels
We propose a novel method for unsupervised image-to-image translation, which incorporates a new attention module and a new learnable normalization function in an end-to-end manner.
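The "new learnable normalization function" referenced here is AdaLIN (from U-GAT-IT), which interpolates between instance normalization and layer normalization with a learnable ratio. A minimal NumPy sketch, where the ratio `rho` and the affine parameters `gamma`/`beta` are fixed scalar stand-ins for learned values:

```python
import numpy as np

def adalin(x, gamma, beta, rho, eps=1e-5):
    # x: feature map of shape (C, H, W).
    # Instance norm: normalize each channel over its spatial dimensions.
    inorm = (x - x.mean(axis=(1, 2), keepdims=True)) / np.sqrt(
        x.var(axis=(1, 2), keepdims=True) + eps)
    # Layer norm: normalize over all channels and spatial positions jointly.
    lnorm = (x - x.mean()) / np.sqrt(x.var() + eps)
    # AdaLIN blends the two with ratio rho, then applies a learned affine map.
    return gamma * (rho * inorm + (1 - rho) * lnorm) + beta

x = np.random.default_rng(1).standard_normal((3, 4, 4))
y = adalin(x, gamma=1.0, beta=0.0, rho=0.7)
print(y.shape)  # output keeps the input feature-map shape
```

In the full model, `rho` is clipped to [0, 1] and learned per layer, letting the network choose per-feature how much instance-wise versus layer-wise statistics to use.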
To translate an image to another domain, we recombine its content code with a random style code sampled from the style space of the target domain.
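This content/style recombination (as in MUNIT-style models) can be sketched with toy stand-ins: the "content encoder", "style prior", and "decoder" below are hypothetical one-liners, not the paper's networks, and exist only to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)

def content_encoder(image):
    # Toy content code: keep spatial structure, strip a global "style" (mean intensity).
    return image - image.mean()

def sample_style(style_dim=8):
    # Style codes are sampled from a simple prior over the target domain's style space.
    return rng.standard_normal(style_dim)

def decoder(content, style):
    # Toy decoder: modulate the content code with scalars derived from the style code.
    scale, shift = 1.0 + 0.1 * style[0], 0.1 * style[1]
    return scale * content + shift

image = rng.random((4, 4))
translated = decoder(content_encoder(image), sample_style())
print(translated.shape)  # translation preserves the input's spatial size
```

Sampling different style codes for the same content code is what makes the translation multimodal: one input image maps to many plausible outputs.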
We study the problem of transferring a sample in one domain to an analogous sample in another domain.
#2 best model for Unsupervised Image-To-Image Translation on SVHN-to-MNIST
Unsupervised image-to-image translation aims at learning a joint distribution of images in different domains by using images from the marginal distributions in individual domains.
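Learning a joint distribution from marginals alone is ill-posed; a common constraint (used by CycleGAN/UNIT-style methods) is cycle consistency, which penalizes a round trip through both domains for not returning the original sample. A toy sketch with hypothetical linear "generators":

```python
import numpy as np

# Two toy "generators": G maps domain X to Y, F maps Y back to X.
# Real models are neural networks; linear maps suffice to show the loss.
G = np.array([[2.0]])
F = np.array([[0.5]])

def cycle_loss(x):
    # Mean absolute error between F(G(x)) and x.
    return float(np.abs(F @ (G @ x) - x).mean())

x = np.array([[3.0]])
print(cycle_loss(x))  # 0.0 here, since F is exactly G's inverse
```

Minimizing this loss jointly with adversarial losses on each marginal ties the two unpaired datasets together without any ground-truth pairings.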
#2 best model for Multimodal Unsupervised Image-To-Image Translation on Cats-and-Dogs
Unsupervised image-to-image translation methods learn to map images in a given class to an analogous image in a different class, drawing on unstructured (non-registered) datasets of images.
Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).
Our comparative evaluation demonstrates the effectiveness of the proposed method on several image datasets, particularly in the aforementioned challenging cases.
Domain adaptation is critical for success in new, unseen environments.
Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene.