Unsupervised image-to-image translation is the task of translating images between two domains without ground-truth image pairings.
Unsupervised image-to-image translation is an inherently ill-posed problem.
In this paper, we tackle image-to-image translation in a fully unsupervised setting, i.e., with neither paired images nor domain labels.
Since it does not need to satisfy a cycle constraint, no irrelevant traces of the input are left on the generated image.
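For context, the cycle constraint referred to here is the cycle-consistency loss popularized by CycleGAN, which forces a translated image to map back to its source. A minimal PyTorch sketch, assuming G_ab and G_ba are arbitrary generator modules:

```python
import torch
import torch.nn as nn

def cycle_consistency_loss(G_ab: nn.Module, G_ba: nn.Module,
                           real_a: torch.Tensor, real_b: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    """CycleGAN-style cycle constraint: A -> B -> A must reconstruct A."""
    l1 = nn.L1Loss()
    rec_a = G_ba(G_ab(real_a))  # translate A to B, then back to A
    rec_b = G_ab(G_ba(real_b))  # translate B to A, then back to B
    return weight * (l1(rec_a, real_a) + l1(rec_b, real_b))
```

Dropping this term frees the generator from having to preserve information needed only for reconstruction, which is why methods without it can avoid leaving such traces in the output.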
With TuiGAN, an image is translated in a coarse-to-fine manner, where the generated image is gradually refined from global structures to local details.
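A hedged sketch of what such a coarse-to-fine pipeline can look like. The per-scale generators and the concatenated-input design below are illustrative assumptions, not TuiGAN's exact architecture:

```python
import torch
import torch.nn.functional as F

def coarse_to_fine_translate(x, generators):
    """Refine a translation across scales, coarsest first.

    `generators` is a hypothetical list of per-scale modules; each one
    takes the downsampled input concatenated with the upsampled result
    of the previous scale and returns a refined image.
    """
    n = len(generators)
    # Image pyramid, ordered coarse to fine (the finest equals the input).
    pyramid = [F.interpolate(x, scale_factor=0.5 ** (n - 1 - i),
                             mode='bilinear', align_corners=False)
               for i in range(n)]
    out = torch.zeros_like(pyramid[0])
    for g, target in zip(generators, pyramid):
        # Upsample the previous result to the current scale, then refine.
        out = F.interpolate(out, size=target.shape[-2:],
                            mode='bilinear', align_corners=False)
        out = g(torch.cat([target, out], dim=1))
    return out
```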
The instability in GAN training has been a long-standing problem despite remarkable research efforts.
We present the high-resolution daytime translation (HiDT) model for this task.
The proposed architecture, termed NICE-GAN, exhibits two advantages over previous approaches: first, it is more compact, since no independent encoding component is required; second, the plug-in encoder is trained directly by the adversarial loss, making it more informative and more effectively trained when a multi-scale discriminator is applied.
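A rough sketch of the reused-encoder idea, with illustrative layer sizes rather than NICE-GAN's actual configuration: the discriminator's early layers double as the translation encoder, so the adversarial loss trains the encoding directly and its features can be handed to the decoder:

```python
import torch
import torch.nn as nn

class ReusingDiscriminator(nn.Module):
    """Discriminator whose early layers also serve as the encoder."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(  # shared with the generator's decoder
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        self.head = nn.Conv2d(128, 1, 4, stride=1, padding=1)  # real/fake logits

    def forward(self, x):
        feat = self.encoder(x)        # adversarially trained features
        return self.head(feat), feat  # feat is reused for translation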
Instead of executing translation directly, we steer the translation by requiring the network to produce in-between images that resemble weighted hybrids between images from the input domains.
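One common way to realize such weighted hybrids is to interpolate style codes in a disentangled content/style representation. The sketch below assumes hypothetical encoder.content and encoder.style sub-modules and is not necessarily this paper's exact mechanism:

```python
def hybrid_translate(encoder, decoder, x_src, x_tgt, alpha):
    """Produce an in-between image by blending style codes.

    `encoder.content` and `encoder.style` are hypothetical sub-modules of
    a disentangled content/style model; alpha = 0 keeps the source style,
    alpha = 1 applies the target style, intermediate values give hybrids.
    """
    content = encoder.content(x_src)
    style = (1.0 - alpha) * encoder.style(x_src) + alpha * encoder.style(x_tgt)
    return decoder(content, style)
```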
The goal of unsupervised image-to-image translation is to map images from one domain to another without the ground truth correspondence between the two domains.