Multimodal Unsupervised Image-To-Image Translation

14 papers with code • 6 benchmarks • 4 datasets

Multimodal unsupervised image-to-image translation is the task of producing multiple diverse translations in one domain from a single image in another domain, without paired training data.

(Image credit: MUNIT: Multimodal UNsupervised Image-to-image Translation)
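In MUNIT-style approaches, multimodality comes from decomposing an image into a content code and a style code: one content code combined with many randomly sampled style codes yields many distinct translations. The following is a minimal illustrative sketch of that idea (the encoder and decoder here are toy stand-ins, not MUNIT's actual networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_content(image):
    # Toy content encoder: here just the flattened input.
    return image.reshape(-1)

def decode(content, style, out_shape):
    # Toy decoder: modulates the content code with the style vector.
    # Real models use AdaIN or similar style injection.
    return (content * style.mean()).reshape(out_shape)

source = rng.random((4, 4))        # one image from the source domain
content = encode_content(source)

# Sampling K different style codes yields K distinct translations
# of the same source image.
translations = [
    decode(content, rng.standard_normal(8), source.shape)
    for _ in range(3)
]
print(len(translations))  # 3 outputs from a single input image
```

The key design point is that diversity is driven entirely by the sampled style code; the content code stays fixed, so all outputs preserve the structure of the input.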

Latest papers with no code

Combining Noise-to-Image and Image-to-Image GANs: Brain MR Image Augmentation for Tumor Detection

no code yet • 31 May 2019

In this context, Generative Adversarial Networks (GANs) can synthesize realistic and diverse additional training images to fill gaps in the real image distribution; researchers have improved classification by augmenting data with noise-to-image GANs (e.g., random noise samples to diverse pathological images) or image-to-image GANs (e.g., a benign image to a malignant one).

Semi-Supervised Image-to-Image Translation

no code yet • 24 Jan 2019

The advantage of using such an approach is that the image-to-image translation is semi-supervised, independent of image segmentation, and inherits the tendency of generative adversarial networks to produce realistic outputs.

Latent Filter Scaling for Multimodal Unsupervised Image-to-Image Translation

no code yet • CVPR 2019

In multimodal unsupervised image-to-image translation tasks, the goal is to translate an image from the source domain to many images in the target domain.