Video-to-Video Synthesis

4 papers with code · Computer Vision
Subtask of Video

Learning a mapping function from an input source video to an output video.

Greatest papers with code

Video-to-Video Synthesis

NeurIPS 2018 NVIDIA/vid2vid

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video.

Semantic Segmentation · Video Prediction · Video-to-Video Synthesis
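
The core idea behind vid2vid is sequential generation: each output frame is conditioned both on the current source frame (e.g., a segmentation map) and on previously generated frames, which is what enforces temporal coherence. Below is a minimal PyTorch sketch of that conditioning loop; the module names, channel counts, and the tiny convolutional generator are illustrative assumptions, not the architecture from NVIDIA/vid2vid.

```python
import torch
import torch.nn as nn

class SequentialGenerator(nn.Module):
    """Toy stand-in for a vid2vid-style generator (assumed, not the real one)."""
    def __init__(self, label_ch=35, img_ch=3, past=2, hidden=64):
        super().__init__()
        in_ch = label_ch + past * img_ch  # current label map + K past frames
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, label_t, past_frames):
        # past_frames: list of K previously generated frames, each (B, 3, H, W)
        return self.net(torch.cat([label_t] + past_frames, dim=1))

G = SequentialGenerator()
B, H, W = 1, 64, 64
past = [torch.zeros(B, 3, H, W) for _ in range(2)]  # bootstrap with blank frames
video = []
for t in range(4):                                  # 4 label maps -> 4 frames
    label_t = torch.randn(B, 35, H, W)              # stand-in segmentation map
    frame = G(label_t, past)
    video.append(frame)
    past = [past[-1], frame]                        # slide the temporal window
```

Sliding the window of past frames forward at each step is what lets appearance propagate consistently from frame to frame.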

Few-shot Video-to-Video Synthesis

NeurIPS 2019 NVlabs/few-shot-vid2vid

To address the limitations of existing vid2vid methods, which require many images of the target and generalize poorly to unseen subjects, we propose a few-shot vid2vid framework that learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time.

Video-to-Video Synthesis
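
One way to read "few-shot" here is hypernetwork-style adaptation: a small network maps the example images of the unseen target to parameters that modulate the generator at test time, so no gradient-based retraining is needed. The sketch below assumes that reading; FewShotModulator, the channel sizes, and the single generated 1x1 conv layer are hypothetical simplifications, not the NVlabs/few-shot-vid2vid code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FewShotModulator(nn.Module):
    """Hypothetical hypernetwork: example images -> weights for one generator layer."""
    def __init__(self, img_ch=3, feat=32):
        super().__init__()
        self.feat = feat
        self.encoder = nn.Sequential(
            nn.Conv2d(img_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),               # one feature vector per example
        )
        # Predicts the weight tensor of a 1x1 conv inside the generator.
        self.to_weight = nn.Linear(feat, feat * feat)

    def forward(self, examples):
        # examples: (K, 3, H, W) -- the few example images of the unseen target
        z = self.encoder(examples).flatten(1).mean(0)   # average over the K shots
        return self.to_weight(z).view(self.feat, self.feat, 1, 1)

mod = FewShotModulator()
examples = torch.randn(3, 3, 64, 64)   # K=3 target images given at test time
w = mod(examples)                      # target-specific conv weights
feats = torch.randn(1, 32, 64, 64)     # intermediate generator features
adapted = F.conv2d(feats, w)           # generator layer adapted to the target
```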

Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence

CVPR 2019 mcahny/Deep-Video-Inpainting

Blind video decaptioning is the problem of automatically removing text overlays and inpainting the occluded parts of a video without any input masks.

Video Denoising · Video Inpainting · Video-to-Video Synthesis
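
The title names the two mechanisms: temporal aggregation (pooling features from a window of neighboring frames, which often see behind the caption) and recurrence (feeding the previously restored frame back in, so the output stays temporally stable). A hedged sketch of how those two signals can be fused, with an assumed toy network rather than the authors' model:

```python
import torch
import torch.nn as nn

class BlindDecaptioner(nn.Module):
    """Assumed toy fusion of aggregated neighbor features and the previous output."""
    def __init__(self, ch=3, hidden=32):
        super().__init__()
        self.enc = nn.Conv2d(ch, hidden, 3, padding=1)
        # Fuses: aggregated window features + current features + previous output.
        self.fuse = nn.Sequential(
            nn.Conv2d(hidden * 2 + ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, ch, 3, padding=1),
        )

    def forward(self, frames, prev_out):
        # frames: (T, B, 3, H, W) window centered on the current frame; no mask input.
        feats = torch.stack([self.enc(f) for f in frames])  # (T, B, hidden, H, W)
        agg = feats.mean(0)                 # temporal aggregation over the window
        cur = feats[frames.shape[0] // 2]   # features of the current (center) frame
        return self.fuse(torch.cat([agg, cur, prev_out], dim=1))

net = BlindDecaptioner()
window = torch.randn(5, 1, 3, 64, 64)   # 5-frame window around the current frame
prev = torch.zeros(1, 3, 64, 64)        # recurrence: previous restored frame
restored = net(window, prev)            # (1, 3, 64, 64) decaptioned frame
```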

Deep Video Inpainting

CVPR 2019 mcahny/Deep-Video-Inpainting

Video inpainting aims to fill spatio-temporal holes in a video with plausible content.

Image Inpainting · Video Denoising · Video Inpainting · Video-to-Video Synthesis
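
Whatever the backbone, inpainting models typically end with a composition step: the network predicts content everywhere, but only pixels inside the hole (mask = 1, by the usual convention) are replaced, so known pixels pass through exactly. A per-frame sketch of that step, with a placeholder network standing in for the paper's spatio-temporal model:

```python
import torch
import torch.nn as nn

class VideoInpainter(nn.Module):
    """Placeholder network; only the final composition step is the point here."""
    def __init__(self, ch=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + 1, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, ch, 3, padding=1),
        )

    def forward(self, frame, mask):
        # mask: (B, 1, H, W), 1 inside the hole, 0 on valid pixels (assumed convention)
        pred = self.net(torch.cat([frame * (1 - mask), mask], dim=1))
        return mask * pred + (1 - mask) * frame  # valid pixels pass through unchanged

net = VideoInpainter()
frame = torch.randn(1, 3, 64, 64)
mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0   # a square hole in one frame
out = net(frame, mask)          # hole filled, known content preserved
```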