Video-to-Video Synthesis

8 papers with code • 2 benchmarks • 1 dataset

Video-to-video synthesis is the task of learning a mapping from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that depicts the content of the source video.

Most implemented papers

Video-to-Video Synthesis

NVIDIA/vid2vid NeurIPS 2018

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video.
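A minimal sketch of this mapping, assuming a PyTorch-style sequential generator that conditions each output frame on the current segmentation map and the previously generated frame (module and tensor shapes here are illustrative, not the repository's actual API):

```python
import torch
import torch.nn as nn

class Vid2VidGenerator(nn.Module):
    """Toy sequential generator: maps a semantic map plus the previous
    RGB frame to the next RGB frame (illustrative only)."""
    def __init__(self, num_classes=35, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 3, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 3, padding=1),
            nn.Tanh(),
        )

    def forward(self, seg_onehot, prev_frame):
        # seg_onehot: (B, num_classes, H, W), prev_frame: (B, 3, H, W)
        return self.net(torch.cat([seg_onehot, prev_frame], dim=1))

# Generate a short clip frame-by-frame from segmentation maps.
G = Vid2VidGenerator()
B, T, C, H, W = 1, 4, 35, 64, 64
segs = torch.randn(B, T, C, H, W).softmax(dim=2)  # stand-in segmentation maps
frame = torch.zeros(B, 3, H, W)                   # blank first frame
video = []
for t in range(T):
    frame = G(segs[:, t], frame)
    video.append(frame)
video = torch.stack(video, dim=1)  # (B, T, 3, H, W)
```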

Few-shot Video-to-Video Synthesis

NVlabs/few-shot-vid2vid NeurIPS 2019

To address the limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time.
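As a rough, hypothetical illustration of that test-time interface, the generator below additionally receives K example images of the unseen target and pools them into a modulation code (a toy sketch, not the NVlabs/few-shot-vid2vid architecture):

```python
import torch
import torch.nn as nn

class FewShotGenerator(nn.Module):
    """Toy few-shot generator: a style code pooled from K example images
    of the target modulates the synthesis of each frame (illustrative)."""
    def __init__(self, num_classes=35, hidden=64):
        super().__init__()
        self.encode_example = nn.Sequential(
            nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),            # -> (B*K, hidden, 1, 1)
        )
        self.conv = nn.Conv2d(num_classes, hidden, 3, padding=1)
        self.to_rgb = nn.Conv2d(hidden, 3, 3, padding=1)

    def forward(self, seg_onehot, examples):
        # seg_onehot: (B, C, H, W); examples: (B, K, 3, H, W)
        B, K = examples.shape[:2]
        style = self.encode_example(examples.flatten(0, 1))     # (B*K, hidden, 1, 1)
        style = style.view(B, K, -1, 1, 1).mean(dim=1)          # average over the K examples
        feat = torch.relu(self.conv(seg_onehot)) * (1 + style)  # feature-wise modulation
        return torch.tanh(self.to_rgb(feat))

G = FewShotGenerator()
seg = torch.randn(2, 35, 64, 64).softmax(dim=1)
examples = torch.randn(2, 3, 3, 64, 64)    # 3 example images per target
frame = G(seg, examples)                   # (2, 3, 64, 64)
```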

Deep Video Inpainting

mcahny/Deep-Video-Inpainting CVPR 2019

Video inpainting aims to fill spatio-temporal holes with plausible content in a video.
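One way to picture the inputs to such a model, assuming the missing regions are given as spatio-temporal binary masks (the layout below is a common convention, not this repository's exact format):

```python
import torch

# A video with spatio-temporal holes: frames (T, 3, H, W) and a binary
# mask (T, 1, H, W) where 1 marks pixels to be filled in.
T, H, W = 8, 64, 64
frames = torch.rand(T, 3, H, W)
mask = torch.zeros(T, 1, H, W)
mask[:, :, 16:48, 16:48] = 1.0            # a hole running through the clip

corrupted = frames * (1 - mask)                     # zero out the hole
model_input = torch.cat([corrupted, mask], dim=1)   # (T, 4, H, W) conditioning
```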

Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence

shwoo93/video_decaptioning CVPR 2019

Blind video decaptioning is a problem of automatically removing text overlays and inpainting the occluded parts in videos without any input masks.

GANs in computer vision ebook

The-AI-Summer/GANs-in-Computer-Vision ebook 2020

We hope that this series provides a broad overview of the field, so that you will not need to read all the literature yourself, regardless of your background in GANs.

Compositional Video Synthesis with Action Graphs

roeiherz/AG2Video 27 Jun 2020

Our generative model for this task (AG2Vid) disentangles motion and appearance features and, by incorporating a scheduling mechanism for actions, facilitates timely and coordinated video generation.
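As a purely hypothetical illustration, an action graph of this kind can be thought of as a set of objects plus action edges scheduled over frame ranges (not the AG2Vid input format):

```python
# Objects and timed actions; each action edge is scheduled over a frame range,
# which is what allows the generator to coordinate when each action happens.
action_graph = {
    "objects": ["hand", "cup"],
    "actions": [
        {"subject": "hand", "verb": "grasp", "object": "cup", "frames": (0, 10)},
        {"subject": "hand", "verb": "lift",  "object": "cup", "frames": (10, 30)},
    ],
}
```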

Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis

fast-vid2vid/fast-vid2vid 11 Jul 2022

In this paper, we present a spatial-temporal compression framework, Fast-Vid2Vid, which focuses on data aspects of generative models.
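A hedged sketch of what compressing the data stream could mean in practice: spatially downsampling the input semantic maps and keeping only temporal keyframes before synthesis, with the skipped frames later recovered by motion-based interpolation (the helper and parameters below are hypothetical, not the paper's exact pipeline):

```python
import torch
import torch.nn.functional as F

def compress_inputs(seg_maps, spatial_scale=0.5, keyframe_stride=2):
    """Shrink the generator's data-side workload: keep every
    `keyframe_stride`-th frame and downscale each by `spatial_scale`."""
    keyframes = seg_maps[::keyframe_stride]                 # temporal compression
    return F.interpolate(keyframes, scale_factor=spatial_scale,
                         mode="nearest")                    # spatial compression

seg_maps = torch.randn(16, 35, 256, 256)   # (T, C, H, W) source sequence
small = compress_inputs(seg_maps)          # (8, 35, 128, 128)
# The dropped frames would then be recovered by motion-based interpolation.
```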

SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches

ribombee/sketchbetween 1 Sep 2022

We propose a problem formulation that more closely adheres to the standard workflow of animation.