Video-to-Video Synthesis
8 papers with code • 2 benchmarks • 1 dataset
Most implemented papers
Video-to-Video Synthesis
We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video.
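Conceptually, the mapping can be pictured as a conditional generator applied frame by frame, conditioned on the current semantic map and the previously synthesized frame. The sketch below illustrates only that setup, not the vid2vid architecture; the FrameGenerator and synthesize_video names, all layer sizes, and the 35-class map are illustrative assumptions.

```python
# Conceptual sketch (not the vid2vid architecture): a generator that maps the
# current semantic map plus the previous output frame to the next RGB frame,
# applied autoregressively over the sequence. Sizes are illustrative.
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    def __init__(self, num_classes=35, hidden=64):
        super().__init__()
        # Input: semantic map (num_classes channels) + previous RGB frame (3 channels)
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + 3, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 3, padding=1),
            nn.Tanh(),  # RGB in [-1, 1]
        )

    def forward(self, semantic_map, prev_frame):
        return self.net(torch.cat([semantic_map, prev_frame], dim=1))

def synthesize_video(generator, semantic_maps):
    """Autoregressively render frames from a sequence of semantic maps."""
    b, t, c, h, w = semantic_maps.shape
    prev = torch.zeros(b, 3, h, w)        # start from a blank frame
    frames = []
    for i in range(t):
        prev = generator(semantic_maps[:, i], prev)
        frames.append(prev)
    return torch.stack(frames, dim=1)     # (B, T, 3, H, W)

if __name__ == "__main__":
    gen = FrameGenerator()
    maps = torch.randn(1, 4, 35, 64, 64)  # random stand-ins for 4 one-hot maps
    video = synthesize_video(gen, maps)
    print(video.shape)                    # torch.Size([1, 4, 3, 64, 64])
```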
Few-shot Video-to-Video Synthesis
To address the limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time.
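One way to picture the few-shot setting, in a hedged sketch of the general idea rather than the weight-generation mechanism used in few-shot vid2vid, is to encode the handful of target example images into an appearance embedding and condition the generator on it. The AppearanceEncoder and ConditionedGenerator names and all sizes below are invented for illustration.

```python
# Minimal sketch of the few-shot idea: encode K example images of the unseen
# target into an appearance embedding, then condition frame generation on it.
import torch
import torch.nn as nn

class AppearanceEncoder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, examples):                # (K, 3, H, W)
        feats = self.conv(examples).flatten(1)  # (K, 64)
        return self.fc(feats).mean(dim=0)       # average over the K examples

class ConditionedGenerator(nn.Module):
    def __init__(self, num_classes=35, embed_dim=128, hidden=64):
        super().__init__()
        self.embed_to_map = nn.Linear(embed_dim, hidden)
        self.net = nn.Sequential(
            nn.Conv2d(num_classes + hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, semantic_map, appearance):  # (B, C, H, W), (D,)
        b, _, h, w = semantic_map.shape
        cond = self.embed_to_map(appearance).view(1, -1, 1, 1).expand(b, -1, h, w)
        return self.net(torch.cat([semantic_map, cond], dim=1))

if __name__ == "__main__":
    enc, gen = AppearanceEncoder(), ConditionedGenerator()
    target_examples = torch.randn(3, 3, 64, 64)   # 3 example images at test time
    appearance = enc(target_examples)
    frame = gen(torch.randn(2, 35, 64, 64), appearance)
    print(frame.shape)                            # torch.Size([2, 3, 64, 64])
```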
Deep Video Inpainting
Video inpainting aims to fill spatio-temporal holes in a video with plausible content.
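As a rough illustration of the task setup (not the method proposed in this paper), a spatio-temporal model can take the masked video together with the hole mask, predict the missing content, and composite it back so that known pixels are preserved. The Inpainter3D module below is a hypothetical placeholder.

```python
# Rough sketch of the video inpainting setting: a 3D convolutional network
# receives the masked video and the hole mask, predicts the missing content,
# and the known regions are copied back at the end.
import torch
import torch.nn as nn

class Inpainter3D(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(4, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(hidden, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, video, mask):
        # video: (B, 3, T, H, W); mask: (B, 1, T, H, W), 1 inside the hole
        masked = video * (1 - mask)
        pred = self.net(torch.cat([masked, mask], dim=1))
        return masked + pred * mask   # keep known pixels, fill only the hole

if __name__ == "__main__":
    model = Inpainter3D()
    video = torch.rand(1, 3, 8, 64, 64) * 2 - 1
    mask = torch.zeros(1, 1, 8, 64, 64)
    mask[..., 20:40, 20:40] = 1.0     # a spatio-temporal hole
    out = model(video, mask)
    print(out.shape)                  # torch.Size([1, 3, 8, 64, 64])
```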
Deep Blind Video Decaptioning by Temporal Aggregation and Recurrence
Blind video decaptioning is the problem of automatically removing text overlays and inpainting the occluded parts of a video without any input masks.
GANs in computer vision ebook
We hope this series gives you a broad overview of the field, regardless of your background in GANs, so that you do not need to read all the literature yourself.
Compositional Video Synthesis with Action Graphs
Our generative model for this task (AG2Vid) disentangles motion and appearance features and, by incorporating a scheduling mechanism for actions, facilitates timely and coordinated video generation.
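The scheduling idea can be illustrated in miniature: if each action in the graph carries a start and end time, a scheduler can expand the graph into per-frame sets of active actions with a progress value that could drive motion synthesis. The Action and schedule_actions names and the linear progress signal below are assumptions for illustration, not the AG2Vid mechanism.

```python
# Illustrative sketch of action scheduling: each action has a start and end
# frame, and the scheduler turns that into a per-frame activation signal.
from dataclasses import dataclass

@dataclass
class Action:
    subject: str
    verb: str
    target: str
    start_frame: int
    end_frame: int

def schedule_actions(actions, num_frames):
    """For each frame, list the actions active at that frame together with a
    linear progress value in [0, 1]."""
    timeline = []
    for t in range(num_frames):
        active = []
        for a in actions:
            if a.start_frame <= t <= a.end_frame:
                duration = max(a.end_frame - a.start_frame, 1)
                progress = (t - a.start_frame) / duration
                active.append((a, progress))
        timeline.append(active)
    return timeline

if __name__ == "__main__":
    graph = [
        Action("hand", "pick up", "cup", start_frame=0, end_frame=10),
        Action("hand", "put down", "cup", start_frame=12, end_frame=20),
    ]
    for t, active in enumerate(schedule_actions(graph, 24)):
        labels = [f"{a.verb} {a.target} ({p:.2f})" for a, p in active]
        print(t, labels)
```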
Fast-Vid2Vid: Spatial-Temporal Compression for Video-to-Video Synthesis
In this paper, we present a spatial-temporal compression framework, Fast-Vid2Vid, which focuses on the data aspects of generative models.
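One assumed reading of compressing the data side (this is not a description of the Fast-Vid2Vid method itself) is to run the expensive generator only on spatially downscaled key frames and reconstruct the remaining frames by upsampling and temporal interpolation, as in the sketch below; compress_then_synthesize and all parameters are hypothetical.

```python
# Hedged sketch of spatial-temporal compression on the data side: generate only
# downscaled key frames, then upsample and linearly blend the frames in between.
import torch
import torch.nn.functional as F

def compress_then_synthesize(semantic_maps, generator, spatial_scale=0.5, keyframe_stride=2):
    """semantic_maps: (T, C, H, W); generator: maps (1, C, h, w) -> (1, 3, h, w)."""
    t, c, h, w = semantic_maps.shape
    key_frames = {}
    for i in range(0, t, keyframe_stride):
        small = F.interpolate(semantic_maps[i:i + 1], scale_factor=spatial_scale,
                              mode="nearest")
        out_small = generator(small)
        key_frames[i] = F.interpolate(out_small, size=(h, w),
                                      mode="bilinear", align_corners=False)
    key_ids = sorted(key_frames)
    frames = []
    for i in range(t):
        if i in key_frames:
            frames.append(key_frames[i])
        else:
            # linear blend between the neighbouring key frames
            prev = max(k for k in key_ids if k < i)
            nxt = min((k for k in key_ids if k > i), default=prev)
            alpha = 0.5 if nxt == prev else (i - prev) / (nxt - prev)
            frames.append((1 - alpha) * key_frames[prev] + alpha * key_frames[nxt])
    return torch.cat(frames, dim=0)   # (T, 3, H, W)

if __name__ == "__main__":
    dummy_gen = lambda x: torch.tanh(x[:, :3])   # stand-in for a trained generator
    maps = torch.randn(9, 35, 64, 64)
    video = compress_then_synthesize(maps, dummy_gen)
    print(video.shape)                           # torch.Size([9, 3, 64, 64])
```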
SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches
We propose a problem formulation that more closely adheres to the standard workflow of animation.