Video Style Transfer

14 papers with code • 0 benchmarks • 0 datasets

Video style transfer renders the frames of an input video in the visual style of a reference image, video, or text prompt while preserving the input's content and keeping the result temporally consistent, so that the stylization does not flicker from frame to frame.
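As a point of reference for the papers below, here is a minimal sketch of the naive per-frame baseline they improve on: every frame is stylized independently, so nothing enforces temporal consistency. Simple Reinhard-style color statistics transfer stands in for a real stylization network, and the file paths are hypothetical.

```python
# Naive per-frame baseline sketch (assumes OpenCV and NumPy).
# Color statistics transfer is a stand-in for a real stylization model.
import cv2
import numpy as np

def color_transfer(style_mean, style_std, frame):
    """Match the frame's LAB channel statistics to the style image's."""
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB).astype(np.float32)
    mean, std = lab.mean(axis=(0, 1)), lab.std(axis=(0, 1)) + 1e-6
    lab = (lab - mean) / std * style_std + style_mean
    return cv2.cvtColor(np.clip(lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

style = cv2.imread("style.jpg")  # hypothetical path
style_lab = cv2.cvtColor(style, cv2.COLOR_BGR2LAB).astype(np.float32)
s_mean, s_std = style_lab.mean(axis=(0, 1)), style_lab.std(axis=(0, 1))

cap = cv2.VideoCapture("input.mp4")  # hypothetical path
fps = cap.get(cv2.CAP_PROP_FPS) or 30
out = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    stylized = color_transfer(s_mean, s_std, frame)
    if out is None:
        h, w = stylized.shape[:2]
        out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    out.write(stylized)  # frames stylized independently: flicker is likely
cap.release()
if out is not None:
    out.release()
```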

Most implemented papers

Two Birds, One Stone: A Unified Framework for Joint Learning of Image and Video Style Transfers

NevSNev/UniST ICCV 2023

Current arbitrary style transfer models are limited to either image or video domains.

Style-A-Video: Agile Diffusion for Arbitrary Text-based Video Style Transfer

haha-lisa/style-a-video 9 May 2023

Large-scale text-to-video diffusion models have demonstrated an exceptional ability to synthesize diverse videos.

Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

weifeng-chen/control-a-video 23 May 2023

Based on a pre-trained conditional text-to-image (T2I) diffusion model, our model aims to generate videos conditioned on a sequence of control signals, such as edge or depth maps.
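Control-A-Video trains its own video diffusion model with temporal layers; as a rough illustrative analogue only (not the paper's code), the sketch below conditions a stock per-frame image diffusion pipeline on Canny edge maps via Hugging Face diffusers. The model checkpoints, prompt, and paths are assumptions.

```python
# Illustrative analogue: per-frame edge-conditioned generation with a stock
# ControlNet pipeline, NOT Control-A-Video's actual model. Paths hypothetical.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

cap = cv2.VideoCapture("input.mp4")  # hypothetical path
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    edges = cv2.Canny(frame, 100, 200)  # control signal: edge map
    control = Image.fromarray(np.stack([edges] * 3, axis=-1))
    # Reusing the same seed for every frame is a crude flicker-reduction
    # heuristic; the paper's temporal modeling replaces this.
    result = pipe(
        "a watercolor painting",
        image=control,
        num_inference_steps=20,
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    frames.append(result)
cap.release()
```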

WAIT: Feature Warping for Animation to Illustration video Translation using GANs

giddyyupp/wait 7 Oct 2023

Current state-of-the-art video-to-video translation models rely on having a video sequence or a single style image to stylize an input video.
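WAIT warps intermediate network features to stabilize the GAN's output over time. The sketch below shows the underlying flow-warping operation at the image level using OpenCV's Farneback optical flow; it is a generic illustration of the technique, not the paper's feature-level implementation, and the frame arrays are assumed inputs.

```python
# Generic flow-based warping sketch (image-level; WAIT applies the same idea
# to intermediate GAN features). Assumes same-size uint8 BGR frames.
import cv2
import numpy as np

def warp_to_next(prev_stylized, prev_frame, next_frame):
    """Warp the previous stylized frame so it aligns with the next frame."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    # Backward flow (next -> prev): for each pixel in the next frame, where it
    # came from in the previous frame, which is what cv2.remap expects.
    flow = cv2.calcOpticalFlowFarneback(
        next_gray, prev_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
    )
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_stylized, map_x, map_y, cv2.INTER_LINEAR)
```

In practice the warped frame is blended with the freshly stylized one, typically with an occlusion mask to discount pixels where the flow is unreliable.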