Video Style Transfer

14 papers with code • 0 benchmarks • 0 datasets

Video style transfer applies the visual style of a reference image or video to the frames of an input video while keeping the stylized result temporally coherent across frames.

Most implemented papers

A Style-Aware Content Loss for Real-time HD Style Transfer

CompVis/adaptive-style-transfer ECCV 2018

These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.

ReCoNet: Real-time Coherent Video Style Transfer Network

safwankdb/ReCoNet-PyTorch 3 Jul 2018

Image style transfer models based on convolutional neural networks usually suffer from high temporal inconsistency when applied to videos.
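
A common remedy in this line of work is an optical-flow-guided temporal consistency loss: the previous stylized frame is warped to the current frame with the flow and penalized where it disagrees, outside occluded regions. The sketch below is a minimal, generic PyTorch version of that idea, not ReCoNet's exact loss; the function names, tensor shapes, and mask convention are assumptions.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp a frame (N, C, H, W) with optical flow (N, 2, H, W).
    Assumes flow[:, 0] is the horizontal (x) and flow[:, 1] the vertical (y)
    displacement in pixels."""
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    # Shift the pixel grid by the flow and normalize to [-1, 1] for grid_sample.
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_consistency_loss(stylized_t, stylized_prev, flow, occlusion_mask):
    """Penalize differences between the current stylized frame and the
    flow-warped previous stylized frame in non-occluded regions
    (occlusion_mask is 1 where the flow is reliable, 0 elsewhere)."""
    warped_prev = warp(stylized_prev, flow)
    return (occlusion_mask * (stylized_t - warped_prev) ** 2).mean()
```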

AdaAttN: Revisit Attention Mechanism in Arbitrary Neural Style Transfer

huage001/adaattn ICCV 2021

Finally, the content feature is normalized so that it exhibits the same local feature statistics as the calculated per-point weighted style feature statistics.
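
As a rough illustration of those per-point statistics, the sketch below computes an attention map between content and style features and uses it to derive a per-location weighted mean and standard deviation from the style, then re-normalizes the content feature accordingly. It is a simplified stand-in under assumed tensor shapes, not the authors' AdaAttN implementation, which operates on multi-layer features with learned projections.

```python
import torch
import torch.nn.functional as F

def adaattn_like(content_feat, style_feat, eps=1e-5):
    """Per-point attention-weighted normalization in the spirit of AdaAttN.
    content_feat, style_feat: (N, C, H, W) feature maps (assumed shapes)."""
    n, c, h, w = content_feat.shape
    q = F.normalize(content_feat.flatten(2), dim=1).transpose(1, 2)  # (N, HWc, C)
    k = F.normalize(style_feat.flatten(2), dim=1)                    # (N, C, HWs)
    v = style_feat.flatten(2).transpose(1, 2)                        # (N, HWs, C)

    attn = torch.softmax(torch.bmm(q, k), dim=-1)       # (N, HWc, HWs)
    mean = torch.bmm(attn, v)                           # per-point weighted mean
    var = torch.bmm(attn, v ** 2) - mean ** 2           # per-point weighted variance
    std = torch.sqrt(var.clamp(min=0) + eps)

    mean = mean.transpose(1, 2).view(n, c, h, w)
    std = std.transpose(1, 2).view(n, c, h, w)

    # Normalize the content feature, then match the per-point style statistics.
    c_norm = (content_feat - content_feat.mean((2, 3), keepdim=True)) / (
        content_feat.std((2, 3), keepdim=True) + eps
    )
    return c_norm * std + mean
```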

Layered Neural Atlases for Consistent Video Editing

ykasten/layered-neural-atlases 23 Sep 2021

We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases, each providing a unified representation of the appearance of an object (or background) over the video.
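
The decomposition can be pictured as a few coordinate-based MLPs trained per video: mapping networks send each video coordinate (x, y, t) to a 2D atlas coordinate per layer, an opacity network blends the layers, and per-layer atlas networks map atlas coordinates to color. The sketch below is a bare-bones illustration of that structure; the two-layer setup, network sizes, and activations are assumptions, and the training losses are omitted.

```python
import torch
import torch.nn as nn

class AtlasDecomposition(nn.Module):
    """Minimal sketch of the layered-atlas idea (illustrative, not the
    authors' architecture)."""

    def __init__(self, hidden=256):
        super().__init__()
        def mlp(in_dim, out_dim):
            return nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )
        self.map_fg = mlp(3, 2)    # (x, y, t) -> foreground atlas uv
        self.map_bg = mlp(3, 2)    # (x, y, t) -> background atlas uv
        self.alpha = mlp(3, 1)     # (x, y, t) -> foreground opacity
        self.atlas_fg = mlp(2, 3)  # uv -> RGB
        self.atlas_bg = mlp(2, 3)  # uv -> RGB

    def forward(self, xyt):
        uv_fg = torch.tanh(self.map_fg(xyt))
        uv_bg = torch.tanh(self.map_bg(xyt))
        a = torch.sigmoid(self.alpha(xyt))
        rgb_fg = torch.sigmoid(self.atlas_fg(uv_fg))
        rgb_bg = torch.sigmoid(self.atlas_bg(uv_bg))
        # Alpha-composite the two layers; editing (e.g. stylizing) an atlas
        # propagates consistently to every frame that samples it.
        return a * rgb_fg + (1 - a) * rgb_bg
```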

Creative Flow+ Dataset

creativefloworg/creativeflow CVPR 2019

We present the Creative Flow+ Dataset, the first diverse multi-style artistic video dataset richly labeled with per-pixel optical flow, occlusions, correspondences, segmentation labels, normals, and depth.

Consistent Video Style Transfer via Relaxation and Regularization

daooshee/ReReVST-Code 23 Sep 2020

In this article, we address the problem by jointly considering the intrinsic properties of stylization and temporal consistency.

CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer

JarrentWu1031/CCPL 11 Jul 2022

CCPL can preserve the coherence of the content source during style transfer without degrading stylization.
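
The title names a contrastive coherence-preserving loss; a generic reading of that idea is that the difference between nearby feature locations in the stylized output should match the corresponding difference in the content features and mismatch the differences of other sampled pairs. The sketch below implements that reading as an InfoNCE-style loss; the neighbor sampling, temperature, and normalization are illustrative assumptions, and this is not the official CCPL implementation.

```python
import torch
import torch.nn.functional as F

def coherence_contrastive_loss(content_feat, stylized_feat, num_pairs=64, tau=0.07):
    """Contrastive loss over difference vectors of neighboring feature
    locations (sketch under assumed (N, C, H, W) inputs)."""
    n, c, h, w = content_feat.shape
    # Sample anchor locations and their right-hand neighbors.
    idx = torch.randint(0, h * (w - 1), (num_pairs,))
    ys = torch.div(idx, w - 1, rounding_mode="floor")
    xs = idx % (w - 1)

    def diffs(feat):
        a = feat[:, :, ys, xs]      # (N, C, P) anchors
        b = feat[:, :, ys, xs + 1]  # (N, C, P) right neighbors
        return F.normalize((a - b).transpose(1, 2), dim=-1)  # (N, P, C)

    d_c, d_g = diffs(content_feat), diffs(stylized_feat)
    # Each stylized difference should align with its own content difference
    # (positive) and not with the other sampled differences (negatives).
    logits = torch.bmm(d_g, d_c.transpose(1, 2)) / tau  # (N, P, P)
    target = torch.arange(num_pairs, device=logits.device).expand(n, -1)
    return F.cross_entropy(logits.reshape(-1, num_pairs), target.reshape(-1))
```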

VToonify: Controllable High-Resolution Portrait Video Style Transfer

williamyang1991/vtoonify 22 Sep 2022

Although a series of successful portrait-image toonification models built upon the powerful StyleGAN have been proposed, these image-oriented methods have clear limitations when applied to videos, such as a fixed frame size, the requirement of face alignment, missing non-facial details, and temporal inconsistency.

FateZero: Fusing Attentions for Zero-shot Text-based Video Editing

chenyangqiqi/fatezero ICCV 2023

We also obtain better zero-shot shape-aware editing when building on a text-to-video model.

CAP-VSTNet: Content Affinity Preserved Versatile Style Transfer

linfengWen98/CAP-VSTNet CVPR 2023

The loss of content affinity, including feature and pixel affinity, is a main cause of artifacts in photorealistic and video style transfer.
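
A generic way to measure such affinity is a pairwise cosine-similarity matrix over spatial locations, computed on features (feature affinity) or on pixels (pixel affinity), and to penalize how much it changes after stylization. The sketch below shows that generic affinity-preservation loss, not CAP-VSTNet's actual formulation; the names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def affinity(x):
    """Pairwise cosine-similarity (affinity) matrix over spatial locations
    of a (N, C, H, W) feature map or image. Note the result is HW x HW,
    so inputs are typically downsampled in practice."""
    flat = F.normalize(x.flatten(2), dim=1)       # (N, C, HW)
    return torch.bmm(flat.transpose(1, 2), flat)  # (N, HW, HW)

def affinity_preservation_loss(content, stylized):
    """Penalize changes in pairwise affinity between the content input
    and the stylized output."""
    return (affinity(content) - affinity(stylized)).abs().mean()
```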