Video Super-Resolution

132 papers with code • 15 benchmarks • 13 datasets

Video Super-Resolution is a computer vision task that reconstructs high-resolution video frames from low-resolution input. Unlike single-image super-resolution, it can exploit temporal information across neighboring frames to improve the quality and detail of the upscaled video.
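The simplest baseline ignores temporal information entirely and just resamples each frame independently; the learned methods listed below are measured against exactly this kind of interpolation. A minimal frame-wise bilinear upscaler in NumPy (function name and the (T, H, W, C) array layout are illustrative assumptions, not taken from any paper on this page):

```python
import numpy as np

def upscale_video_bilinear(video, scale):
    """Frame-wise bilinear upscaling of a video array.

    video: float array of shape (T, H, W, C); scale: integer factor.
    Returns an array of shape (T, H*scale, W*scale, C).
    """
    t, h, w, c = video.shape
    out_h, out_w = h * scale, w * scale
    # Map each output pixel centre back to continuous input coordinates.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None, None]  # (out_h, 1, 1)
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :, None]  # (1, out_w, 1)
    out = np.empty((t, out_h, out_w, c), dtype=video.dtype)
    for i, frame in enumerate(video):
        top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
        bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
        out[i] = top * (1 - wy) + bot * wy
    return out
```

Because each frame is processed in isolation, this baseline produces no new detail and can flicker across frames; the papers below recover detail by aggregating information over time.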

(Image credit: Detail-revealing Deep Video Super-Resolution)

LaVie: High-Quality Video Generation with Cascaded Latent Diffusion Models

Vchitect/LaVie 26 Sep 2023

To this end, we propose LaVie, an integrated video generation framework that operates on cascaded video latent diffusion models, comprising a base T2V model, a temporal interpolation model, and a video super-resolution model.

732 stars

A Lightweight Recurrent Grouping Attention Network for Video Super-Resolution

karlygzhu/rgan 25 Sep 2023

We design a forward feature extraction module and a backward feature extraction module to collect temporal information between consecutive frames from both directions.

4 stars
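The forward/backward design above is an instance of the bidirectional recurrent propagation common in VSR. A toy sketch of the pattern, with a simple exponential blend standing in for the paper's learned recurrent cells (all names and the mixing rule are placeholders, not the authors' architecture):

```python
import numpy as np

def bidirectional_features(frames, mix=0.5):
    """Toy bidirectional propagation over a frame sequence.

    frames: array of shape (T, F) of per-frame features.
    A running state is propagated forward and backward through time,
    then the two directions are concatenated per frame -> (T, 2F).
    """
    t, f = frames.shape
    fwd = np.zeros((t, f))
    bwd = np.zeros((t, f))
    state = np.zeros(f)
    for i in range(t):                # forward pass: past -> present
        state = mix * state + (1 - mix) * frames[i]
        fwd[i] = state
    state = np.zeros(f)
    for i in reversed(range(t)):      # backward pass: future -> present
        state = mix * state + (1 - mix) * frames[i]
        bwd[i] = state
    return np.concatenate([fwd, bwd], axis=1)
```

The point of the two passes is that every frame's feature vector ends up conditioned on both earlier and later frames, which a single forward recurrence cannot provide.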

MoTIF: Learning Motion Trajectories with Local Implicit Neural Functions for Continuous Space-Time Video Super-Resolution

sichun233746/motif ICCV 2023

We motivate the use of forward motion from the perspective of learning individual motion trajectories, as opposed to learning a mixture of motion trajectories with backward motion.

27 stars • 16 Jul 2023

Spatio-Temporal Perception-Distortion Trade-off in Learned Video SR

kuis-ai-tekalp-research-group/perceptual-vsr 4 Jul 2023

The perception-distortion trade-off is well understood for single-image super-resolution.

1 star

EgoVSR: Towards High-Quality Egocentric Video Super-Resolution

chiyich/egovsr 24 May 2023

We explicitly tackle motion blur in egocentric videos using a Dual Branch Deblur Network (DB$^2$Net) in the VSR framework.

12 stars

Enhancing Video Super-Resolution via Implicit Resampling-based Alignment

kai422/iart arXiv 2024

We show that bilinear interpolation inherently attenuates high-frequency information while an MLP-based coordinate network can approximate more frequencies.

106 stars • 29 Apr 2023
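The attenuation claim above is easy to see in one dimension: resampling a Nyquist-rate signal at a half-pixel shift via linear interpolation averages adjacent samples and wipes the highest frequency out entirely. A toy demonstration (not the paper's MLP-based coordinate network):

```python
import numpy as np

def bilinear_sample_1d(signal, xs):
    """Linearly interpolate `signal` at continuous coordinates `xs`."""
    x0 = np.clip(np.floor(xs).astype(int), 0, len(signal) - 1)
    x1 = np.clip(x0 + 1, 0, len(signal) - 1)
    w = xs - x0
    return signal[x0] * (1 - w) + signal[x1] * w

# Nyquist-frequency signal: alternating +1 / -1 samples.
sig = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
# Shift by half a pixel: every output is the mean of two opposite samples.
shifted = bilinear_sample_1d(sig, np.arange(5) + 0.5)
print(shifted)  # prints [0. 0. 0. 0. 0.]
```

An MLP that takes the continuous coordinate as input is not restricted to this fixed averaging kernel, which is the motivation the abstract gives for implicit resampling-based alignment.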

Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models

stability-ai/generative-models CVPR 2023

We first pre-train an LDM on images only; then, we turn the image generator into a video generator by introducing a temporal dimension to the latent space diffusion model and fine-tuning on encoded image sequences, i.e., videos.

22,375 stars • 18 Apr 2023

Local-Global Temporal Difference Learning for Satellite Video Super-Resolution

xy-boy/lgtd 10 Apr 2023

To explore the global dependency in the entire frame sequence, a Long-term Temporal Difference Module (L-TDM) is proposed, where the differences between forward and backward segments are incorporated and activated to guide the modulation of the temporal feature, leading to a holistic global compensation.

28 stars
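The L-TDM described above is a learned module, but the raw signal it builds on is simple: per-frame difference maps taken against a reference frame, split into a backward (past) and a forward (future) segment. A sketch under assumed (T, H, W) shapes (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def temporal_differences(frames, ref):
    """Toy forward/backward temporal-difference features.

    frames: array of shape (T, H, W); ref: index of the reference frame.
    Returns difference maps for the backward segment (frames before `ref`)
    and the forward segment (frames after `ref`), each relative to the
    reference frame. Static regions give ~0; motion shows up as residue.
    """
    backward = frames[:ref] - frames[ref]      # differences w.r.t. past
    forward = frames[ref + 1:] - frames[ref]   # differences w.r.t. future
    return backward, forward
```

In the paper these difference features are incorporated and activated to modulate the temporal features globally; the sketch only shows the inputs such a module consumes.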

Learning Spatial-Temporal Implicit Neural Representations for Event-Guided Video Super-Resolution

yunfanLu/INR-Event-VSR CVPR 2023

In addition, we collect a real-world dataset with spatially aligned events and RGB frames.

35 stars • 24 Mar 2023

Learning Data-Driven Vector-Quantized Degradation Model for Animation Video Super-Resolution

researchmm/vqd-sr ICCV 2023

Existing real-world video super-resolution (VSR) methods focus on designing a general degradation pipeline for open-domain videos while ignoring intrinsic data characteristics, which strongly limits their performance when applied to specific domains (e.g., animation videos).

29 stars • 17 Mar 2023