Video Super-Resolution
132 papers with code • 15 benchmarks • 13 datasets
Video Super-Resolution is a computer vision task that aims to reconstruct high-resolution video frames from low-resolution input, improving the overall quality of the video sequence.
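The simplest baseline upsamples each frame independently (e.g., bilinearly), ignoring the temporal information that learned VSR methods exploit across neighboring frames. A minimal numpy sketch of that per-frame baseline (the function names are illustrative, not from any listed paper):

```python
import numpy as np

def upsample_frame(frame: np.ndarray, scale: int) -> np.ndarray:
    """Bilinearly upsample a single H x W (grayscale) frame by `scale`."""
    h, w = frame.shape
    out_h, out_w = h * scale, w * scale
    # Map each output pixel back to fractional source coordinates.
    ys = (np.arange(out_h) + 0.5) / scale - 0.5
    xs = (np.arange(out_w) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = (ys - y0).clip(0, 1)[:, None]
    wx = (xs - x0).clip(0, 1)[None, :]
    f = frame.astype(float)
    top = f[y0][:, x0] * (1 - wx) + f[y0][:, x1] * wx
    bot = f[y1][:, x0] * (1 - wx) + f[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def upsample_video(frames: np.ndarray, scale: int = 4) -> np.ndarray:
    """Upsample a T x H x W clip frame by frame (no temporal modelling)."""
    return np.stack([upsample_frame(f, scale) for f in frames])
```

Learned VSR networks improve on this baseline precisely by fusing detail from adjacent frames rather than treating each frame in isolation.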
(Image credit: Detail-revealing Deep Video Super-Resolution)
Libraries
Use these libraries to find Video Super-Resolution models and implementations.
Most implemented papers
Recurrent Back-Projection Network for Video Super-Resolution
We propose a novel architecture for the problem of video super-resolution.
BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond
Video super-resolution (VSR) approaches tend to have more components than the image counterparts as they need to exploit the additional temporal dimension.
DeFMO: Deblurring and Shape Recovery of Fast Moving Objects
We propose a method that, given a single image with its estimated background, outputs the object's appearance and position in a series of sub-frames as if captured by a high-speed camera (i.e., temporal super-resolution).
Video Enhancement with Task-Oriented Flow
Many video enhancement algorithms rely on optical flow to register frames in a video sequence.
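Flow-based registration means estimating a dense motion field between a neighbor frame and the reference frame, then warping the neighbor onto the reference grid so the two can be fused pixel-wise. A minimal sketch of the warping step, assuming the flow field has already been computed (flow estimation itself, e.g. by a learned or classical method, is omitted; names are illustrative):

```python
import numpy as np

def warp_to_reference(neighbor: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp `neighbor` (H x W) onto the reference grid using a
    dense flow field `flow` (H x W x 2, storing (dy, dx) per pixel).

    Each reference pixel (y, x) samples neighbor[y + dy, x + dx], with
    nearest-neighbour sampling for brevity; real systems interpolate."""
    h, w = neighbor.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 1]).astype(int), 0, w - 1)
    return neighbor[src_y, src_x]
```

Task-oriented flow (as in this paper) learns the motion field jointly with the enhancement objective instead of optimizing for motion accuracy alone.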
Intra-frame Object Tracking by Deblatting
We propose a novel approach called Tracking by Deblatting based on the observation that motion blur is directly related to the intra-frame trajectory of an object.
Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
Rather than synthesizing missing LR video frames as VFI networks do, we first temporally interpolate the LR frame features of the missing frames, capturing local temporal contexts with the proposed feature temporal interpolation network.
Designing a Practical Degradation Model for Deep Blind Image Super-Resolution
It is widely acknowledged that single image super-resolution (SISR) methods do not perform well when the assumed degradation model deviates from the true degradations in real images.
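The classical degradation model referenced here generates a low-resolution image as y = (x * k)↓s + n: blur the HR image with a kernel k, subsample by stride s, and add noise. A simplified numpy sketch with a fixed 3x3 box-blur kernel (the paper's point is precisely that practical pipelines should randomize kernels, scales, and noise rather than fix them like this):

```python
import numpy as np

def degrade(hr: np.ndarray, scale: int = 2, noise_sigma: float = 2.0,
            rng=None) -> np.ndarray:
    """Classical degradation y = (x * k) downsampled by `scale` + noise,
    using a fixed 3x3 box-blur kernel k for brevity."""
    rng = rng or np.random.default_rng(0)
    # 3x3 box blur via averaging the nine shifted views of a padded image.
    p = np.pad(hr.astype(float), 1, mode="edge")
    blurred = sum(p[dy:dy + hr.shape[0], dx:dx + hr.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    lr = blurred[::scale, ::scale]                       # subsample by stride s
    return lr + rng.normal(0.0, noise_sigma, lr.shape)   # additive noise n
```

Training a blind SR network on synthetically degraded pairs like these is what makes the choice of degradation model so consequential at test time.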
BasicVSR++: Improving Video Super-Resolution with Enhanced Propagation and Alignment
We show that by empowering the recurrent framework with the enhanced propagation and alignment, one can exploit spatiotemporal information across misaligned video frames more effectively.
Recurrent Video Restoration Transformer with Guided Deformable Attention
Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature.
Learning for Video Super-Resolution through HR Optical Flow Estimation
Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and temporal consistency.