Video Super-Resolution
132 papers with code • 15 benchmarks • 13 datasets
Video Super-Resolution is the computer vision task of generating high-resolution video frames from a low-resolution input sequence, improving the overall quality of the video.
(Image credit: Detail-revealing Deep Video Super-Resolution)
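The simplest baseline for this task upsamples each frame independently, ignoring temporal information; learned VSR methods are typically compared against such per-frame upscaling. A minimal sketch with NumPy, using nearest-neighbor interpolation (the function names `upscale_frame_nearest` and `upscale_video` are illustrative, not from any listed paper):

```python
import numpy as np

def upscale_frame_nearest(frame: np.ndarray, scale: int) -> np.ndarray:
    """Nearest-neighbor upsampling of a single H x W x C frame."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

def upscale_video(frames: np.ndarray, scale: int) -> np.ndarray:
    """Upscale a T x H x W x C video clip frame by frame (no temporal fusion)."""
    return np.stack([upscale_frame_nearest(f, scale) for f in frames])

# 4 frames of 8x8 RGB noise, upscaled 4x to 32x32
lr = np.random.rand(4, 8, 8, 3).astype(np.float32)
hr = upscale_video(lr, 4)
assert hr.shape == (4, 32, 32, 3)
```

In practice bicubic interpolation is the standard per-frame baseline; nearest-neighbor is used here only to keep the sketch dependency-free.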
Libraries
Use these libraries to find Video Super-Resolution models and implementations.
Latest papers with no code
Space-Time Video Super-resolution with Neural Operator
This paper addresses the task of space-time video super-resolution (ST-VSR).
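Space-time VSR increases resolution along both axes: spatially (more pixels per frame) and temporally (more frames per second). A naive non-learned reference, assuming linear blending for intermediate frames and nearest-neighbor spatial upsampling (the function name `st_upsample` is illustrative, not the paper's method):

```python
import numpy as np

def st_upsample(frames: np.ndarray, s_scale: int = 2, t_scale: int = 2) -> np.ndarray:
    """Naive space-time upsampling of a T x H x W x C clip.

    Temporal axis: linearly blend consecutive frames to synthesize
    intermediates. Spatial axes: nearest-neighbor upscaling.
    """
    T = frames.shape[0]
    out = []
    for t in range(T - 1):
        for k in range(t_scale):
            a = k / t_scale
            out.append((1 - a) * frames[t] + a * frames[t + 1])
    out.append(frames[-1])  # keep the final frame
    vid = np.stack(out)
    return vid.repeat(s_scale, axis=1).repeat(s_scale, axis=2)

lr = np.zeros((3, 4, 4, 3), dtype=np.float32)
hr = st_upsample(lr, s_scale=2, t_scale=2)
# (3 - 1) * 2 + 1 = 5 frames, each 8x8
assert hr.shape == (5, 8, 8, 3)
```

Linear blending produces ghosting on moving content, which is exactly the failure mode that learned ST-VSR models aim to avoid.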
Translation-based Video-to-Video Synthesis
Translation-based Video Synthesis (TVS) has emerged as a vital research area in computer vision, aiming to facilitate the transformation of videos between distinct domains while preserving both temporal continuity and underlying content features.
Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution
Technically, SATeCo freezes all parameters of the pre-trained UNet and VAE, optimizing only two deliberately designed modules, spatial feature adaptation (SFA) and temporal feature alignment (TFA), in the decoders of the UNet and VAE.
Time-series Initialization and Conditioning for Video-agnostic Stabilization of Video Super-Resolution using Recurrent Networks
The proposed training strategy stabilizes VSR by training a VSR network with RNN hidden states that are varied according to the properties of the input video.
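Recurrent VSR networks of the kind described above process a video sequentially, carrying a hidden state that accumulates temporal context from earlier frames. A generic sketch of that loop, assuming a toy stand-in cell (`toy_step` and `recurrent_vsr` are hypothetical names, not the paper's architecture):

```python
import numpy as np

def toy_step(frame: np.ndarray, h: np.ndarray, scale: int = 2, alpha: float = 0.5):
    """Stand-in for a learned recurrent cell: fuse the current frame into
    the hidden state, then upscale the state (nearest-neighbor)."""
    h = alpha * frame + (1.0 - alpha) * h
    sr = h.repeat(scale, axis=0).repeat(scale, axis=1)
    return sr, h

def recurrent_vsr(frames: np.ndarray, step, h0: np.ndarray) -> np.ndarray:
    """Run a recurrent VSR loop: the hidden state propagates temporal
    context from earlier frames into each super-resolved output."""
    h, out = h0, []
    for f in frames:
        sr, h = step(f, h)
        out.append(sr)
    return np.stack(out)

lr = np.random.rand(6, 4, 4, 3).astype(np.float32)
hr = recurrent_vsr(lr, toy_step, np.zeros_like(lr[0]))
assert hr.shape == (6, 8, 8, 3)
```

The initialization `h0` is exactly the knob the paper targets: a poorly chosen initial hidden state can destabilize the early frames of the output, which motivates video-dependent initialization and conditioning.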
Inflation with Diffusion: Efficient Temporal Adaptation for Text-to-Video Super-Resolution
We propose an efficient diffusion-based text-to-video super-resolution (SR) tuning approach that leverages the readily learned capacity of a pixel-level image diffusion model to capture spatial information for video generation.
FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring
In this paper, we propose novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA), which together constitute our VSRDB framework, denoted FMA-Net.
A Survey on Super Resolution for video Enhancement Using GAN
This compilation of research paper highlights provides a comprehensive overview of recent developments in image and video super-resolution using deep learning methods such as Generative Adversarial Networks.
Photorealistic Video Generation with Diffusion Models
We present W.A.L.T, a transformer-based approach for photorealistic video generation via diffusion modeling.
Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution
Text-based diffusion models have exhibited remarkable success in generation and editing, showing great promise for enhancing visual content with their generative prior.
FLAIR: A Conditional Diffusion Framework with Applications to Face Video Restoration
Face video restoration (FVR) is a challenging but important problem in which one seeks to recover a perceptually realistic face video from a low-quality input.