Video Super-Resolution

132 papers with code • 15 benchmarks • 13 datasets

Video Super-Resolution is a computer vision task that aims to reconstruct high-resolution video frames from a low-resolution input sequence, improving the overall quality and level of detail of the video. Unlike single-image super-resolution, it can exploit temporal information from neighbouring frames in addition to the spatial content of each frame.
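
For orientation, the simplest possible baseline is to upscale every frame independently with bicubic interpolation; learned VSR models are typically compared against exactly this kind of per-frame interpolation. A minimal sketch, assuming OpenCV and an MP4 output codec:

    # Naive, non-learned VSR baseline: upscale each low-resolution frame
    # independently with bicubic interpolation. Learned methods improve on
    # this by also exploiting temporal information across frames.
    import cv2

    def upscale_video(in_path: str, out_path: str, scale: int = 4) -> None:
        cap = cv2.VideoCapture(in_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)) * scale
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) * scale
        writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            writer.write(cv2.resize(frame, (w, h), interpolation=cv2.INTER_CUBIC))
        cap.release()
        writer.release()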

(Image credit: Detail-revealing Deep Video Super-Resolution)

Latest papers with no code

Space-Time Video Super-resolution with Neural Operator

no code yet • 9 Apr 2024

This paper addresses the task of space-time video super-resolution (ST-VSR).

Translation-based Video-to-Video Synthesis

no code yet • 3 Apr 2024

Translation-based Video Synthesis (TVS) has emerged as a vital research area in computer vision, aiming to facilitate the transformation of videos between distinct domains while preserving both temporal continuity and underlying content features.

Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution

no code yet • 25 Mar 2024

Technically, SATeCo freezes all parameters of the pre-trained UNet and VAE, and optimizes only two deliberately designed modules, spatial feature adaptation (SFA) and temporal feature alignment (TFA), in the decoders of the UNet and VAE.
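
The excerpt describes a parameter-efficient tuning recipe: freeze the pre-trained diffusion backbone and train only small, newly inserted modules. The sketch below (PyTorch) is not the SATeCo implementation; the Adapter class is a hypothetical stand-in for the SFA/TFA modules and only illustrates freezing the backbone and handing just the adapter parameters to the optimizer.

    # Generic illustration (not the SATeCo code) of freezing a pre-trained
    # backbone and training only small, newly inserted adapter modules.
    import torch
    import torch.nn as nn

    class Adapter(nn.Module):  # hypothetical stand-in for the SFA/TFA modules
        def __init__(self, channels: int):
            super().__init__()
            self.proj = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.proj(x)  # residual adaptation of frozen features

    def freeze_and_collect(backbone: nn.Module, adapters: nn.ModuleList):
        # Freeze every pre-trained parameter (e.g. UNet / VAE weights) ...
        for p in backbone.parameters():
            p.requires_grad_(False)
        # ... and hand only the adapter parameters to the optimizer.
        return torch.optim.AdamW(adapters.parameters(), lr=1e-4)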

Time-series Initialization and Conditioning for Video-agnostic Stabilization of Video Super-Resolution using Recurrent Networks

no code yet • 23 Mar 2024

The proposed training strategy stabilizes VSR by training the network with RNN hidden states that are varied according to the properties of each video.
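
The excerpt only hints at the mechanism, but the general idea of varying recurrent hidden states during training can be sketched as follows (PyTorch; the cell interface and perturbation scale are hypothetical, not the paper's design): instead of always unrolling from a fixed zero state, the initial hidden state is perturbed per training clip so the recurrent VSR network does not come to rely on one particular initialization.

    # Sketch (not the paper's code): vary the initial RNN hidden state per
    # training clip so a recurrent VSR network does not depend on one fixed
    # initialization.
    import torch

    def rollout(cell, lr_frames: torch.Tensor, hidden_channels: int, train: bool = True):
        # lr_frames: (T, C, H, W) low-resolution clip
        t, _, h, w = lr_frames.shape
        hidden = torch.zeros(1, hidden_channels, h, w)
        if train:
            # Perturb the initial state; the scale here is an arbitrary choice.
            hidden = hidden + 0.1 * torch.randn_like(hidden)
        outputs = []
        for i in range(t):
            sr_frame, hidden = cell(lr_frames[i:i + 1], hidden)  # hypothetical cell API
            outputs.append(sr_frame)
        return torch.stack(outputs)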

Inflation with Diffusion: Efficient Temporal Adaptation for Text-to-Video Super-Resolution

no code yet • 18 Jan 2024

We propose an efficient diffusion-based text-to-video super-resolution (SR) tuning approach that leverages the readily learned capacity of a pixel-level image diffusion model to capture spatial information for video generation.

FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring

no code yet • 8 Jan 2024

In this paper, we propose novel flow-guided dynamic filtering (FGDF) and iterative feature refinement with multi-attention (FRMA), which together constitute our framework for joint video super-resolution and deblurring (VSRDB), denoted FMA-Net.
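
The excerpt does not detail FGDF itself; as background, flow-guided approaches to VSR build on warping a neighbouring frame toward the reference frame using estimated optical flow. A minimal sketch of such flow-guided warping in PyTorch (plain warping only, not the paper's dynamic filtering):

    # Flow-guided warping with grid_sample; the paper's FGDF predicts dynamic
    # filter kernels on top of such flow guidance, which is not reproduced here.
    import torch
    import torch.nn.functional as F

    def flow_warp(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # frame: (N, C, H, W); flow: (N, 2, H, W) with per-pixel (dx, dy) offsets.
        _, _, h, w = frame.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W) pixel coords
        grid = base + flow
        # grid_sample expects sampling locations normalised to [-1, 1].
        gx = 2.0 * grid[:, 0] / (w - 1) - 1.0
        gy = 2.0 * grid[:, 1] / (h - 1) - 1.0
        norm_grid = torch.stack((gx, gy), dim=-1)  # (N, H, W, 2)
        return F.grid_sample(frame, norm_grid, align_corners=True)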

A Survey on Super Resolution for video Enhancement Using GAN

no code yet • 27 Dec 2023

This compilation of research-paper highlights provides a comprehensive overview of recent developments in image and video super-resolution using deep learning methods such as Generative Adversarial Networks.

Photorealistic Video Generation with Diffusion Models

no code yet • 11 Dec 2023

We present W.A.L.T, a transformer-based approach for photorealistic video generation via diffusion modeling.

Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution

no code yet • 11 Dec 2023

Text-based diffusion models have exhibited remarkable success in generation and editing, showing great promise for enhancing visual content with their generative prior.

FLAIR: A Conditional Diffusion Framework with Applications to Face Video Restoration

no code yet • 26 Nov 2023

Face video restoration (FVR) is a challenging but important problem in which one seeks to recover perceptually realistic face videos from low-quality input.