Video Frame Interpolation
94 papers with code • 20 benchmarks • 12 datasets
The goal of Video Frame Interpolation (VFI) is to synthesize one or more intermediate frames between two adjacent frames of the original video. Video Frame Interpolation can be applied to generate slow-motion video, increase the video frame rate, and recover lost frames in video streaming.
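To make the input/output contract concrete, here is a minimal sketch of the naive baseline for this task: linear blending of the two adjacent frames. This is not the method of any paper listed below (real VFI models estimate motion and warp pixels); it only illustrates that two frames go in and an in-between frame comes out.

```python
import numpy as np

def blend_interpolate(frame0, frame1, t=0.5):
    """Naively synthesize the frame at time t in (0, 1) by linear blending.

    This ignores motion entirely, so moving objects ghost; learned VFI
    methods instead predict optical flow or interpolation kernels.
    """
    out = (1.0 - t) * frame0.astype(np.float32) + t * frame1.astype(np.float32)
    return out.astype(frame0.dtype)

# Two tiny 2x2 grayscale "frames"
f0 = np.zeros((2, 2), dtype=np.uint8)
f1 = np.full((2, 2), 100, dtype=np.uint8)
mid = blend_interpolate(f0, f1, t=0.5)
print(mid[0, 0])  # -> 50, the midpoint intensity
```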
Latest papers
AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation
It is based on two essential designs.
BiFormer: Learning Bilateral Motion Estimation via Bilateral Transformer for 4K Video Frame Interpolation
First, in global motion estimation, we predict symmetric bilateral motion fields at a coarse scale.
Implicit View-Time Interpolation of Stereo Videos using Multi-Plane Disparities and Non-Uniform Coordinates
In this paper, we propose an approach for view-time interpolation of stereo videos.
Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time
Moreover, on the seemingly implausible ×16 interpolation task, our method outperforms existing methods by more than 1.5 dB in terms of PSNR.
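PSNR, the metric quoted above, is the standard fidelity measure for interpolated frames. A sketch of its textbook definition (independent of this paper):

```python
import numpy as np

def psnr(pred, gt, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means closer to ground truth."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

gt = np.zeros((4, 4), dtype=np.uint8)
pred = np.full((4, 4), 16, dtype=np.uint8)  # uniform error of 16 gray levels
print(psnr(pred, gt))  # roughly 24 dB
```

A 1.5 dB gain corresponds to roughly a 30% reduction in mean squared error, which is why small dB differences are considered significant on VFI benchmarks.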
LDMVFI: Video Frame Interpolation with Latent Diffusion Models
Existing works on video frame interpolation (VFI) mostly employ deep neural networks that are trained by minimizing the L1, L2, or deep feature space distance (e.g. VGG loss) between their outputs and ground-truth frames.
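The loss structure described in that snippet can be sketched as follows. Note the feature extractor here is a hypothetical stand-in (simple image gradients); the papers cited use a pretrained VGG network for the feature-space term, and the 0.1 weight is an illustrative choice, not taken from any listed paper.

```python
import numpy as np

def l1_loss(pred, gt):
    # Mean absolute pixel error (the "L1" term)
    return np.mean(np.abs(pred - gt))

def feature_loss(pred, gt, extract):
    # Distance in a feature space; VFI papers typically use features from
    # a pretrained VGG network here (the "VGG loss").
    return np.mean(np.abs(extract(pred) - extract(gt)))

def grad_features(img):
    # Stand-in extractor for illustration: horizontal finite differences
    return np.diff(img, axis=1)

pred = np.array([[0.0, 0.5], [0.5, 1.0]])
gt   = np.array([[0.0, 1.0], [0.0, 1.0]])
total = l1_loss(pred, gt) + 0.1 * feature_loss(pred, gt, grad_features)
print(round(total, 4))  # -> 0.3
```

LDMVFI's point is that minimizing such distances favors blurry averages of plausible frames, which motivates replacing them with a generative (latent diffusion) formulation.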
MOSO: Decomposing MOtion, Scene and Object for Video Prediction
Experimental results demonstrate that our method achieves new state-of-the-art performance on five challenging benchmarks for video prediction and unconditional video generation: BAIR, RoboNet, KTH, KITTI and UCF101.
MAEVI: Motion Aware Event-Based Video Frame Interpolation
Utilization of event-based cameras is expected to improve the visual quality of video frame interpolation solutions.
Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation
In this paper, we propose a novel module to explicitly extract motion and appearance information via a unifying operation.
ST-MFNet Mini: Knowledge Distillation-Driven Frame Interpolation
Currently, one of the major challenges in deep learning-based video frame interpolation (VFI) is the large model sizes and high computational complexity associated with many high performance VFI approaches.
Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields
Video Frame Interpolation (VFI) aims to generate intermediate video frames between consecutive input frames.