Finally, the two input images are warped and linearly fused to form each intermediate frame.
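The warp-and-blend scheme described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it assumes a single precomputed flow field `flow01` from frame 0 to frame 1, uses nearest-neighbour sampling instead of bilinear, and all function names are hypothetical.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a grayscale frame by a flow field (nearest-neighbour sampling for brevity)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def interpolate(frame0, frame1, flow01, t=0.5):
    """Warp both inputs toward time t, then fuse them linearly."""
    warped0 = warp(frame0, t * flow01)           # flow scaled forward to time t
    warped1 = warp(frame1, -(1.0 - t) * flow01)  # flow scaled backward from frame 1
    return (1.0 - t) * warped0 + t * warped1
```

Real systems replace the nearest-neighbour lookup with differentiable bilinear sampling and handle occlusions, but the linear fusion step is exactly this weighted average.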
The proposed model then warps the input frames, depth maps, and contextual features based on the optical flow and local interpolation kernels for synthesizing the output frame.
We develop a deep fully convolutional neural network that takes two input frames and simultaneously estimates a pair of 1D kernels for every pixel.
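The key idea behind per-pixel 1D kernel pairs is that the outer product of a vertical and a horizontal kernel yields a full 2D kernel at a fraction of the parameters. A minimal numpy sketch of the synthesis step, assuming the network has already produced the kernel tensors `kv` and `kh` (names and shapes are illustrative):

```python
import numpy as np

def separable_synthesis(padded, kv, kh):
    """Synthesize an output frame from per-pixel separable kernels.

    padded: (H + n - 1, W + n - 1) input frame, padded so every output
            pixel sees a full n x n patch.
    kv, kh: (H, W, n) vertical and horizontal 1D kernels per pixel.
    """
    H, W, n = kv.shape
    out = np.empty((H, W))
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + n, x:x + n]
            # kv @ patch @ kh applies the outer-product 2D kernel:
            # sum_ij kv[i] * patch[i, j] * kh[j]
            out[y, x] = kv[y, x] @ patch @ kh[y, x]
    return out
```

Estimating two length-n vectors per pixel instead of one n x n kernel reduces the output size from n² to 2n values per pixel, which is what makes estimating kernels for all pixels at once tractable.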
#3 best model for Video Frame Interpolation on Middlebury
Many video enhancement algorithms rely on optical flow to register frames in a video sequence.
#4 best model for Video Frame Interpolation on Vimeo90k
In addition to the cycle consistency loss, we propose two extensions: motion linearity loss and edge-guided training.
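A cycle consistency loss for interpolation can be stated compactly: interpolate two intermediate frames from adjacent input pairs, interpolate between those results, and penalize the distance back to the original middle frame. A hedged numpy sketch, where `model` stands in for any two-frame interpolator and the function name is an assumption:

```python
import numpy as np

def cycle_consistency_loss(model, i0, i1, i2):
    """Interpolate within (i0, i1) and (i1, i2), then interpolate between
    the two results; a consistent model should land back on i1."""
    mid01 = model(i0, i1)        # estimate at t = 0.5
    mid12 = model(i1, i2)        # estimate at t = 1.5
    recon = model(mid01, mid12)  # second pass should reconstruct i1
    return np.mean((recon - i1) ** 2)
```

The appeal of this loss is that it needs no ground-truth intermediate frames: the supervision signal comes entirely from the input sequence itself.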
As deep neural networks grow in popularity, increasing attention is being devoted to computer vision problems that used to be solved with more traditional approaches.
In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation.
#2 best model for Video Frame Interpolation on Middlebury
Recently, a number of data-driven frame interpolation methods based on convolutional neural networks have been proposed.
#2 best model for Video Frame Interpolation on Vimeo90k
In this paper, we first propose a joint VFI-SR (video frame interpolation and super-resolution) framework for up-scaling the spatio-temporal resolution of videos from 2K 30 fps to 4K 60 fps.
We further introduce a pseudo-supervised loss term that encourages the interpolated frames to be consistent with the predictions of a pre-trained interpolation model.
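Such a pseudo-supervised term is essentially a distillation loss: the frozen pre-trained interpolator plays teacher, and the model being trained is penalized for deviating from its prediction. A minimal numpy sketch under that reading; the weighting and function names are assumptions, not the paper's exact formulation:

```python
import numpy as np

def pseudo_supervised_loss(student, teacher, i0, i1, weight=0.1):
    """Weighted L2 distance between the trainable model's interpolation
    and the frozen pre-trained model's prediction for the same frame pair."""
    pred = student(i0, i1)         # trainable interpolator
    target = teacher(i0, i1)       # frozen teacher; treated as a constant target
    return weight * np.mean((pred - target) ** 2)
```

In training this term would be added to the primary reconstruction loss, so the teacher regularizes the student without requiring extra ground-truth frames.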