Additionally, we propose a first set of metrics to quantitatively evaluate the accuracy as well as the perceptual quality of the temporal evolution.
We then analyze the performance and failure cases of several state-of-the-art tracking methods in comparison to our Tracktor.
Ranked #1 on Online Multi-Object Tracking on MOT17
In this paper, we show that proper frame alignment and motion compensation are crucial for achieving high-quality results.
Ranked #7 on Video Super-Resolution on Vid4 - 4x upscaling
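Frame alignment of this kind is typically done by backward-warping a neighbouring frame onto the reference using a dense motion field. A minimal NumPy sketch, assuming the flow is already given (in practice a motion-estimation module predicts it); `warp_frame` is a hypothetical helper name:

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp `frame` by a dense flow field (H, W, 2) using
    bilinear sampling - a common way to align a neighbouring frame
    to the reference before fusing them."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # Source coordinates: grid position displaced by the flow,
    # clamped to the image border.
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    # Bilinear blend of the four neighbouring source pixels.
    top = frame[y0, x0] * (1 - wx) + frame[y0, x1] * wx
    bot = frame[y1, x0] * (1 - wx) + frame[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# A constant flow of one pixel to the right maps each output pixel
# (y, x) back to source pixel (y, x + 1).
frame = np.arange(16, dtype=np.float32).reshape(4, 4)
flow = np.zeros((4, 4, 2), dtype=np.float32)
flow[..., 0] = 1.0
aligned = warp_frame(frame, flow)
```

Once the neighbouring frames are motion-compensated into the reference coordinate frame, the network can fuse them without having to model the motion itself.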
In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture.
Ranked #1 on Video Denoising on Set8 sigma10
We propose a novel end-to-end deep neural network that generates dynamic upsampling filters and a residual image, both computed from the local spatio-temporal neighborhood of each pixel, to avoid explicit motion compensation.
Ranked #2 on Video Super-Resolution on Vid4 - 4x upscaling
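The mechanism can be sketched as follows: each low-resolution pixel gets its own predicted filter per sub-pixel position, and the filtered output is summed with a residual image. A simplified NumPy sketch assuming scale factor 2 and 3x3 filters; in the paper both the filters and the residual are predicted by the network from the spatio-temporal neighbourhood, here they are plain inputs, and `dynamic_upsample` is a hypothetical name:

```python
import numpy as np

def dynamic_upsample(lr, filters, residual, r=2):
    """Upsample an LR frame with per-pixel dynamic filters.
    `filters` has shape (H, W, r, r, 3, 3): one 3x3 filter per LR
    pixel and per sub-pixel position. `residual` is (H*r, W*r)."""
    h, w = lr.shape
    padded = np.pad(lr, 1, mode='edge')
    hr = np.zeros((h * r, w * r), dtype=np.float32)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            for u in range(r):
                for v in range(r):
                    # Each HR pixel is its own filtered view of the
                    # local LR neighbourhood.
                    hr[i * r + u, j * r + v] = np.sum(
                        patch * filters[i, j, u, v])
    return hr + residual

# With uniform averaging filters and a zero residual, every HR pixel
# becomes the local mean of the LR neighbourhood.
lr = np.ones((4, 4), dtype=np.float32)
filters = np.full((4, 4, 2, 2, 3, 3), 1.0 / 9.0, dtype=np.float32)
residual = np.zeros((8, 8), dtype=np.float32)
hr = dynamic_upsample(lr, filters, residual)
```

Because the filters adapt to local motion, explicit flow estimation and warping can be skipped entirely.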
In this paper, we propose a deformable 3D convolution network (D3Dnet) to incorporate information from both the spatial and temporal dimensions for video SR.
The key challenge for video SR lies in the effective exploitation of temporal dependency between consecutive frames.
Ranked #8 on Video Super-Resolution on Vid4 - 4x upscaling
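The core idea of a deformable 3D convolution is that each kernel tap samples the input video volume at its regular grid position plus a learned offset, letting the receptive field follow the motion. A greatly simplified single-output sketch with nearest-neighbour sampling (D3Dnet uses differentiable trilinear interpolation); `deformable_conv3d_point` is a hypothetical name:

```python
import numpy as np

def deformable_conv3d_point(vol, weights, offsets, center):
    """One output sample of a simplified deformable 3D convolution.
    `vol` is a (T, H, W) volume; `weights` (27,) and `offsets` (27, 3)
    cover the taps of a 3x3x3 kernel."""
    t0, y0, x0 = center
    out, k = 0.0, 0
    for dt in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                # Regular grid position plus the learned offset,
                # clamped to the volume and rounded to a voxel.
                t = int(np.clip(round(t0 + dt + offsets[k, 0]),
                                0, vol.shape[0] - 1))
                y = int(np.clip(round(y0 + dy + offsets[k, 1]),
                                0, vol.shape[1] - 1))
                x = int(np.clip(round(x0 + dx + offsets[k, 2]),
                                0, vol.shape[2] - 1))
                out += weights[k] * vol[t, y, x]
                k += 1
    return out

# With zero offsets this reduces to an ordinary 3D convolution tap:
# averaging weights over a 3x3x3 volume yield the volume mean.
vol = np.arange(27, dtype=np.float32).reshape(3, 3, 3)
weights = np.full(27, 1.0 / 27.0, dtype=np.float32)
offsets = np.zeros((27, 3), dtype=np.float32)
val = deformable_conv3d_point(vol, weights, offsets, (1, 1, 1))
```

In the full network the offsets are themselves predicted by a convolutional branch, so the temporal sampling pattern adapts per position rather than being a fixed cube.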
Extensive experiments demonstrate that HR optical flows provide more accurate correspondences than their LR counterparts and improve both accuracy and consistency performance.
Ranked #9 on Video Super-Resolution on Vid4 - 4x upscaling
How to effectively fuse temporal information from consecutive frames plays an important role in video super-resolution (SR), yet most previous fusion strategies either fail to fully utilize temporal information or cost too much time.
In this work, we propose a motion estimation and motion compensation driven neural network for video frame interpolation.
Ranked #5 on Video Frame Interpolation on Middlebury
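The motion estimation / motion compensation (MEMC) recipe for interpolation can be sketched in a few lines: estimate the flow between the two input frames, warp each frame halfway along the (assumed linear) motion toward the middle timestamp, and blend. A minimal NumPy sketch with nearest-neighbour warping, taking the flow as given (the paper's network predicts it); `interpolate_midframe` is a hypothetical name:

```python
import numpy as np

def interpolate_midframe(f0, f1, flow01):
    """Motion-compensated mid-frame interpolation. `flow01` (H, W, 2)
    is the flow from f0 to f1; each input is backward-warped halfway
    toward t = 0.5 and the two warps are averaged."""
    h, w = f0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)

    def sample(img, sy, sx):
        # Nearest-neighbour sampling with border clamping.
        sy = np.clip(np.round(sy), 0, h - 1).astype(int)
        sx = np.clip(np.round(sx), 0, w - 1).astype(int)
        return img[sy, sx]

    # Content at mid-frame position p came from f0 at p - 0.5*flow
    # and from f1 at p + 0.5*flow (linear-motion assumption).
    a = sample(f0, ys - 0.5 * flow01[..., 1], xs - 0.5 * flow01[..., 0])
    b = sample(f1, ys + 0.5 * flow01[..., 1], xs + 0.5 * flow01[..., 0])
    return 0.5 * (a + b)

# Uniform motion of 2 px to the right: the mid-frame should look like
# f0 shifted right by 1 px (away from the wrapped border).
f0 = np.tile(np.arange(8, dtype=np.float32), (4, 1))
flow = np.zeros((4, 8, 2), dtype=np.float32)
flow[..., 0] = 2.0
f1 = np.roll(f0, 2, axis=1)  # f0 shifted right (wraps at the border)
mid = interpolate_midframe(f0, f1, flow)
```

Learned MEMC methods replace the fixed linear-motion assumption and the hand-set blend weights with networks trained end to end.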