Video Reconstruction
35 papers with code • 9 benchmarks • 8 datasets
Source: Deep-SloMo
Most implemented papers
High Speed and High Dynamic Range Video with an Event Camera
In this work we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors.
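To illustrate the problem setting (not the learned approach this paper proposes), a classic hand-crafted baseline simply accumulates polarity-weighted events into a log-intensity image. The function name, event layout (x, y, t, p), and per-event `contrast` threshold below are assumptions for the sketch:

```python
import numpy as np

def integrate_events(events, height, width, contrast=0.2):
    """Naive baseline: accumulate polarity-weighted events into a
    log-intensity image. `events` is an (N, 4) array of (x, y, t, p)
    rows with polarity p in {-1, +1}; `contrast` is an assumed
    per-event threshold of the sensor."""
    log_image = np.zeros((height, width), dtype=np.float32)
    for x, y, _, p in events:
        log_image[int(y), int(x)] += contrast * p
    return np.exp(log_image)  # map log-intensity back to intensity
```

Such direct integration drifts with noise and unknown initial intensity, which is part of why learning the reconstruction from data is attractive.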
Deep Slow Motion Video Reconstruction with Hybrid Imaging System
In this paper, we address this problem using two video streams as input: an auxiliary video with a high frame rate and low spatial resolution, which provides temporal information, alongside the standard main video with a low frame rate and high spatial resolution.
Reducing the Sim-to-Real Gap for Event Cameras
We present strategies for improving training data for event-based CNNs that yield a 20-40% boost in the performance of existing state-of-the-art (SOTA) video reconstruction networks retrained with our method, and up to a 15% boost for optic flow networks.
Video Reconstruction by Spatio-Temporal Fusion of Blurred-Coded Image Pair
The input to our algorithm is a fully-exposed and coded image pair.
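One common forward model for such coded-exposure systems (an assumption for illustration, not necessarily this paper's exact capture setup) averages video frames under a per-pixel binary shutter code, while the fully-exposed image averages all frames:

```python
import numpy as np

def coded_exposure(video, code):
    """Simulate a pixel-wise coded-exposure image from a video clip.
    video: (T, H, W) frame stack; code: binary (T, H, W) shutter
    pattern. At each pixel we average the frames the code selects
    (clamping the denominator to avoid division by zero)."""
    return (video * code).sum(axis=0) / np.maximum(code.sum(axis=0), 1)

def fully_exposed(video):
    """The companion blurred image: the mean over all frames."""
    return video.mean(axis=0)
```

The reconstruction task is then to invert this pair of measurements back into the full frame stack.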
An Asynchronous Kalman Filter for Hybrid Event Cameras
Event cameras excel at capturing fast changes; conversely, conventional image sensors measure the absolute intensity of slowly changing scenes effectively but do poorly on high-dynamic-range or quickly changing scenes.
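The complementary-fusion idea can be illustrated with a single scalar Kalman step per pixel: frames act as occasional low-noise measurements, events as frequent high-noise ones. This is a minimal sketch, not the paper's asynchronous formulation; the function name and noise parameters are invented for illustration:

```python
def kalman_update(x_est, p_est, z, r, q=0.01):
    """One scalar Kalman step: predict under a random-walk model with
    process-noise variance q, then correct with a measurement z whose
    noise variance is r. Returns the updated estimate and variance."""
    p_pred = p_est + q            # predict: uncertainty grows over time
    k = p_pred / (p_pred + r)     # Kalman gain: trust measurement vs. prior
    x_new = x_est + k * (z - x_est)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

Feeding frame measurements with small `r` and event-derived measurements with larger `r` into the same recursion is one simple way to blend the two sensing modalities.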
SeLFVi: Self-Supervised Light-Field Video Reconstruction From Stereo Video
We propose a self-supervised learning-based algorithm for light-field (LF) video reconstruction from stereo video.
Event-Based Video Reconstruction Using Transformer
Event cameras, which output events by detecting spatio-temporal brightness changes, bring a novel paradigm to image sensors with high dynamic range and low latency.
HDR Video Reconstruction: A Coarse-to-fine Network and A Real-world Benchmark Dataset
In the second stage, we conduct more sophisticated alignment and temporal fusion in the feature space of the coarse HDR video to produce a better reconstruction.
Event-based Video Reconstruction via Potential-assisted Spiking Neural Network
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN), which utilizes Leaky-Integrate-and-Fire (LIF) neurons and Membrane Potential (MP) neurons.
Locality-Aware Inter- and Intra-Video Reconstruction for Self-Supervised Correspondence Learning
Our target is to learn visual correspondence from unlabeled videos.