Video Enhancement
38 papers with code • 1 benchmark • 4 datasets
Latest papers
End-To-End Underwater Video Enhancement: Dataset and Model
To fill this gap, we construct the Synthetic Underwater Video Enhancement (SUVE) dataset, comprising 840 diverse underwater-style videos paired with ground-truth reference videos.
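The snippet does not describe SUVE's actual synthesis pipeline. As a loose illustration only, a common way to impose an underwater-style appearance on a clean RGB frame is wavelength-dependent attenuation (red light is absorbed fastest underwater) plus a blue-green veiling component; the function and parameter names below are hypothetical:

```python
import numpy as np

def underwater_cast(frame, attenuation=(0.45, 0.85, 0.95), haze=0.15,
                    water_color=(0.0, 0.35, 0.45)):
    """Apply a crude underwater-style degradation to an RGB frame in [0, 1].

    attenuation: per-channel (R, G, B) transmission; red is absorbed most.
    haze: fraction of ambient water color blended in (crude backscatter).
    Illustrative only -- not the SUVE dataset's actual synthesis model.
    """
    frame = np.asarray(frame, dtype=np.float64)
    t = np.array(attenuation)          # direct transmission per channel
    ambient = np.array(water_color)    # blue-green veiling light
    return np.clip(frame * t * (1.0 - haze) + ambient * haze, 0.0, 1.0)
```

Applied to a white frame, the red channel comes out darkest and the blue channel brightest, mimicking the typical underwater color cast.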
FastLLVE: Real-Time Low-Light Video Enhancement with Intensity-Aware Lookup Table
Experimental results on benchmark datasets demonstrate that our method achieves state-of-the-art (SOTA) performance in terms of both image quality and inter-frame brightness consistency.
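The abstract does not define its brightness-consistency measure; a minimal sketch of one plausible way to quantify it (the function name and the specific statistic are assumptions, not the paper's metric) is the mean absolute change in average luminance between consecutive frames:

```python
import numpy as np

def interframe_brightness_consistency(frames):
    """Mean absolute change in average luminance between consecutive frames.

    frames: sequence of HxW (grayscale) or HxWx3 (RGB) arrays.
    Lower values indicate steadier brightness across the video;
    a perfectly steady clip scores 0. Illustrative metric only.
    """
    means = [float(np.mean(f)) for f in frames]
    diffs = [abs(b - a) for a, b in zip(means, means[1:])]
    return sum(diffs) / len(diffs)
```

Low-light enhancers applied frame-by-frame often produce flicker, which this kind of statistic would expose as a large value.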
NNVISR: Bring Neural Network Video Interpolation and Super Resolution into Video Processing Framework
We present NNVISR - an open-source filter plugin for the VapourSynth video processing framework that facilitates applying neural networks to various video enhancement tasks, including denoising, super-resolution, interpolation, and spatio-temporal super-resolution.
Neural Image Re-Exposure
In this work, we aim to re-expose the captured photo in post-processing to provide a more flexible way of addressing those issues within a unified framework.
Light-VQA: A Multi-Dimensional Quality Assessment Model for Low-Light Video Enhancement
To this end, we first construct a Low-Light Video Enhancement Quality Assessment (LLVE-QA) dataset in which 254 original low-light videos are collected and then enhanced by leveraging 8 LLVE algorithms to obtain 2,060 videos in total.
Implicit View-Time Interpolation of Stereo Videos using Multi-Plane Disparities and Non-Uniform Coordinates
In this paper, we propose an approach for view-time interpolation of stereo videos.
VDPVE: VQA Dataset for Perceptual Video Enhancement
Few researchers have specifically proposed a video quality assessment method for video enhancement, and no comprehensive video quality assessment dataset for this task is publicly available.
Compression-Aware Video Super-Resolution
Videos stored on mobile devices or delivered over the Internet are usually compressed with various unknown compression parameters, yet most video super-resolution (VSR) methods assume ideal inputs, resulting in a large performance gap between experimental settings and real-world applications.
Video Object Segmentation-aware Video Frame Interpolation
In this paper, we propose a video object segmentation (VOS)-aware training framework called VOS-VFI that allows VFI models to interpolate frames with more precise object boundaries.
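VOS-VFI itself is a learned training framework; as a point of contrast only, the simplest non-learned interpolation baseline blends the two neighboring frames per pixel, which ghosts and blurs moving object boundaries, the exact artifact that segmentation-aware training targets. A minimal sketch (not the paper's method):

```python
import numpy as np

def blend_interpolate(frame0, frame1, t=0.5):
    """Naive frame interpolation: per-pixel linear blend at time t in [0, 1].

    Ignores motion entirely, so any moving object appears twice at reduced
    opacity (ghosting) -- the boundary artifact that motion- and
    segmentation-aware VFI models are designed to avoid.
    """
    f0 = np.asarray(frame0, dtype=np.float64)
    f1 = np.asarray(frame1, dtype=np.float64)
    return (1.0 - t) * f0 + t * f1
```

For a static scene this baseline is exact; for moving content it degrades precisely where object boundaries are, which motivates using segmentation cues during VFI training.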
Dancing in the Dark: A Benchmark towards General Low-light Video Enhancement
To address this issue, we design a camera system and collect a high-quality low-light video dataset with multiple exposures and cameras.