Video Compression
102 papers with code • 0 benchmarks • 4 datasets
Video Compression is the process of reducing the size of a video file by exploiting spatial redundancy within each frame and temporal redundancy across frames. The goal of a successful Video Compression system is to reduce the data volume while preserving the perceptual quality of the decompressed video.
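The temporal-redundancy idea can be sketched in a few lines: instead of storing each frame, store only the quantized difference (residual) between consecutive frames, which is mostly near zero for similar frames and therefore cheap to entropy-code. This is a toy illustration, not any specific codec; the function names and the quantization step are hypothetical.

```python
import numpy as np

def encode_residual(prev_frame, cur_frame, q_step=8):
    """Exploit temporal redundancy: keep only the quantized
    difference (residual) between consecutive frames."""
    residual = cur_frame.astype(np.int16) - prev_frame.astype(np.int16)
    return np.round(residual / q_step).astype(np.int16)

def decode_residual(prev_frame, q_residual, q_step=8):
    """Reconstruct the current frame from the previous frame
    plus the dequantized residual."""
    recon = prev_frame.astype(np.int16) + q_residual.astype(np.int16) * q_step
    return np.clip(recon, 0, 255).astype(np.uint8)

# Two similar 4x4 "frames": the residual is zero almost everywhere,
# so an entropy coder would compress it far better than the raw frame.
prev = np.full((4, 4), 100, dtype=np.uint8)
cur = prev.copy()
cur[0, 0] = 120  # only one pixel changed between frames
q = encode_residual(prev, cur)
recon = decode_residual(prev, q)
```

The quantization step trades fidelity for rate: a larger `q_step` yields more zero residuals (fewer bits) at the cost of reconstruction error, which is exactly the rate-distortion trade-off the papers below optimize.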
Source: Adversarial Video Compression Guided by Soft Edge Detection
Benchmarks
These leaderboards are used to track progress in Video Compression
Libraries
Use these libraries to find Video Compression models and implementations

Most implemented papers
Video Compression through Image Interpolation
An ever increasing amount of our digital communication, media consumption, and content creation revolves around videos.
Switchable Temporal Propagation Network
Our approach is based on a temporal propagation network (TPN), which models the transition-related affinity between a pair of frames in a purely data-driven manner.
Targeted Nonlinear Adversarial Perturbations in Images and Videos
We introduce a method for learning adversarial perturbations targeted to individual images or videos.
Deep Kalman Filtering Network for Video Compression Artifact Reduction
In this paper, we model the video artifact reduction task as a Kalman filtering procedure and restore decoded frames through a deep Kalman filtering network.
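The Kalman filtering procedure the paper builds on can be illustrated with a minimal scalar filter: each new noisy observation is fused with a prediction from the previous estimate, weighted by their relative uncertainties. This is a generic textbook sketch, not the paper's deep network; the noise variances `q` and `r` are illustrative assumptions.

```python
def kalman_1d(measurements, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter.
    q: process noise variance, r: measurement noise variance."""
    x, p = measurements[0], 1.0   # initial state estimate and uncertainty
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: uncertainty grows over time
        k = p / (p + r)           # Kalman gain: trust in the measurement
        x = x + k * (z - x)       # update estimate toward the measurement
        p = (1 - k) * p           # uncertainty shrinks after the update
        estimates.append(x)
    return estimates
```

In the artifact-reduction setting, each decoded frame plays the role of a noisy measurement, and the recursive predict/update structure lets information from earlier frames stabilize the restoration of the current one.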
Feature Map Transform Coding for Energy-Efficient CNN Inference
We analyze the performance of our approach on a variety of CNN architectures and demonstrate that an FPGA implementation of ResNet-18 with our approach reduces the memory energy footprint by around 40% compared to a quantized network, with negligible impact on accuracy.
DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks
The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compressibility for little loss of information.
SME-Net: Sparse Motion Estimation for Parametric Video Prediction Through Reinforcement Learning
Inspired by the success of sparse motion-based prediction for video compression, we propose a parametric video prediction on a sparse motion field composed of a few critical pixels and their motion vectors.
Variable Rate Deep Image Compression with Modulated Autoencoder
Addressing these limitations, we formulate the problem of variable rate-distortion optimization for deep image compression, and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific rate-distortion tradeoff via a modulation network.
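The rate-distortion optimization underlying this line of work minimizes a Lagrangian objective L = R + λ·D, where a single λ selects one operating point on the rate-distortion curve. A toy sketch (not the MAE model itself; the candidate points are made up) of how λ picks among encoder settings:

```python
def best_operating_point(points, lam):
    """Pick the (rate, distortion) pair minimizing R + lam * D.
    points: list of (rate_bits, distortion_mse) candidates, e.g.
    illustrative stand-ins for different quantization settings."""
    return min(points, key=lambda p: p[0] + lam * p[1])

# Three hypothetical operating points of an encoder.
candidates = [(100, 10.0), (200, 4.0), (400, 1.0)]
low_quality = best_operating_point(candidates, lam=10)    # favors low rate
high_quality = best_operating_point(candidates, lam=100)  # favors low distortion
```

A conventional learned codec is trained for one fixed λ; the modulated autoencoder's contribution is covering many λ values with a single shared network.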
Deep motion estimation for parallel inter-frame prediction in video compression
Standard video codecs rely on optical flow to guide inter-frame prediction: pixels from reference frames are moved via motion vectors to predict target video frames.
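The motion-vector idea behind inter-frame prediction can be shown with exhaustive block matching: for a block in the target frame, search a small window in the reference frame for the displacement that predicts it best. This is a classical codec-style sketch under a sum-of-absolute-differences criterion, not the paper's learned estimator.

```python
import numpy as np

def motion_vector(ref, target, block, search=2):
    """Exhaustive block matching: return the displacement (dy, dx)
    of the best-matching block in the reference frame, by SAD.
    block: (y, x, size) of the block in the target frame."""
    y, x, s = block
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + s > ref.shape[0] or rx + s > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[ry:ry+s, rx:rx+s].astype(int)
                         - target[y:y+s, x:x+s].astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

# A bright 2x2 patch moves one pixel to the right between frames.
ref = np.zeros((6, 6), dtype=np.uint8)
ref[2:4, 2:4] = 255
tgt = np.zeros((6, 6), dtype=np.uint8)
tgt[2:4, 3:5] = 255
mv = motion_vector(ref, tgt, block=(2, 3, 2))
```

The encoder then transmits only the motion vector plus the (small) prediction residual, rather than the raw block; the serial dependence of such searches is part of what motivates the parallel deep estimator proposed here.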
A Unified End-to-End Framework for Efficient Deep Image Compression
Our EDIC method can also be readily incorporated with the Deep Video Compression (DVC) framework to further improve the video compression performance.