Video Compression
103 papers with code • 0 benchmarks • 4 datasets
Video Compression is the process of reducing the size of a video file by exploiting spatial redundancies within each frame and temporal redundancies across frames. The goal of a successful Video Compression system is to reduce data volume while retaining the perceptual quality of the decompressed video.
Source: Adversarial Video Compression Guided by Soft Edge Detection
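The temporal-redundancy idea above can be illustrated with a toy sketch (not any specific codec): since consecutive frames are usually nearly identical, encoding the residual between frames yields far less data than encoding each frame in full. The frame contents here are synthetic assumptions for illustration.

```python
import numpy as np

# Toy illustration of temporal redundancy (not a real codec):
# two consecutive "frames" that differ only in a small region,
# as neighbouring video frames typically do.
rng = np.random.default_rng(0)
frame_prev = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
frame_curr = frame_prev.copy()
frame_curr[10:20, 10:20] += 5  # small local change between frames

# Encode the residual (difference) instead of the full frame.
residual = frame_curr - frame_prev

# The residual is almost entirely zeros, so an entropy coder would
# compress it far better than the raw frame.
print(np.count_nonzero(residual), "nonzero of", residual.size, "samples")
```

Only the 10×10 changed region is nonzero (100 of 4096 samples), which is why inter-frame (predictive) coding dominates video compression.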
Benchmarks
These leaderboards are used to track progress in Video Compression.
Libraries
Use these libraries to find Video Compression models and implementations.
Latest papers
CANF-VC: Conditional Augmented Normalizing Flows for Video Compression
CANF-VC leverages conditional augmented normalizing flows (ANF) to learn a video generative model for conditional inter-frame coding.
Flexible-Rate Learned Hierarchical Bi-Directional Video Compression With Motion Refinement and Frame-Level Bit Allocation
This paper presents improvements and novel additions to our recent work on end-to-end optimized hierarchical bi-directional video compression to further advance the state-of-the-art in learned video compression.
VCT: A Video Compression Transformer
The resulting video compression transformer outperforms previous methods on standard video compression data sets.
Introducing AV1 Codec-Level Video Steganography
Steganography is the ancient art of concealing messages within data.
An Interactive Annotation Tool for Perceptual Video Compression
We use this tool to collect data in the wild (10 videos, 17 users) and apply the resulting importance maps to x264 coding. A subjective study shows that, at the same bitrate, videos generated with the tool look perceptually better and are 1.9 times more likely to be preferred by viewers.
End-to-End Learned Block-Based Image Compression with Block-Level Masked Convolutions and Asymptotic Closed Loop Training
CNNs operate on entire input images.
Diffusion Probabilistic Modeling for Video Generation
Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation.
Dilated convolutional neural network-based deep reference picture generation for video compression
Motion estimation and motion compensation are indispensable parts of inter prediction in video coding.
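Motion estimation, mentioned above as a core part of inter prediction, is classically done by block matching: for each block of the current frame, search a small window in the reference frame for the displacement that minimizes a distortion measure such as the sum of absolute differences (SAD). A minimal exhaustive-search sketch follows; the function name, frame contents, and parameters are illustrative assumptions, not the paper's method.

```python
import numpy as np

def estimate_motion(ref, cur, top, left, block=8, search=4):
    """Exhaustive block-matching search over a +/-`search` window.

    Returns the motion vector (dy, dx) that minimizes SAD between the
    current-frame block at (top, left) and the reference frame.
    """
    target = cur[top:top + block, left:left + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Skip candidate blocks that fall outside the reference frame.
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()  # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

# Shift a synthetic reference frame by (2, 3) to simulate global motion,
# then confirm the search recovers the displacement.
rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(estimate_motion(ref, cur, top=8, left=8))  # → (-2, -3)
```

In a real codec the decoder then forms the prediction by copying the matched reference block (motion compensation) and only the residual plus the motion vector is transmitted.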
Neural Residual Flow Fields for Efficient Video Representations
Inspired by standard video compression algorithms, we propose a neural field architecture for representing and compressing videos that deliberately removes data redundancy through the use of motion information across video frames.
Learning Cross-Scale Weighted Prediction for Efficient Neural Video Compression
Neural video codecs have demonstrated great potential in video transmission and storage applications.