Video Compression
102 papers with code • 0 benchmarks • 4 datasets
Video Compression is the process of reducing the size of a video file by exploiting spatial redundancy within each frame and temporal redundancy across frames. The goal of a successful Video Compression system is to reduce data volume while retaining the perceptual quality of the decompressed video.
Source: Adversarial Video Compression Guided by Soft Edge Detection
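The temporal redundancy mentioned above can be illustrated with a toy sketch (not drawn from any of the papers listed here): consecutive frames usually differ in only a few pixels, so storing a prediction residual between frames is far more compressible than storing each frame independently. The frame sizes and values below are hypothetical.

```python
import numpy as np

# Two hypothetical 8x8 grayscale frames that differ in a single pixel,
# mimicking a mostly static scene between consecutive frames.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, size=(8, 8)).astype(np.int16)
frame2 = frame1.copy()
frame2[3, 4] += 5            # only one pixel changes between frames

# The temporal prediction residual is almost entirely zeros, which an
# entropy coder can represent in far fewer bits than the raw frame.
residual = frame2 - frame1
print(np.count_nonzero(residual))   # 1 nonzero value out of 64
```

Real codecs go further by predicting each block from a motion-compensated region of the reference frame, so the residual stays sparse even when objects move.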
Benchmarks
These leaderboards are used to track progress in Video Compression.
Libraries
Use these libraries to find Video Compression models and implementations.
Latest papers with no code
NERV++: An Enhanced Implicit Neural Video Representation
Neural fields, also known as implicit neural representations (INRs), have shown a remarkable capability of representing, generating, and manipulating various data types, allowing for continuous data reconstruction at a low memory footprint.
Resolution-Agnostic Neural Compression for High-Fidelity Portrait Video Conferencing via Implicit Radiance Fields
In this paper, we propose a novel low bandwidth neural compression approach for high-fidelity portrait video conferencing using implicit radiance fields to achieve both major objectives.
Distributed Radiance Fields for Edge Video Compression and Metaverse Integration in Autonomous Driving
For autonomous mobility, it enables new possibilities with edge computing and digital twins (DTs) that offer virtual prototyping, prediction, and more.
Analysis of Neural Video Compression Networks for 360-Degree Video Coding
As such, the state-of-the-art H.266/VVC video coding standard integrates dedicated tools for 360-degree video, and considerable efforts have been put into designing 360-degree projection formats with improved compression efficiency.
A Neural-network Enhanced Video Coding Framework beyond ECM
In this paper, a hybrid video compression framework is proposed that serves as a demonstrative showcase of deep learning-based approaches extending beyond the confines of traditional coding methodologies.
Motion-Adaptive Inference for Flexible Learned B-Frame Compression
As a remedy, we propose controlling the motion range for flow prediction during inference (to approximately match the range of motions in the training data) by downsampling video frames adaptively, according to the amount of motion and the level of hierarchy, in order to compress all B-frames using a single flexible-rate model.
Efficient Dynamic-NeRF Based Volumetric Video Coding with Rate Distortion Optimization
Volumetric videos, benefiting from immersive 3D realism and interactivity, hold vast potential for various applications, while the tremendous data volume poses significant challenges for compression.
UCVC: A Unified Contextual Video Compression Framework with Joint P-frame and B-frame Coding
This paper presents a learned video compression method in response to the video compression track of the 6th Challenge on Learned Image Compression (CLIC) at DCC 2024. Specifically, we propose a unified contextual video compression framework (UCVC) for joint P-frame and B-frame coding.
LVC-LGMC: Joint Local and Global Motion Compensation for Learned Video Compression
To validate the effectiveness of our proposed LGMC, we integrate it with DCVC-TCM and obtain learned video compression with joint local and global motion compensation (LVC-LGMC).
A Neural Enhancement Post-Processor with a Dynamic AV1 Encoder Configuration Strategy for CLIC 2024
At practical streaming bitrates, traditional video compression pipelines frequently lead to visible artifacts that degrade perceptual quality.