Video Compression

102 papers with code • 0 benchmarks • 4 datasets

Video Compression is the process of reducing the size of a video by exploiting spatial redundancy within individual frames and temporal redundancy across frames. The goal of a successful Video Compression system is to reduce data volume while retaining the perceptual quality of the decompressed video.

Source: Adversarial Video Compression Guided by Soft Edge Detection
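
To make the redundancy idea concrete, here is a minimal sketch of temporal (predictive) coding: each frame is stored as a quantized residual against the previously reconstructed frame, the core idea behind P-frames. The `quantize` helper and the step size `q` are hypothetical placeholders for a real transform and entropy coder, not part of any listed paper.

```python
import numpy as np

def quantize(x, q=8.0):
    # Crude uniform quantizer standing in for a real transform + entropy coder.
    return np.round(x / q) * q

def encode_sequence(frames, q=8.0):
    """Temporal-prediction sketch: store frame 0, then only quantized residuals.

    frames: iterable of HxW grayscale arrays (uint8 or float).
    Returns the residual stream and the reconstructions a decoder would produce,
    so that encoder and decoder predictions stay in sync.
    """
    residual_stream, recon = [], []
    prev = None
    for f in frames:
        f = f.astype(np.float32)
        if prev is None:                    # intra (I) frame: no prediction
            r = quantize(f, q)
            rec = r
        else:                               # inter (P) frame: predict from prev
            r = quantize(f - prev, q)       # temporal redundancy -> small residual
            rec = prev + r
        residual_stream.append(r)
        recon.append(rec)
        prev = rec                          # always predict from the reconstruction
    return residual_stream, recon
```

On mostly static content the residuals are close to zero and compress far better than the raw frames, which is exactly the redundancy real codecs exploit with motion compensation and entropy coding.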

Latest papers with no code

NERV++: An Enhanced Implicit Neural Video Representation

no code yet • 28 Feb 2024

Neural fields, also known as implicit neural representations (INRs), have shown a remarkable capability of representing, generating, and manipulating various data types, allowing for continuous data reconstruction at a low memory footprint.
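
As background for the INR terminology used here, the sketch below (a generic PyTorch illustration, not the NERV++ architecture) fits a small coordinate MLP that maps a normalized (t, y, x) coordinate to a pixel intensity, so a short clip is stored as network weights and decoded by querying coordinates.

```python
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    """Tiny coordinate MLP: (t, y, x) in [0, 1]^3 -> grayscale intensity."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, coords):
        return self.net(coords)

def fit_inr(video, steps=1000, lr=1e-3):
    """Overfit the MLP to one tiny clip; the weights then *are* the compressed video.

    video: float tensor of shape (T, H, W) with values in [0, 1].
    Uses full-batch optimization for simplicity, so keep the clip small.
    """
    T, H, W = video.shape
    t, y, x = torch.meshgrid(
        torch.linspace(0, 1, T), torch.linspace(0, 1, H), torch.linspace(0, 1, W),
        indexing="ij",
    )
    coords = torch.stack([t, y, x], dim=-1).reshape(-1, 3)
    targets = video.reshape(-1, 1)

    model = VideoINR()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), targets)
        loss.backward()
        opt.step()
    return model  # decode any (t, y, x) by querying model(coords)
```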

Resolution-Agnostic Neural Compression for High-Fidelity Portrait Video Conferencing via Implicit Radiance Fields

no code yet • 26 Feb 2024

In this paper, we propose a novel low bandwidth neural compression approach for high-fidelity portrait video conferencing using implicit radiance fields to achieve both major objectives.

Distributed Radiance Fields for Edge Video Compression and Metaverse Integration in Autonomous Driving

no code yet • 22 Feb 2024

For autonomous mobility, it enables new possibilities with edge computing and digital twins (DTs) that offer virtual prototyping, prediction, and more.

Analysis of Neural Video Compression Networks for 360-Degree Video Coding

no code yet • 15 Feb 2024

As such, the state-of-the-art H.266/VVC video coding standard integrates dedicated tools for 360-degree video, and considerable efforts have been put into designing 360-degree projection formats with improved compression efficiency.
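
For readers unfamiliar with the projection formats mentioned in this abstract, the snippet below shows the standard equirectangular (ERP) mapping from a viewing direction to pixel coordinates; it is a generic illustration, not a tool from H.266/VVC or from the paper.

```python
import math

def direction_to_erp(yaw, pitch, width, height):
    """Map a viewing direction to equirectangular (ERP) pixel coordinates.

    yaw   : longitude in radians, in [-pi, pi)
    pitch : latitude  in radians, in [-pi/2, pi/2]
    Returns fractional (u, v) pixel coordinates on a width x height ERP image.
    """
    u = (yaw / (2 * math.pi) + 0.5) * width   # longitude spans the full width
    v = (0.5 - pitch / math.pi) * height      # latitude spans the full height
    return u, v

# The point straight ahead (yaw=0, pitch=0) lands at the image centre.
print(direction_to_erp(0.0, 0.0, 3840, 1920))  # -> (1920.0, 960.0)
```

Because ERP samples the sphere non-uniformly (rows near the poles are heavily oversampled), dedicated 360-degree tools and alternative projection formats can improve compression efficiency.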

A Neural-network Enhanced Video Coding Framework beyond ECM

no code yet • 13 Feb 2024

In this paper, a hybrid video compression framework is proposed that serves as a demonstrative showcase of deep learning-based approaches extending beyond the confines of traditional coding methodologies.

Motion-Adaptive Inference for Flexible Learned B-Frame Compression

no code yet • 13 Feb 2024

As a remedy, we propose controlling the motion range for flow prediction during inference (to approximately match the range of motions in the training data) by downsampling video frames adaptively according to the amount of motion and the level of hierarchy, in order to compress all B-frames using a single flexible-rate model.
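
As a rough picture of that strategy, the hedged sketch below halves the resolution of a frame pair whenever a crude motion proxy exceeds a threshold, so displacements stay within the range a fixed flow predictor was trained on. The frame-difference proxy, the threshold, and the function names are assumptions for illustration only; the paper's actual criterion may differ.

```python
import numpy as np

def downsample2x(frame):
    """Average 2x2 blocks (frame dimensions assumed even)."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def adapt_resolution(ref, cur, motion_thresh=12.0, max_levels=2):
    """Halve resolution until a crude motion proxy falls below the threshold.

    ref, cur: grayscale frames as float arrays in [0, 255].
    Returns the (possibly downsampled) pair and the number of halvings applied,
    which a decoder would need in order to upsample the decoded frame back.
    """
    levels = 0
    while levels < max_levels:
        motion = np.abs(cur - ref).mean()   # stand-in for true flow magnitude
        if motion < motion_thresh:
            break
        ref, cur = downsample2x(ref), downsample2x(cur)
        levels += 1
    return ref, cur, levels
```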

Efficient Dynamic-NeRF Based Volumetric Video Coding with Rate Distortion Optimization

no code yet • 2 Feb 2024

Volumetric videos, benefiting from immersive 3D realism and interactivity, hold vast potential for various applications, while the tremendous data volume poses significant challenges for compression.

UCVC: A Unified Contextual Video Compression Framework with Joint P-frame and B-frame Coding

no code yet • 2 Feb 2024

This paper presents a learned video compression method in response to the video compression track of the 6th Challenge on Learned Image Compression (CLIC) at DCC 2024. Specifically, we propose a unified contextual video compression framework (UCVC) for joint P-frame and B-frame coding.

LVC-LGMC: Joint Local and Global Motion Compensation for Learned Video Compression

no code yet • 1 Feb 2024

To validate the effectiveness of our proposed LGMC, we integrate it with DCVC-TCM and obtain learned video compression with joint local and global motion compensation (LVC-LGMC).

A Neural Enhancement Post-Processor with a Dynamic AV1 Encoder Configuration Strategy for CLIC 2024

no code yet • 31 Jan 2024

At practical streaming bitrates, traditional video compression pipelines frequently lead to visible artifacts that degrade perceptual quality.