Video Compression
102 papers with code • 0 benchmarks • 4 datasets
Video Compression is the process of reducing the size of a video file by exploiting spatial redundancies within each frame and temporal redundancies across frames. The ultimate goal of a successful Video Compression system is to reduce data volume while retaining the perceptual quality of the decompressed video.
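The temporal side of this idea can be illustrated with simple delta coding: instead of storing each frame in full, store only its difference from the previous frame, which is near-zero wherever the scene is static. This is a minimal illustrative sketch (frames modeled as flat lists of pixel values), not the method of any paper listed below; real codecs use motion-compensated prediction plus transform coding and entropy coding.

```python
def encode_delta(prev_frame, curr_frame):
    """Exploit temporal redundancy: keep only per-pixel differences
    from the previous frame (mostly zeros for static content)."""
    return [c - p for p, c in zip(prev_frame, curr_frame)]

def decode_delta(prev_frame, delta):
    """Reconstruct the current frame from the previous frame and the delta."""
    return [p + d for p, d in zip(prev_frame, delta)]

# A mostly static scene yields a sparse, highly compressible delta.
prev = [10, 12, 14, 16]
curr = [10, 12, 15, 16]
delta = encode_delta(prev, curr)   # [0, 0, 1, 0]
restored = decode_delta(prev, delta)
```

Because the delta stream is dominated by zeros, a subsequent entropy coder (e.g. run-length or arithmetic coding) can represent it in far fewer bits than the raw frame.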
Source: Adversarial Video Compression Guided by Soft Edge Detection
Benchmarks
These leaderboards are used to track progress in Video Compression.
Libraries
Use these libraries to find Video Compression models and implementations
Latest papers
Low-complexity Deep Video Compression with A Distributed Coding Architecture
This has inspired a distributed coding architecture that aims to reduce encoding complexity.
Neural Video Compression with Diverse Contexts
Better yet, our codec has surpassed ECM, the next-generation traditional codec still under development, in both the RGB and YUV420 color spaces in terms of PSNR.
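PSNR (peak signal-to-noise ratio), the metric referenced here and throughout these benchmarks, is derived from the mean squared error between the original and reconstructed signal. A minimal sketch of the standard formula, operating on flat lists of 8-bit pixel values (the evaluation pipelines in the papers above compute it per frame over full images):

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    Higher is better; identical signals give infinite PSNR."""
    mse = sum((o - c) ** 2 for o, c in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")  # lossless reconstruction
    return 10 * math.log10(max_val ** 2 / mse)

# Small per-pixel errors still yield a high PSNR.
score = psnr([100, 120, 130], [101, 119, 131])
```

Note that PSNR is a fidelity measure, not a perceptual one, which is why perceptual and machine-oriented metrics also appear in the papers below.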
NIRVANA: Neural Implicit Representations of Videos with Adaptive Networks and Autoregressive Patch-wise Modeling
This design shares computation within each group, across the spatial and temporal dimensions, reducing the video's encoding time.
FFNeRV: Flow-Guided Frame-Wise Neural Representations for Videos
Neural fields, also known as coordinate-based or implicit neural representations, have shown a remarkable capability of representing, generating, and manipulating various forms of signals.
Scalable Hybrid Learning Techniques for Scientific Data Compression
Data compression is becoming critical for storing scientific data because many scientific applications need to store large amounts of data and post process this data for scientific discovery.
Video compression dataset and benchmark of learning-based video-quality metrics
Video-quality measurement is a critical task in video processing.
Advancing Learned Video Compression with In-loop Frame Prediction
In this paper, we propose an Advanced Learned Video Compression (ALVC) approach with the in-loop frame prediction module, which is able to effectively predict the target frame from the previously compressed frames, without consuming any bit-rate.
Perceptual Video Coding for Machines via Satisfied Machine Ratio Modeling
Each score is derived from machine perceptual differences between original and compressed images.
LCCM-VC: Learned Conditional Coding Modes for Video Compression
End-to-end learning-based video compression has made steady progress over the last several years.
Efficient Cross-Modal Video Retrieval with Meta-Optimized Frames
In turn, the frame-level optimization is through gradient descent using the meta loss of video retrieval model computed on the whole video.