LCCM-VC: Learned Conditional Coding Modes for Video Compression

28 Oct 2022 · Hadi Hadizadeh, Ivan V. Bajić

End-to-end learning-based video compression has made steady progress over the last several years. However, unlike learning-based image coding, which has already surpassed its handcrafted counterparts, learning-based video coding still has some way to go. In this paper, we present learned conditional coding modes for video coding (LCCM-VC), a video coding model that achieves state-of-the-art results among learning-based video coding methods. Our model utilizes conditional coding engines from the recent conditional augmented normalizing flows (CANF) pipeline, and introduces additional coding modes to improve compression performance. The compression efficiency is especially good in the high-quality/high-bitrate range, which is important for broadcast and video-on-demand streaming applications. The implementation of LCCM-VC is available at https://github.com/hadihdz/lccm_vc
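For readers unfamiliar with coding modes, the sketch below illustrates the general idea behind per-block mode selection as used in conventional codecs: each block is coded in the mode (skip, residual/inter, or intra) that minimizes a rate-distortion cost D + λ·R. This is only a toy illustration of the concept the abstract alludes to; LCCM-VC's modes are learned and conditional, and the quantizer, rate proxy, and function names here are hypothetical.

```python
import numpy as np

def code_block(x, pred, lam=0.1, q=8):
    """Toy per-block mode decision (conceptual illustration only, not
    LCCM-VC's learned modes): pick the mode with the lowest
    rate-distortion cost D + lam * R among skip, residual, and intra."""

    def quantize(v):
        # Uniform scalar quantizer with step size q.
        return np.round(v / q) * q

    # Mode 0: skip -- reuse the prediction as-is, zero rate.
    skip_cost = np.mean((x - pred) ** 2)

    # Mode 1: residual (inter) -- code the prediction residual.
    res = quantize(x - pred)
    res_rate = np.count_nonzero(res)  # crude rate proxy
    res_cost = np.mean((x - (pred + res)) ** 2) + lam * res_rate

    # Mode 2: intra -- code the block directly, ignoring the prediction.
    intra_rec = quantize(x)
    intra_rate = np.count_nonzero(intra_rec)
    intra_cost = np.mean((x - intra_rec) ** 2) + lam * intra_rate

    modes = ["skip", "residual", "intra"]
    costs = [skip_cost, res_cost, intra_cost]
    return modes[int(np.argmin(costs))]
```

For example, a block identical to its prediction selects skip, a block with a small sparse difference selects residual coding, and a block with a poor prediction falls back to intra. A learned conditional codec replaces this hard, handcrafted decision with modes produced by neural networks conditioned on the prediction.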
