Lightweight Hardware Transform Design for the Versatile Video Coding 4K ASIC Decoders

24 Jul 2021 · Ibrahim Farhat, Wassim Hamidouche, Adrien Grill, Daniel Ménard, Olivier Déforges

Versatile Video Coding (VVC) is the next-generation video coding standard, finalized in July 2020. VVC introduces new coding tools that enhance coding efficiency compared to its predecessor, High Efficiency Video Coding (HEVC). These new tools have a significant impact on VVC software decoder complexity, estimated at twice that of an HEVC decoder. In particular, the VVC transform module includes separable and non-separable transforms, namely the Multiple Transform Selection (MTS) and Low Frequency Non-Separable Transform (LFNST) tools, respectively. In this paper, we present an area-efficient hardware architecture of the inverse transform module for a VVC decoder. The proposed design uses a total of 64 regular multipliers in a pipelined architecture targeting Application-Specific Integrated Circuit (ASIC) platforms. It is a multi-standard architecture that supports the transform modules of recent MPEG standards, including Advanced Video Coding (AVC), HEVC and VVC. The architecture leverages all primary and secondary transform optimisations, including butterfly decomposition, coefficient zeroing and the inherent linear relationships between the transforms. Synthesis results show that the proposed design sustains a constant throughput of 1 sample per cycle and a constant latency for all block sizes. The proposed hardware inverse transform module operates at a 600 MHz frequency, enabling real-time decoding of 4K video at 30 frames per second in 4:2:2 chroma sub-sampling format. The module has been integrated in an ASIC UHD decoder targeting energy-aware decoding of VVC videos on consumer devices.
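
To illustrate the butterfly decomposition the abstract refers to, below is a minimal software sketch (not the paper's hardware design) of the even/odd "partial butterfly" factorization of an 8-point inverse DCT-2, the core primary transform shared by AVC, HEVC and VVC. The 8-bit integer coefficient matrix is the standard DCT-2 basis used by HEVC and VVC; the shift, rounding and clipping values are illustrative placeholders rather than the exact per-stage parameters of the proposed architecture.

/*
 * Sketch: 8-point 1-D inverse DCT-2 for one column of coefficients,
 * computed with the even/odd (partial butterfly) decomposition.
 * Shift/rounding/clipping values are illustrative only.
 */
#include <stdint.h>

static int16_t clip16(int32_t v)
{
    if (v < -32768) return -32768;
    if (v >  32767) return  32767;
    return (int16_t)v;
}

/* src: 8 transform coefficients; dst: 8 inverse-transformed samples. */
static void inv_dct2_8_butterfly(const int16_t src[8], int16_t dst[8], int shift)
{
    /* Standard 8-point DCT-2 integer basis (rows = frequencies). */
    static const int16_t T[8][8] = {
        { 64,  64,  64,  64,  64,  64,  64,  64 },
        { 89,  75,  50,  18, -18, -50, -75, -89 },
        { 83,  36, -36, -83, -83, -36,  36,  83 },
        { 75, -18, -89, -50,  50,  89,  18, -75 },
        { 64, -64, -64,  64,  64, -64, -64,  64 },
        { 50, -89,  18,  75, -75, -18,  89, -50 },
        { 36, -83,  83, -36, -36,  83, -83,  36 },
        { 18, -50,  75, -89,  89, -75,  50, -18 },
    };
    const int32_t add = 1 << (shift - 1);
    int32_t E[4], O[4], EE[2], EO[2];

    /* Odd part: only odd-frequency inputs feed these four outputs. */
    for (int k = 0; k < 4; k++)
        O[k] = T[1][k] * src[1] + T[3][k] * src[3]
             + T[5][k] * src[5] + T[7][k] * src[7];

    /* Even part, split again into even-even and even-odd halves. */
    EO[0] = T[2][0] * src[2] + T[6][0] * src[6];
    EO[1] = T[2][1] * src[2] + T[6][1] * src[6];
    EE[0] = T[0][0] * src[0] + T[4][0] * src[4];
    EE[1] = T[0][1] * src[0] + T[4][1] * src[4];

    E[0] = EE[0] + EO[0];
    E[1] = EE[1] + EO[1];
    E[2] = EE[1] - EO[1];
    E[3] = EE[0] - EO[0];

    /* Recombine the symmetric/antisymmetric halves of the output. */
    for (int k = 0; k < 4; k++) {
        dst[k]     = clip16((E[k] + O[k] + add) >> shift);
        dst[7 - k] = clip16((E[k] - O[k] + add) >> shift);
    }
}

A direct 8x8 matrix multiplication would require 64 multiplications per column, while the decomposition above needs 16 + 4 + 4 = 24; the same recursive split extends to the 16-, 32- and 64-point DCT-2 sizes allowed in VVC, which is where the multiplier savings in a hardware design come from. For the stated operating point, 4K (3840x2160) at 30 frames per second in 4:2:2 format corresponds to roughly 249 Msamples/s of luma plus an equal amount of chroma, about 498 Msamples/s in total, which a 1 sample-per-cycle pipeline covers at 600 MHz.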
