no code implementations • 3 Feb 2024 • Bolin Chen, Shanzhi Yin, Peilin Chen, Shiqi Wang, Yan Ye
Artificial Intelligence Generated Content (AIGC) is leading a new technical revolution in the acquisition of digital content, driving visual compression toward competitive performance gains and more diverse functionalities than traditional codecs offer.
1 code implementation • 5 Nov 2023 • Bolin Chen, Jie Chen, Shiqi Wang, Yan Ye
Generative Face Video Coding (GFVC) techniques can exploit the compact representation of facial priors and the strong inference capability of deep generative models, achieving high-quality face video communication in ultra-low bandwidth scenarios.
no code implementations • 24 Sep 2023 • Binzhe Li, Bolin Chen, Zhao Wang, Shiqi Wang, Yan Ye
In this letter, we envision a new metaverse communication paradigm for virtual avatar faces and develop a semantic face compression scheme based on compact 3D facial descriptors.
2 code implementations • 20 Feb 2023 • Bolin Chen, Zhao Wang, Binzhe Li, Shurun Wang, Shiqi Wang, Yan Ye
In this paper, we propose a novel framework for Interactive Face Video Coding (IFVC), which allows humans to interact with the intrinsic visual representations instead of the signals.
1 code implementation • NeurIPS 2021 • Zheng Chang, Xinfeng Zhang, Shanshe Wang, Siwei Ma, Yan Ye, Xiang Xinguang, Wen Gao
The attention module aims to learn an attention map based on the correlations between the current spatial state and the historical spatial states.
Ranked #18 on Video Prediction on Moving MNIST
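The attention module described above can be illustrated with a minimal sketch: a softmax attention map is computed from the correlations between the current spatial state and a stack of historical spatial states, then used to aggregate the history. The shapes, scaling, and function name here are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def attention_map(current, history):
    """Compute a softmax attention map from correlations between the
    current spatial state and historical spatial states, then aggregate.

    current: (d,) feature vector for the current state (illustrative)
    history: (t, d) stack of t historical state vectors (illustrative)
    Returns the (t,) attention weights and the (d,) attended context.
    """
    # Scaled dot-product correlations between current and each history step
    scores = history @ current / np.sqrt(current.shape[0])
    # Softmax over the history dimension yields the attention map
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # Attention-weighted aggregation of the historical states
    context = weights @ history
    return weights, context

rng = np.random.default_rng(0)
w, ctx = attention_map(rng.normal(size=8), rng.normal(size=(4, 8)))
```

In practice such modules operate on full spatial feature maps rather than vectors; the vector form above only shows the correlation-then-softmax-then-aggregate pattern.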
no code implementations • 1 Jul 2021 • Shurun Wang, Zhao Wang, Shiqi Wang, Yan Ye
In this paper, we show that the design and optimization of network architecture could be further improved for compression towards machine vision.
no code implementations • 8 Apr 2021 • Zhao Wang, Changyue Ma, Yan Ye
In this paper, we propose an online-scaling-based multi-density attention network for loop filtering in video compression.
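As a rough illustration of multi-density processing for loop filtering, the toy sketch below smooths a reconstructed frame at several densities (downsampling factors) and blends the branches with per-scale weights; the learned attention and online scaling of the actual network are replaced here by fixed weights, and all names and parameters are assumptions for illustration only.

```python
import numpy as np

def box_down(x, f):
    """Average-pool a 2D array by factor f (crop to a multiple of f)."""
    h, w = x.shape
    return x[:h // f * f, :w // f * f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def box_up(x, f, shape):
    """Nearest-neighbor upsample by factor f, cropped to `shape`."""
    up = np.repeat(np.repeat(x, f, axis=0), f, axis=1)
    return up[:shape[0], :shape[1]]

def multi_density_filter(rec, scales=(1, 2, 4), weights=(0.5, 0.3, 0.2)):
    """Hypothetical multi-density blend of a reconstructed frame `rec`.
    Each branch processes the frame at a different density; the fixed
    `weights` stand in for the learned attention / online scaling."""
    h, w = rec.shape
    out = np.zeros_like(rec, dtype=float)
    for f, a in zip(scales, weights):
        branch = rec if f == 1 else box_up(box_down(rec, f), f, (h, w))
        out += a * branch
    return out
```

The design intuition is that coarser densities capture larger-scale compression artifacts, while the full-resolution branch preserves detail; the real network learns the per-pixel blending instead of using fixed weights.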
no code implementations • 4 Mar 2021 • Changyue Ma, Zhao Wang, Ruling Liao, Yan Ye
The proposed cross channel context model is combined with the joint autoregressive and hierarchical prior entropy model.
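The idea of a cross-channel context model can be sketched as follows: the entropy parameters (mean and scale) of latent channel c are predicted from the hierarchical (hyper) prior together with the already-decoded channels 0..c-1, so decoding proceeds autoregressively across channels. This toy version with scalar mixing weights is an assumption for illustration; the paper's model uses learned networks for this prediction.

```python
import numpy as np

def cross_channel_params(y, hyper, w_hyper, w_ctx):
    """Hypothetical cross-channel context model sketch.

    y:       (C, d) latent channels, decoded in order 0..C-1
    hyper:   (d,) feature from the hierarchical (hyper) prior
    w_hyper: (C,) per-channel weight on the hyperprior feature
    w_ctx:   (C,) per-channel weight on the cross-channel context
    Returns per-channel (means, scales) for the entropy model.
    """
    C = y.shape[0]
    means, scales = [], []
    for c in range(C):
        # Context from previously decoded channels only (causal across channels)
        ctx = y[:c].sum(axis=0) if c > 0 else np.zeros_like(y[0])
        feat = w_hyper[c] * hyper + w_ctx[c] * ctx
        means.append(feat)
        # Strictly positive scale parameter for the conditional distribution
        scales.append(np.exp(-np.abs(feat)) + 1e-3)
    return np.stack(means), np.stack(scales)
```

Combining this cross-channel context with the joint autoregressive and hierarchical prior means each channel's distribution is conditioned on both the hyperprior and the channels decoded before it.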