no code implementations • 13 Apr 2022 • Zipeng Ye, Zhiyao Sun, Yu-Hui Wen, Yanan Sun, Tian Lv, Ran Yi, Yong-Jin Liu
In this paper, we propose a method to generate talking-face videos with continuously controllable expressions in real-time.
no code implementations • 16 Jan 2022 • Zipeng Ye, Mengfei Xia, Ran Yi, Juyong Zhang, Yu-Kun Lai, Xuwei Huang, Guoxin Zhang, Yong-Jin Liu
In this paper, we present a dynamic convolution kernel (DCK) strategy for convolutional neural networks.
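The core idea behind a dynamic convolution kernel is that the kernel weights are not fixed parameters but are generated on the fly from a conditioning feature. A minimal sketch, assuming a hypothetical linear kernel generator and a 1D signal (the paper's actual DCK operates inside a CNN; `kernel_network` and its fixed-seed weights are illustrative stand-ins):

```python
import random


def kernel_network(feature, kernel_size=3):
    # Hypothetical kernel generator: a linear map from a conditioning
    # feature vector to convolution weights (weights fixed by a seed
    # here purely for reproducibility of the sketch).
    rng = random.Random(0)
    w = [[rng.uniform(-0.1, 0.1) for _ in feature] for _ in range(kernel_size)]
    return [sum(wi * f for wi, f in zip(row, feature)) for row in w]


def dynamic_conv1d(signal, feature, kernel_size=3):
    # Generate the kernel from the conditioning feature, then apply a
    # same-padded 1D convolution with it.
    k = kernel_network(feature, kernel_size)
    pad = kernel_size // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(k[j] * padded[i + j] for j in range(kernel_size))
            for i in range(len(signal))]
```

Different conditioning features yield different kernels, so the same convolution layer can adapt its behavior per input — the property the DCK strategy exploits.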
1 code implementation • 15 Mar 2020 • Zipeng Ye, Mengfei Xia, Yanan Sun, Ran Yi, MinJing Yu, Juyong Zhang, Yu-Kun Lai, Yong-Jin Liu
The most challenging issue for our system is that the source domain of face photos (characterized by normal 2D faces) is significantly different from the target domain of 3D caricatures (characterized by 3D exaggerated face shapes and textures).
1 code implementation • 24 Feb 2020 • Ran Yi, Zipeng Ye, Juyong Zhang, Hujun Bao, Yong-Jin Liu
In this paper, we address this problem by proposing a deep neural network model that takes an audio signal A of a source person and a very short video V of a target person as input, and outputs a synthesized high-quality talking-face video with personalized head pose (making use of the visual information in V), expression, and lip synchronization (considering both A and V).
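The described data flow — pose inferred from the target video V, expression and lip motion conditioned on both the audio A and V — can be sketched as a toy interface. All functions below are hypothetical stand-ins, not the paper's actual network modules:

```python
def extract_pose(video_frames):
    # Hypothetical pose estimator: one (yaw, pitch, roll) vector per
    # frame of the target video V.
    return [(0.0, 0.0, 0.0) for _ in video_frames]


def extract_expression(audio_feats, video_frames):
    # Hypothetical module conditioning on both A and V: one expression
    # coefficient per audio frame (here just a per-frame mean, as a
    # placeholder for a learned mapping).
    return [sum(f) / len(f) for f in audio_feats]


def synthesize_talking_face(audio_feats, video_frames):
    # Pose comes from the target video V; expression and lip sync
    # consider both the audio A and V, mirroring the described inputs.
    poses = extract_pose(video_frames)
    exprs = extract_expression(audio_feats, video_frames)
    # One output frame per audio frame; the short video's poses are
    # reused (clamped) when A is longer than V.
    n = len(exprs)
    return [{"pose": poses[min(i, len(poses) - 1)], "expr": exprs[i]}
            for i in range(n)]
```

The point of the sketch is only the interface: the audio stream sets the output length and drives lip motion, while the short target video supplies identity and head pose.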
no code implementations • 17 Nov 2019 • Yiheng Han, Wang Zhao, Jia Pan, Zipeng Ye, Ran Yi, Yong-Jin Liu
Motion planning for robots with high degrees of freedom (DOFs) is an important problem in robotics, for which sampling-based methods in the configuration space C are one popular solution.
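A rapidly-exploring random tree (RRT) is a standard sampling-based planner in configuration space. A minimal sketch, assuming a 2D unit-square configuration space and a hypothetical collision checker `is_free` (the paper's robots have many more DOFs; the algorithm is dimension-agnostic):

```python
import math
import random


def rrt_plan(start, goal, is_free, step=0.15, iters=2000,
             goal_tol=0.2, goal_bias=0.1, seed=0):
    # Minimal RRT sketch in the configuration space [0, 1]^2.
    # is_free(q) is a placeholder collision checker for configuration q.
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random configuration, occasionally biased to the goal.
        q_rand = goal if rng.random() < goal_bias else (rng.random(), rng.random())
        # Find the nearest existing tree node to the sample.
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], q_rand))
        q_near = nodes[i_near]
        d = math.dist(q_near, q_rand)
        if d == 0:
            continue
        # Steer one fixed-size step from q_near toward q_rand.
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if not is_free(q_new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(q_new)
        if math.dist(q_new, goal) < goal_tol:
            # Trace the path from the new node back to the start.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None  # no path found within the iteration budget
```

In an obstacle-free space (`is_free = lambda q: True`) the tree quickly connects start to goal; real planners plug in a kinematic collision checker for `is_free`.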
no code implementations • ICCV 2019 • Zipeng Ye, Ran Yi, Minjing Yu, Yong-Jin Liu, Ying He
Our key idea is that GCVT is prone to placing more generators in manifold regions where q-distances differ from geodesic distances; therefore, after a few iterations, the q-distance-induced tessellation becomes an exact GCVT.
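The fixed-point iteration underlying a centroidal Voronoi tessellation (CVT) is Lloyd's method: partition by nearest generator, then move each generator to its cell's centroid. A sketch on a discrete planar sample set, assuming Euclidean distance as a stand-in for the paper's q-distance on a manifold (the geodesic machinery of GCVT is not reproduced here):

```python
import math
import random


def lloyd_cvt(samples, k=4, iters=20, seed=0):
    # Lloyd's method for a CVT on a discrete sample set. GCVT replaces
    # the Euclidean distance below with a (q-)distance on the manifold;
    # this planar version only illustrates the fixed-point iteration.
    rng = random.Random(seed)
    generators = list(rng.sample(samples, k))
    for _ in range(iters):
        # Assign each sample to its nearest generator (Voronoi partition).
        cells = [[] for _ in range(k)]
        for p in samples:
            j = min(range(k), key=lambda i: math.dist(generators[i], p))
            cells[j].append(p)
        # Move each generator to the centroid of its cell.
        for j, cell in enumerate(cells):
            if cell:
                generators[j] = (sum(p[0] for p in cell) / len(cell),
                                 sum(p[1] for p in cell) / len(cell))
    return generators
```

Each iteration only reassigns samples and recenters generators, which is why swapping in a cheaper q-distance (as the paper proposes) still converges to the same tessellation once the two distances agree on the final partition.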