Fusionformer: Exploiting the Joint Motion Synergy with Fusion Network Based on Transformer for 3D Human Pose Estimation

8 Oct 2022  ·  Xinwei Yu, Xiaohua Zhang

For the 3D human pose estimation task, most existing methods learn the rules of 2D-to-3D projection from spatial and temporal correlations. However, earlier methods model the global features of all body joints in the time domain while ignoring the motion trajectories of individual joints. The recent work [29] observes that different joints move differently and therefore handles the temporal relationship of each joint separately. However, we found that different joints show the same movement trends under some specific actions. Our proposed Fusionformer therefore introduces a self-trajectory module and a mutual-trajectory module on top of the spatio-temporal module. The global spatio-temporal features and the local joint trajectory features are then fused in parallel through a linear network. Finally, to eliminate the influence of poor 2D poses on the 3D projection, we introduce a pose refinement network to balance the consistency of the 3D projection. We evaluate the proposed method on two benchmark datasets (Human3.6M and MPI-INF-3DHP). Compared with the baseline method PoseFormer, our method improves MPJPE by 2.4% and P-MPJPE by 4.3% on the Human3.6M dataset.
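The paper's code is not reproduced here, but a minimal sketch may clarify the parallel fusion step described above: the global features from the spatio-temporal module and the local features from the two trajectory modules are concatenated per joint and passed through a linear layer to regress the 3D pose. All shapes, feature dimensions, and the name `FusionHead` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of linear fusion of two feature streams (assumed shapes).
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, dim_global: int, dim_traj: int):
        super().__init__()
        # Fuse the concatenated streams and regress (x, y, z) per joint.
        self.fuse = nn.Linear(dim_global + dim_traj, 3)

    def forward(self, feat_global: torch.Tensor, feat_traj: torch.Tensor) -> torch.Tensor:
        # feat_global: (batch, joints, dim_global) from the spatio-temporal module
        # feat_traj:   (batch, joints, dim_traj) from the self-/mutual-trajectory modules
        fused = torch.cat([feat_global, feat_traj], dim=-1)
        return self.fuse(fused)  # (batch, joints, 3) 3D pose

# Usage with hypothetical dimensions (17 joints, as in Human3.6M):
pose3d = FusionHead(256, 128)(torch.randn(2, 17, 256), torch.randn(2, 17, 128))
```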

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 3D Human Pose Estimation | Human3.6M | Fusionformer (f=9) | Average MPJPE (mm) | 48.7 | #144 |
| 3D Human Pose Estimation | MPI-INF-3DHP | Fusionformer (f=9) | AUC | 70 | #15 |
| 3D Human Pose Estimation | MPI-INF-3DHP | Fusionformer (f=9) | MPJPE (mm) | 28.2 | #9 |
| 3D Human Pose Estimation | MPI-INF-3DHP | Fusionformer (f=9) | PCK | 97.9 | #11 |
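For reference, the two headline metrics can be computed as follows. This is a minimal sketch of the standard evaluation protocol (MPJPE and Procrustes-aligned P-MPJPE), written from the metric definitions rather than taken from the paper; function names and array shapes are assumptions.

```python
# MPJPE and P-MPJPE for poses given as (joints, 3) arrays in millimetres.
import numpy as np

def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    # Mean per-joint position error: average Euclidean distance per joint.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def p_mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    # MPJPE after rigid alignment of pred to gt via the optimal
    # similarity transform (scale, rotation, translation).
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(X.T @ Y)
    if np.linalg.det(U @ Vt) < 0:  # avoid reflections
        U[:, -1] *= -1
        s[-1] *= -1
    R = U @ Vt
    scale = s.sum() / (X ** 2).sum()
    aligned = scale * X @ R + mu_g
    return mpjpe(aligned, gt)

# Usage with random 17-joint poses:
p1, p2 = np.random.randn(17, 3), np.random.randn(17, 3)
print(mpjpe(p1, p2), p_mpjpe(p1, p2))
```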
