LiftFormer: 3D Human Pose Estimation using attention models

1 Sep 2020 · Adrian Llopart

Estimating the 3D position of human joints has become a widely researched topic in recent years. Special emphasis has gone into defining novel methods that extrapolate 2-dimensional data (keypoints) into 3D, namely predicting the root-relative coordinates of joints associated with human skeletons. The latest research trends have shown that Transformer Encoder blocks aggregate temporal information significantly better than previous approaches. Thus, we propose using these models to obtain more accurate 3D predictions by leveraging temporal information through attention mechanisms over ordered sequences of human poses in videos. Our method consistently outperforms the previous best results from the literature on Human3.6M, both with 2D keypoint predictors as input, by 0.3 mm (44.8 MPJPE, a 0.7% improvement), and with ground-truth 2D inputs, by 2 mm (31.9 MPJPE, an 8.4% improvement). It also achieves state-of-the-art performance on the HumanEva-I dataset with 10.5 P-MPJPE (a 22.2% reduction). The number of parameters in our model is easily tunable, and the model is smaller (9.5M) than current methodologies (16.95M and 11.25M) whilst still delivering better performance. Thus, our 3D lifting model's accuracy exceeds that of other end-to-end or SMPL approaches and is comparable to many multi-view methods.
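To make the lifting idea concrete, the sketch below shows one plausible way to run a Transformer Encoder over a window of 2D keypoint frames and regress the root-relative 3D pose of the central frame, in PyTorch. The joint count, window length, layer sizes and the learned positional embedding are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a Transformer-encoder 2D-to-3D "lifting" model in the
# spirit of the paper. All hyperparameters below are assumptions.
import torch
import torch.nn as nn


class LiftingTransformer(nn.Module):
    def __init__(self, num_joints=17, seq_len=27, d_model=256,
                 nhead=8, num_layers=4, dim_feedforward=512):
        super().__init__()
        # Project each frame's flattened 2D keypoints into the model dimension.
        self.input_proj = nn.Linear(num_joints * 2, d_model)
        # Learned positional embedding over the temporal axis (assumption).
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead,
            dim_feedforward=dim_feedforward, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Regress the root-relative 3D pose of the central frame.
        self.head = nn.Linear(d_model, num_joints * 3)
        self.num_joints = num_joints

    def forward(self, keypoints_2d):
        # keypoints_2d: (batch, seq_len, num_joints, 2)
        b, t, j, _ = keypoints_2d.shape
        x = self.input_proj(keypoints_2d.reshape(b, t, j * 2))
        x = x + self.pos_embed[:, :t]
        x = self.encoder(x)            # (batch, seq_len, d_model)
        center = x[:, t // 2]          # attend over the window, keep the middle frame
        return self.head(center).reshape(b, self.num_joints, 3)


if __name__ == "__main__":
    model = LiftingTransformer()
    dummy = torch.randn(2, 27, 17, 2)  # two clips of 27 frames, 17 joints each
    print(model(dummy).shape)          # torch.Size([2, 17, 3])
```

The window length (n = 27, 81 or 243 in the results below) trades temporal context against model size, which is consistent with the tunable parameter count mentioned in the abstract.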

Results from the Paper


Task               | Dataset   | Model                   | Metric Name        | Metric Value | Global Rank
3D Pose Estimation | Human3.6M | LiftFormer (n=243, CPN) | Average MPJPE (mm) | 44.8         | #2
3D Pose Estimation | Human3.6M | LiftFormer (n=81, CPN)  | Average MPJPE (mm) | 46.0         | #3
3D Pose Estimation | Human3.6M | LiftFormer (n=27, CPN)  | Average MPJPE (mm) | 48.6         | #4
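For reference, the MPJPE values above are mean per-joint position errors in millimetres. A minimal sketch of how this metric is typically computed, assuming root-relative poses with the root at joint index 0, is shown below:

```python
# Mean Per-Joint Position Error (MPJPE): mean Euclidean distance between
# predicted and ground-truth joints after making both poses root-relative.
# The root index and joint count are illustrative assumptions.
import numpy as np


def mpjpe(pred, gt, root_idx=0):
    """pred, gt: arrays of shape (num_frames, num_joints, 3), in millimetres."""
    pred_rel = pred - pred[:, root_idx:root_idx + 1]
    gt_rel = gt - gt[:, root_idx:root_idx + 1]
    return np.mean(np.linalg.norm(pred_rel - gt_rel, axis=-1))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(100, 17, 3)) * 100
    pred = gt + rng.normal(size=gt.shape) * 10   # simulated ~10 mm error
    print(f"MPJPE: {mpjpe(pred, gt):.1f} mm")
```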