Live Stream Temporally Embedded 3D Human Body Pose and Shape Estimation

25 Jul 2022 · Zhouping Wang, Sarah Ostadabbas

Estimating 3D human body pose and shape over a temporal sequence is critical for understanding human behavior. Despite significant recent progress in human pose estimation, most methods operate on single images or pre-recorded videos; human motion estimation on live stream video remains a rarely explored area given its particular requirements for real-time output and temporal consistency. To address this problem, we present a temporally embedded 3D human body pose and shape estimation (TePose) method that improves the accuracy and temporal consistency of pose estimation in live stream videos. TePose uses previous predictions as a bridge to feed back the error for better estimation in the current frame and to learn the correspondence between data frames and past predictions. A multi-scale spatio-temporal graph convolutional network is presented as the motion discriminator for adversarial training using datasets without any 3D labeling. We also propose a sequential data loading strategy to meet the start-to-end data processing requirement of live streaming. We demonstrate the importance of each proposed module with extensive experiments. The results show the effectiveness of TePose on widely used human pose benchmarks, achieving state-of-the-art performance.
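For intuition about the live-stream setting described above (a sliding window of frames whose current estimate is conditioned on the model's own previous predictions), here is a minimal PyTorch-style sketch. It is illustrative only: the GRU regressor, feature dimension, module names, and the 85-dimensional SMPL parameter vector are assumptions made for the example, not TePose's actual architecture.

```python
# Illustrative sketch (not the authors' code): a live-stream loop that keeps
# the last T frame features together with the predictions made for them, so
# the newest estimate is conditioned on its own history.
from collections import deque
import torch
import torch.nn as nn

T = 6            # temporal window length, as in the "T=6" models below
FEAT_DIM = 2048  # per-frame image feature size (assumption)
PARAM_DIM = 85   # assumed SMPL pose (72) + shape (10) + weak-perspective camera (3)

class TemporalRegressor(nn.Module):
    """Toy stand-in for a temporal module: consumes frame features
    concatenated with previous predictions and regresses body parameters."""
    def __init__(self):
        super().__init__()
        self.gru = nn.GRU(FEAT_DIM + PARAM_DIM, 1024, batch_first=True)
        self.head = nn.Linear(1024, PARAM_DIM)

    def forward(self, feats, prev_params):
        x = torch.cat([feats, prev_params], dim=-1)  # (1, window, FEAT+PARAM)
        out, _ = self.gru(x)
        return self.head(out[:, -1])                 # params for newest frame

def live_stream_loop(frame_source, backbone, regressor):
    feat_buf = deque(maxlen=T)   # features of the last T frames
    pred_buf = deque(maxlen=T)   # predictions available when each frame arrived
    for frame in frame_source:   # frames arrive one by one (start-to-end)
        with torch.no_grad():
            feat_buf.append(backbone(frame))          # assumed shape (FEAT_DIM,)
            while len(pred_buf) < len(feat_buf):      # pad history at start-up
                pred_buf.append(torch.zeros(PARAM_DIM))
            feats = torch.stack(list(feat_buf)).unsqueeze(0)
            prev = torch.stack(list(pred_buf)).unsqueeze(0)
            params = regressor(feats, prev)           # (1, PARAM_DIM)
            pred_buf.append(params.squeeze(0))        # feed back for the next frame
            yield params
```

The design point this sketch tries to capture is that, unlike offline video methods, a live-stream estimator only ever sees frames up to the current one, so any temporal context must come from a causal buffer plus its own earlier outputs.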


Results from the Paper


Task                      Dataset        Model               Metric                      Value   Global Rank
3D Human Pose Estimation  3DPW           TePose (T=6)        PA-MPJPE (mm)               52.3    #68
3D Human Pose Estimation  3DPW           TePose (T=6)        MPJPE (mm)                  84.6    #71
3D Human Pose Estimation  3DPW           TePose (T=6)        MPVPE (mm)                  100.3   #54
3D Human Pose Estimation  3DPW           TePose (T=6)        Acceleration Error (mm/s²)  11.4    #13
3D Human Pose Estimation  Human3.6M      TePose (T=6 3DPW)   Average MPJPE (mm)          68.6    #275
3D Human Pose Estimation  Human3.6M      TePose (T=6 3DPW)   PA-MPJPE (mm)               47.1    #86
3D Human Pose Estimation  Human3.6M      TePose (T=6 3DPW)   Acceleration Error (mm/s²)  12.1    #13
3D Human Pose Estimation  MPI-INF-3DHP   TePose (T=6 3DPW)   MPJPE (mm)                  96.2    #56
3D Human Pose Estimation  MPI-INF-3DHP   TePose (T=6 3DPW)   PA-MPJPE (mm)               63.1    #13
3D Human Pose Estimation  MPI-INF-3DHP   TePose (T=6 3DPW)   Acceleration Error (mm/s²)  16.7    #12
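For reference, the metrics in the table are the standard ones in this literature. The NumPy sketch below shows how MPJPE, PA-MPJPE, and acceleration error are typically computed; it is illustrative only, and each benchmark's official protocol (joint subsets, choice of root joint, exact units) may differ in detail.

```python
# Hedged reference sketch of the three metric families reported above.
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: mean Euclidean distance after
    root-centering both skeletons (joint 0 assumed to be the root).
    pred, gt: (J, 3) arrays."""
    pred = pred - pred[:1]
    gt = gt - gt[:1]
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-Aligned MPJPE: rigidly align pred to gt with a similarity
    transform (rotation, scale, translation via SVD) before measuring error."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, s, Vt = np.linalg.svd(p.T @ g)
    if np.linalg.det(U @ Vt) < 0:      # enforce a proper rotation (no reflection)
        Vt[-1] *= -1
        s[-1] *= -1
    R = U @ Vt
    scale = s.sum() / (p ** 2).sum()
    aligned = scale * p @ R + mu_g
    return np.linalg.norm(aligned - gt, axis=-1).mean()

def accel_error(pred_seq, gt_seq):
    """Acceleration error: mean difference between second finite differences
    of predicted and ground-truth joints, a proxy for temporal smoothness.
    pred_seq, gt_seq: (T, J, 3)."""
    a_p = pred_seq[:-2] - 2 * pred_seq[1:-1] + pred_seq[2:]
    a_g = gt_seq[:-2] - 2 * gt_seq[1:-1] + gt_seq[2:]
    return np.linalg.norm(a_p - a_g, axis=-1).mean()
```

Lower is better for all three; the acceleration error in particular reflects the temporal consistency that the live-stream setting emphasizes.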
