Cascaded deep monocular 3D human pose estimation with evolutionary training data

End-to-end deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation, yet these models may fail on unseen poses when trained on limited, fixed data. This paper proposes a novel data augmentation method that (1) scales to synthesizing massive amounts of training data (over 8 million valid 3D human poses with corresponding 2D projections) for training 2D-to-3D networks, and (2) effectively reduces dataset bias. Our method evolves a limited dataset to synthesize unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge. Extensive experiments show that our approach not only achieves state-of-the-art accuracy on the largest public benchmark, but also generalizes significantly better to unseen and rare poses. Code, pre-trained models and tools are available at this HTTPS URL.
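The evolutionary augmentation idea above can be sketched in a few lines: treat each skeleton as a vector of joint parameters, repeatedly recombine and perturb pairs of parents, and keep only children that pass a validity check. This is a minimal illustrative sketch, not the paper's implementation; the joint count, angle bound, and all function names below are assumptions for demonstration.

```python
import numpy as np

NUM_JOINTS = 16          # assumed skeleton size, not from the paper
ANGLE_LIMIT = np.pi      # crude stand-in for the paper's validity heuristics

rng = np.random.default_rng(0)

def crossover(parent_a, parent_b):
    """Exchange a random subset of joint parameters between two parents."""
    mask = rng.random(parent_a.shape) < 0.5
    return np.where(mask, parent_a, parent_b)

def mutate(pose, sigma=0.1):
    """Perturb joint parameters with small Gaussian noise."""
    return pose + rng.normal(0.0, sigma, size=pose.shape)

def is_valid(pose):
    """Toy validity check: keep parameters within a plausible range."""
    return bool(np.all(np.abs(pose) <= ANGLE_LIMIT))

def evolve(population, generations=5):
    """Grow a pose dataset by crossover + mutation, keeping valid children."""
    poses = list(population)
    for _ in range(generations):
        children = []
        for _ in range(len(poses)):
            a, b = rng.choice(len(poses), size=2, replace=False)
            child = mutate(crossover(poses[a], poses[b]))
            if is_valid(child):
                children.append(child)
        poses.extend(children)
    return poses

initial = [rng.uniform(-1.0, 1.0, NUM_JOINTS) for _ in range(8)]
augmented = evolve(initial)
print(len(augmented) > len(initial))  # the dataset grows each generation
```

In the paper, the validity check and the recombination operators are driven by a hierarchical human representation and anatomical priors rather than a simple angle bound, but the overall generate-filter-accumulate loop is the same.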

CVPR 2020
Results

Monocular 3D Human Pose Estimation on Human3.6M (TAG-Net)
  Average MPJPE (mm): 50.9 (rank #21)
  Use Video Sequence: No
  Frames Needed: 1
  Need Ground Truth 2D Pose: No

Weakly-supervised 3D Human Pose Estimation on Human3.6M (Li et al.)
  Average MPJPE (mm): 62.9 (rank #11)
  Number of Views: 1
  Number of Frames Per View: 1
  3D Annotations: S1

3D Human Pose Estimation on Human3.6M (TAG-Net)
  Average MPJPE (mm): 50.9 (rank #176)
  Using 2D ground-truth joints: No
  Multi-View or Monocular: Monocular

3D Human Pose Estimation on MPI-INF-3DHP (EvoSkeleton)
  AUC: 46.1 (rank #51)
  MPJPE: 99.7 (rank #67)
  PCK: 81.2 (rank #56)
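MPJPE, the main metric in these benchmarks, is the mean Euclidean distance (in millimeters) between predicted and ground-truth 3D joint positions, averaged over joints. A minimal sketch, assuming 17-joint poses stored as (joints, 3) arrays in millimeters:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: average Euclidean distance
    between predicted and ground-truth 3D joints, in mm."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

pred = np.zeros((17, 3))
gt = np.full((17, 3), [3.0, 4.0, 0.0])  # every joint offset by 5 mm
print(mpjpe(pred, gt))  # → 5.0
```

Benchmarks typically average this per-frame error over the whole test set; some protocols also apply a rigid alignment to the ground truth first, which this sketch omits.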
