VoxelTrack: Multi-Person 3D Human Pose Estimation and Tracking in the Wild

5 Aug 2021 · Yifu Zhang, Chunyu Wang, Xinggang Wang, Wenyu Liu, Wenjun Zeng

We present VoxelTrack for multi-person 3D pose estimation and tracking from a small number of cameras separated by wide baselines. It employs a multi-branch network to jointly estimate 3D poses and re-identification (Re-ID) features for all people in the environment. In contrast to previous efforts, which must establish cross-view correspondences from noisy 2D pose estimates, it estimates and tracks 3D poses directly from a 3D voxel-based representation constructed from multi-view images. We first discretize the 3D space into regular voxels and compute a feature vector for each voxel by averaging the body-joint heatmaps that are inversely projected from all views. We then estimate 3D poses from this voxel representation by predicting whether each voxel contains a particular body joint. Similarly, a Re-ID feature is computed for each voxel and used to track the estimated 3D poses over time. The main advantage of the approach is that it avoids making hard decisions based on individual images, so it can robustly estimate and track 3D poses even when people are severely occluded in some cameras. It outperforms state-of-the-art methods by a large margin on three public datasets: Shelf, Campus, and CMU Panoptic.
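To make the voxel construction concrete, the sketch below shows, in plain NumPy, how per-view 2D joint heatmaps can be inversely projected onto a voxel grid and averaged, followed by a single-person joint readout and a greedy Re-ID matching step for tracking. This is a minimal illustration under simplifying assumptions, not the authors' implementation: all function names are hypothetical, the nearest-neighbour sampling and per-joint argmax stand in for the paper's learned multi-branch network, and the greedy matching stands in for whatever association scheme the paper actually uses.

```python
import numpy as np

def build_voxel_heatmap_volume(heatmaps, projections, grid, img_size):
    """Average per-view 2D joint heatmaps into a shared voxel grid.

    heatmaps:    list of (J, Hh, Wh) joint heatmaps, one per camera view
    projections: list of (3, 4) camera projection matrices, one per view
    grid:        (N, 3) world coordinates of the voxel centers
    img_size:    (img_w, img_h) of the images the heatmaps were computed on
    Returns a (J, N) volume: per-joint likelihood for every voxel.
    """
    J, N = heatmaps[0].shape[0], grid.shape[0]
    volume = np.zeros((J, N))
    hom = np.concatenate([grid, np.ones((N, 1))], axis=1)  # homogeneous voxel coords

    for hm, P in zip(heatmaps, projections):
        Hh, Wh = hm.shape[1:]
        uvw = hom @ P.T                                  # project voxel centers
        uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)  # perspective divide
        # Nearest-neighbour sampling of the heatmap at each projected voxel.
        u = np.clip((uv[:, 0] * Wh / img_size[0]).astype(int), 0, Wh - 1)
        v = np.clip((uv[:, 1] * Hh / img_size[1]).astype(int), 0, Hh - 1)
        volume += hm[:, v, u]                            # accumulate per-joint scores

    return volume / len(heatmaps)                        # average over views


def localize_joints(volume, grid):
    """Single-person readout: pick the highest-scoring voxel per joint."""
    return grid[volume.argmax(axis=1)]                   # (J, 3) joint positions


def match_tracks(track_feats, det_feats, threshold=0.5):
    """Greedy cosine-similarity matching of Re-ID features across frames.

    track_feats: (T, D) features of active tracks
    det_feats:   (K, D) features of people detected in the current frame
    Returns a list of (track_index, detection_index) pairs.
    """
    def l2norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    sim = l2norm(track_feats) @ l2norm(det_feats).T      # (T, K) similarity
    pairs = []
    while sim.size and sim.max() > threshold:
        t, k = np.unravel_index(sim.argmax(), sim.shape)
        pairs.append((int(t), int(k)))
        sim[t, :], sim[:, k] = -1.0, -1.0                # exclude the matched pair
    return pairs
```

Averaging over views, rather than committing to per-image decisions, is what gives the representation its robustness to occlusion: a joint hidden in one camera still receives support from the views where it is visible.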


Datasets

Shelf, Campus, CMU Panoptic
Results from the Paper


Ranked #7 on 3D Multi-Person Pose Estimation on Panoptic (using extra training data)

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| 3D Multi-Person Pose Estimation | Campus | VoxelTrack | PCP3D | 96.7 | #7 |
| 3D Multi-Person Pose Estimation | Panoptic | VoxelTrack | Average MPJPE (mm) | 18.49 | #7 |
| 3D Multi-Person Pose Estimation | Shelf | VoxelTrack | PCP3D | 97.1 | #14 |

Methods


No methods listed for this paper.