Compressed Volumetric Heatmaps for Multi-Person 3D Pose Estimation

In this paper we present a novel approach for bottom-up multi-person 3D human pose estimation from monocular RGB images. We propose to use high-resolution volumetric heatmaps to model joint locations, devising a simple and effective compression method to drastically reduce the size of this representation. At the core of the proposed method lies our Volumetric Heatmap Autoencoder, a fully-convolutional network tasked with compressing ground-truth heatmaps into a dense intermediate representation. A second model, the Code Predictor, is then trained to predict these codes, which can be decompressed at test time to recover the original representation. Our experimental evaluation shows that our method compares favorably with the state of the art on both multi-person and single-person 3D human pose estimation datasets and, thanks to our novel compression strategy, can process full-HD images at a constant 8 fps regardless of the number of subjects in the scene. Code and models are available at https://github.com/fabbrimatteo/LoCO .
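To make the compression idea concrete, here is a minimal NumPy sketch of the pipeline: a joint is modeled as a Gaussian blob in a volumetric (depth × height × width) heatmap, the volume is compressed into a much smaller code, and the code is decompressed before reading out the joint by argmax. This is an illustrative stand-in only: the paper uses a learned fully-convolutional autoencoder (and a Code Predictor regressing the codes from images), whereas this sketch substitutes fixed average pooling and nearest-neighbour upsampling; the volume shape, pooling factor, and function names are assumptions, not the paper's actual configuration.

```python
import numpy as np

def make_heatmap(shape, joint, sigma=2.0):
    # Ground-truth volumetric heatmap: a Gaussian blob centered on the
    # joint location inside a (D, H, W) volume.
    d, h, w = np.indices(shape)
    jd, jh, jw = joint
    return np.exp(-((d - jd) ** 2 + (h - jh) ** 2 + (w - jw) ** 2)
                  / (2.0 * sigma ** 2))

def encode(vol, f=4):
    # Stand-in for the learned encoder: block average-pooling by factor f
    # along every axis, shrinking the volume by f**3.
    D, H, W = vol.shape
    return vol.reshape(D // f, f, H // f, f, W // f, f).mean(axis=(1, 3, 5))

def decode(code, f=4):
    # Stand-in for the learned decoder: nearest-neighbour upsampling back
    # to the original resolution.
    return code.repeat(f, axis=0).repeat(f, axis=1).repeat(f, axis=2)

heatmap = make_heatmap((16, 64, 64), joint=(8, 30, 20))
code = encode(heatmap)                 # 64x fewer values than the heatmap
recon = decode(code)
joint_hat = np.unravel_index(recon.argmax(), recon.shape)
```

Even this crude fixed-pooling version recovers the joint to within the pooling factor; the point of the learned autoencoder is to achieve a far better size/accuracy trade-off than such a hand-crafted scheme, so that only the compact codes need to be predicted from the image.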

PDF Abstract (CVPR 2020)

Results from the Paper


Ranked #6 on 3D Human Pose Estimation on Panoptic (using extra training data)

Task                      | Dataset   | Model | Metric Name                  | Metric Value | Global Rank
--------------------------|-----------|-------|------------------------------|--------------|------------
3D Human Pose Estimation  | Human3.6M | LoCO  | Average MPJPE (mm)           | 51.1         | #179
                          |           |       | Using 2D ground-truth joints | No           | #2
                          |           |       | Multi-View or Monocular      | Monocular    | #1
                          |           |       | PA-MPJPE                     | 43.4         | #78
3D Human Pose Estimation  | Panoptic  | LoCO  | Average MPJPE (mm)           | 69           | #6
