Towards Robust and Smooth 3D Multi-Person Pose Estimation from Monocular Videos in the Wild

ICCV 2023 · Sungchan Park, Eunyi You, Inhoe Lee, Joonseok Lee

3D pose estimation is an invaluable task in computer vision with various practical applications. In particular, 3D multi-person pose estimation from a monocular video (3DMPPE) is especially challenging and remains largely uncharted, far from being applicable to in-the-wild scenarios. We identify three unresolved issues with existing methods: lack of robustness to views unseen during training, vulnerability to occlusion, and severe jittering in the output. As a remedy, we propose POTR-3D, the first realization of a sequence-to-sequence 2D-to-3D lifting model for 3DMPPE, powered by a novel geometry-aware data augmentation strategy capable of generating unbounded amounts of data with a variety of views while taking the ground plane and occlusions into account. Through extensive experiments, we verify that the proposed model and data augmentation generalize robustly to diverse unseen views, recover poses reliably under heavy occlusion, and produce smoother, more natural outputs. The effectiveness of our approach is verified not only by state-of-the-art performance on public benchmarks but also by qualitative results on more challenging in-the-wild videos. Demo videos are available at https://www.youtube.com/@potr3d.
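For context, a 2D-to-3D lifting model takes detected 2D keypoints over a temporal window and regresses 3D joint positions for each frame. The sketch below is a minimal, generic illustration of such a sequence-to-sequence lifter, not the authors' released architecture; the module names, joint counts, and hyperparameters are assumptions chosen for illustration only.

```python
# Illustrative sketch (not the authors' code): a generic sequence-to-sequence
# 2D-to-3D lifter that maps a window of 2D keypoints for all people in each
# frame to per-frame 3D joint positions. All sizes below are assumptions.
import torch
import torch.nn as nn

class Seq2SeqLifter(nn.Module):
    def __init__(self, num_joints=17, max_people=4, dim=256, depth=4, heads=8):
        super().__init__()
        in_dim = max_people * num_joints * 2      # flattened 2D poses per frame
        out_dim = max_people * num_joints * 3     # flattened 3D poses per frame
        self.embed = nn.Linear(in_dim, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, out_dim)
        self.max_people, self.num_joints = max_people, num_joints

    def forward(self, kpts_2d):
        # kpts_2d: (batch, frames, max_people, num_joints, 2) detected 2D keypoints
        b, t = kpts_2d.shape[:2]
        x = self.embed(kpts_2d.reshape(b, t, -1))   # one token per frame
        x = self.temporal(x)                        # attend across the whole sequence
        out = self.head(x)                          # one set of 3D poses per frame
        return out.reshape(b, t, self.max_people, self.num_joints, 3)

# Usage: lift an 81-frame window of 2D detections for up to 4 people.
model = Seq2SeqLifter()
poses_3d = model(torch.randn(1, 81, 4, 17, 2))      # -> (1, 81, 4, 17, 3)
```

Processing the whole sequence at once is what allows such a lifter to exploit temporal context and produce temporally smooth 3D trajectories, which is the property the abstract emphasizes.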

Task | Dataset | Model | Metric | Value | Global Rank
3D Multi-Person Pose Estimation (root-relative) | MuPoTS-3D | POTR-3D | 3DPCK | 83.7 | #7
3D Multi-Person Pose Estimation (absolute) | MuPoTS-3D | POTR-3D | 3DPCK | 50.9 | #1
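For reference, 3DPCK (the metric in the table above) is the percentage of predicted joints that lie within a fixed distance of the ground truth; on MuPoTS-3D the threshold is commonly 150 mm. Below is a minimal sketch of that computation; the function name, array shapes, and example data are assumptions for illustration, not part of the paper.

```python
# Illustrative sketch of the 3DPCK metric: the percentage of predicted joints
# within a distance threshold (commonly 150 mm on MuPoTS-3D) of the ground truth.
import numpy as np

def pck_3d(pred, gt, threshold_mm=150.0):
    """pred, gt: arrays of shape (num_poses, num_joints, 3), in millimeters."""
    dists = np.linalg.norm(pred - gt, axis=-1)        # per-joint Euclidean error
    return 100.0 * np.mean(dists < threshold_mm)      # percentage of correct joints

# Example: predictions with ~50 mm average error score close to 100 3DPCK.
gt = np.random.rand(10, 15, 3) * 1000.0
pred = gt + np.random.randn(10, 15, 3) * 50.0
print(f"3DPCK: {pck_3d(pred, gt):.1f}")
```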
