Egocentric Pose Estimation
7 papers with code • 4 benchmarks • 6 datasets
Latest papers
EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams
In response to the existing limitations, this paper 1) introduces a new problem, i.e., 3D human motion capture from an egocentric monocular event camera with a fisheye lens, and 2) proposes the first approach to it called EventEgo3D (EE3D).
Pose Constraints for Consistent Self-supervised Monocular Depth and Ego-motion
Self-supervised monocular depth estimation approaches suffer not only from scale ambiguity but also infer temporally inconsistent depth maps w.r.t.
Scene-aware Egocentric 3D Human Pose Estimation
To this end, we propose an egocentric depth estimation network to predict the scene depth map from a wide-view egocentric fisheye camera while mitigating the occlusion of the human body with a depth-inpainting network.
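The idea of mitigating body occlusion with depth inpainting can be illustrated with a toy stand-in: mask out the pixels covered by the wearer's body and fill them from the visible scene. The function below is a hypothetical sketch, not the paper's network; a learned inpainting model would replace the simple mean fill.

```python
import numpy as np

def inpaint_body_region(depth, body_mask):
    """Toy stand-in for a depth-inpainting network: fill pixels occluded
    by the wearer's body with the mean depth of the visible scene.
    depth: (H, W) float array; body_mask: (H, W) bool array (True = body)."""
    filled = depth.copy()
    filled[body_mask] = depth[~body_mask].mean()
    return filled
```

In the actual pipeline a network predicts plausible scene structure behind the body rather than a constant fill, but the interface is the same: occluded depth in, completed scene depth out.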
Dynamics-Regulated Kinematic Policy for Egocentric Pose Estimation
By comparing the pose instructed by the kinematic model against the pose generated by the dynamics model, we can use their misalignment to further improve the kinematic model.
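The feedback loop described above can be sketched in a few lines: the per-joint residual between the dynamics-simulated pose and the kinematic target serves as a correction signal. The names and the blending factor `alpha` below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def pose_misalignment(kinematic_pose, dynamics_pose):
    # per-joint residual between the physics-simulated pose and the
    # pose instructed by the kinematic model
    return dynamics_pose - kinematic_pose

def refine_kinematic_pose(kinematic_pose, dynamics_pose, alpha=0.5):
    # nudge the kinematic estimate toward the physically plausible pose;
    # alpha controls how strongly the dynamics feedback is trusted
    return kinematic_pose + alpha * pose_misalignment(kinematic_pose, dynamics_pose)
```

In the paper this correction is learned rather than a fixed blend, but the structure of the signal, dynamics output minus kinematic output, is the same.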
Estimating Egocentric 3D Human Pose in Global Space
Furthermore, these methods suffer from limited accuracy and temporal instability due to ambiguities caused by the monocular setup and the severe occlusion in a strongly distorted egocentric perspective.
SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera
The quantitative evaluation, on synthetic and real-world datasets, shows that our strategy leads to substantial improvements in accuracy over state-of-the-art egocentric approaches.
Ego-Pose Estimation and Forecasting as Real-Time PD Control
We propose the use of a proportional-derivative (PD) control based policy learned via reinforcement learning (RL) to estimate and forecast 3D human pose from egocentric videos.
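The core of a PD-control-based policy is simple: the policy outputs target joint angles, and a proportional-derivative law converts the gap between target and current state into joint torques. The sketch below uses assumed gains and unit inertia for illustration; in the paper the targets come from an RL policy and the torques drive a full physics simulation.

```python
import numpy as np

def pd_torque(target_pose, current_pose, current_vel, kp=300.0, kd=30.0):
    """Proportional-derivative control: torque proportional to the pose
    error, damped by the current joint velocity."""
    return kp * (target_pose - current_pose) - kd * current_vel

# toy 3-joint rollout: drive an offset pose to a zero target
# (semi-implicit Euler integration, unit inertia for illustration)
pose = np.array([0.5, -0.3, 0.2])
vel = np.zeros(3)
dt = 0.01
for _ in range(500):
    tau = pd_torque(np.zeros(3), pose, vel)
    vel += tau * dt
    pose += vel * dt
```

With these gains the joints converge to the target within the rollout; a humanoid simulator applies the same law per joint, with gains tuned per degree of freedom.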