3D Hand Pose Estimation

64 papers with code • 5 benchmarks • 16 datasets

Most implemented papers

3D Hand Pose Estimation using Simulation and Partial-Supervision with a Shared Latent Space

masabdi/LSPS BMVC 2018

In this paper, we propose a novel method that predicts the 3D position of the hand using both synthetic and partially-labeled real data.

MURAUER: Mapping Unlabeled Real Data for Label AUstERity

poier/murauer 23 Nov 2018

In this work, we remove this requirement by learning to map from the features of real data to the features of synthetic data mainly using a large amount of synthetic and unlabeled real data.
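A minimal sketch of the feature-mapping idea described above, assuming a simple MLP mapper and a toy L2 alignment objective on paired features; the module name, dimensions, and loss are illustrative assumptions, not MURAUER's actual architecture or training losses.

```python
import torch
import torch.nn as nn

class FeatureMapper(nn.Module):
    """Maps features extracted from real depth images into the synthetic feature space
    (hypothetical module; the paper's actual mapping and losses differ in detail)."""
    def __init__(self, feat_dim=1024):
        super().__init__()
        self.map = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, real_feat):
        return self.map(real_feat)

# Toy alignment step: for a few paired real/synthetic samples of the same pose,
# pull the mapped real feature toward the corresponding synthetic feature.
mapper = FeatureMapper()
real_feat = torch.randn(8, 1024)    # features from the real-image encoder
synth_feat = torch.randn(8, 1024)   # features from the synthetic-image encoder
loss = nn.functional.mse_loss(mapper(real_feat), synth_feat)
loss.backward()
```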

Pixel-wise Regression: 3D Hand Pose Estimation via Spatial-form Representation and Differentiable Decoder

IcarusWizard/PixelwiseRegression 6 May 2019

To use our method, we build a model in which we design a particular SFR and its correlative DD that divide the 3D joint coordinates into two parts, plane coordinates and depth coordinates, and use two modules, Plane Regression (PR) and Depth Regression (DR), to handle them respectively.
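As a rough illustration of this plane/depth split, the sketch below reads plane (u, v) coordinates from per-joint heatmaps via a soft-argmax and pools a per-joint depth map under the same heatmaps. The module names, tensor shapes, and pooling scheme are assumptions for illustration, not the repository's interface.

```python
import torch
import torch.nn as nn

class PlaneRegression(nn.Module):
    """Predicts per-joint heatmaps and reads off plane (u, v) coordinates via soft-argmax."""
    def __init__(self, in_ch, num_joints):
        super().__init__()
        self.heatmap = nn.Conv2d(in_ch, num_joints, 1)

    def forward(self, feat):
        b, _, h, w = feat.shape
        hm = self.heatmap(feat).flatten(2).softmax(-1).view(b, -1, h, w)
        ys = torch.linspace(0, 1, h, device=feat.device)
        xs = torch.linspace(0, 1, w, device=feat.device)
        u = (hm.sum(2) * xs).sum(-1)          # expected column per joint
        v = (hm.sum(3) * ys).sum(-1)          # expected row per joint
        return torch.stack([u, v], dim=-1), hm

class DepthRegression(nn.Module):
    """Predicts a per-joint depth map and pools it under the plane heatmap."""
    def __init__(self, in_ch, num_joints):
        super().__init__()
        self.depth = nn.Conv2d(in_ch, num_joints, 1)

    def forward(self, feat, hm):
        return (self.depth(feat) * hm).sum(dim=(2, 3))   # heatmap-weighted depth per joint

feat = torch.randn(2, 64, 32, 32)
uv, hm = PlaneRegression(64, 21)(feat)   # plane coordinates, shape (2, 21, 2)
z = DepthRegression(64, 21)(feat, hm)    # depth coordinates, shape (2, 21)
```

Both branches stay differentiable, so the plane and depth parts can be trained jointly from the same feature map.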

Exploiting Spatial-Temporal Relationships for 3D Pose Estimation via Graph Convolutional Networks

vanoracai/Exploiting-Spatial-temporal-Relationships-for-3D-Pose-Estimation-via-Graph-Convolutional-Networks ICCV 2019

Despite great progress in 3D pose estimation from single-view images or videos, it remains a challenging task due to the substantial depth ambiguity and severe self-occlusions.

HandAugment: A Simple Data Augmentation Method for Depth-Based 3D Hand Pose Estimation

wozhangzhaohui/HandAugment 3 Jan 2020

Our method has two main parts: first, we propose a scheme of two-stage neural networks.

Epipolar Transformers

yihui-he/epipolar-transformers CVPR 2020

The intuition is: given a 2D location p in the current view, we would like to first find its corresponding point p' in a neighboring view, and then combine the features at p' with the features at p, thus leading to a 3D-aware feature at p. Inspired by stereo matching, the epipolar transformer leverages epipolar constraints and feature matching to approximate the features at p'.
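A simplified sketch of that matching step: sample features along p's epipolar line in the neighboring view, use dot-product attention to approximate the feature at the corresponding point p', and fuse it with the feature at p. The sampling, attention, and fusion below are illustrative simplifications, and the computation of the epipolar line from camera geometry is omitted.

```python
import torch
import torch.nn.functional as F

def epipolar_fuse(feat_ref, feat_src, p, line_samples):
    """
    feat_ref:     (C, H, W) feature map of the current view
    feat_src:     (C, H, W) feature map of the neighboring view
    p:            (2,) pixel location in the current view, normalized to [-1, 1]
    line_samples: (K, 2) candidate locations on p's epipolar line in the neighboring
                  view, normalized to [-1, 1] (derived from the fundamental matrix)
    """
    C = feat_ref.shape[0]
    # feature at p in the current view
    f_p = F.grid_sample(feat_ref[None], p.view(1, 1, 1, 2), align_corners=True).view(C)
    # features sampled along the epipolar line in the neighboring view
    f_line = F.grid_sample(feat_src[None], line_samples.view(1, 1, -1, 2),
                           align_corners=True).view(C, -1)           # (C, K)
    # dot-product attention over the line approximates the feature at the match p'
    attn = (f_p @ f_line / C ** 0.5).softmax(-1)                     # (K,)
    f_match = f_line @ attn                                           # (C,)
    # simple fusion of the 2D feature with the 3D-aware matched feature
    return f_p + f_match

feat_ref = torch.randn(64, 32, 32)
feat_src = torch.randn(64, 32, 32)
p = torch.tensor([0.1, -0.2])
line = torch.stack([torch.linspace(-1, 1, 16), torch.linspace(-0.5, 0.5, 16)], dim=-1)
fused = epipolar_fuse(feat_ref, feat_src, p, line)   # 3D-aware feature at p, shape (64,)
```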

JGR-P2O: Joint Graph Reasoning based Pixel-to-Offset Prediction Network for 3D Hand Pose Estimation from a Single Depth Image

fanglinpu/JGR-P2O ECCV 2020

The key ideas are two-fold: a) explicitly modeling the dependencies among joints and the relations between the pixels and the joints for better local feature representation learning; b) unifying the dense pixel-wise offset predictions and direct joint regression for end-to-end training.
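Idea (b) can be sketched as follows: each pixel predicts an offset toward every joint plus a confidence, and the confidence-weighted aggregation yields joint coordinates directly, so dense prediction and direct regression share one differentiable head. The graph-reasoning part is omitted here, and all names and shapes are illustrative assumptions rather than the JGR-P2O code.

```python
import torch
import torch.nn as nn

class PixelToOffsetHead(nn.Module):
    """Dense per-pixel offsets whose confidence-weighted aggregation gives joint estimates."""
    def __init__(self, in_ch, num_joints):
        super().__init__()
        self.offset = nn.Conv2d(in_ch, num_joints * 2, 1)   # per-pixel (dx, dy) toward each joint
        self.weight = nn.Conv2d(in_ch, num_joints, 1)        # per-pixel confidence for each joint

    def forward(self, feat):
        b, _, h, w = feat.shape
        ys, xs = torch.meshgrid(torch.arange(h, dtype=feat.dtype, device=feat.device),
                                torch.arange(w, dtype=feat.dtype, device=feat.device),
                                indexing="ij")
        grid = torch.stack([xs, ys])                                   # (2, H, W) pixel locations
        off = self.offset(feat).view(b, -1, 2, h, w)                   # (B, J, 2, H, W)
        w_map = self.weight(feat).flatten(2).softmax(-1).view(b, -1, 1, h, w)
        pred = grid + off                                              # pixel location + offset
        return (pred * w_map).sum(dim=(3, 4))                          # (B, J, 2) joint estimates

feat = torch.randn(2, 64, 24, 24)
joints = PixelToOffsetHead(64, 14)(feat)   # 2D joint estimates, shape (2, 14, 2)
```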

AWR: Adaptive Weighting Regression for 3D Hand Pose Estimation

Elody-07/AWR-Adaptive-Weighting-Regression 19 Jul 2020

In this paper, we propose an adaptive weighting regression (AWR) method to leverage the advantages of both detection-based and regression-based methods.
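A toy illustration of the adaptive weighting idea, assuming a dense map of per-pixel 3D coordinate estimates and an input-dependent weight map; the names and shapes are hypothetical, not the released AWR code. A sharply peaked weight map behaves like a detection-based method, while a smooth one behaves like global regression.

```python
import torch

def adaptive_weighting(dense_coords, weight_logits):
    """
    dense_coords:  (B, J, 3, H, W) per-pixel 3D coordinate estimates for each joint
    weight_logits: (B, J, H, W)    unnormalized, input-dependent weights
    """
    b, j, _, h, w = dense_coords.shape
    weights = weight_logits.flatten(2).softmax(-1).view(b, j, 1, h, w)
    return (dense_coords * weights).sum(dim=(3, 4))   # (B, J, 3) aggregated joint coordinates

coords = torch.randn(2, 21, 3, 16, 16)
logits = torch.randn(2, 21, 16, 16)
joints = adaptive_weighting(coords, logits)   # (2, 21, 3)
```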

Body2Hands: Learning to Infer 3D Hands from Conversational Gesture Body Dynamics

facebookresearch/body2hands CVPR 2021

We demonstrate the efficacy of our method on hand gesture synthesis from body motion input, and as a strong body prior for single-view image-based 3D hand pose estimation.

I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image

mks0601/I2L-MeshNet_RELEASE ECCV 2020

Most of the previous image-based 3D human pose and mesh estimation methods estimate parameters of the human mesh model from an input image.