The EgoHands dataset contains 48 Google Glass videos of complex, first-person interactions between two people. Its main purpose is to enable better, data-driven approaches to understanding hands in first-person computer vision. The dataset offers:

  • high-quality, pixel-level segmentations of hands (see the annotation sketch below)
  • the possibility to semantically distinguish between the observer’s hands and someone else’s hands, as well as left and right hands
  • virtually unconstrained hand poses, as actors freely engage in a set of joint activities
  • a large amount of data: 15,053 ground-truth-labeled hands
Source: Lending A Hand: Detecting Hands and Recognizing Activities in Complex Egocentric Interactions
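Concretely, each labeled frame provides polygon outlines for up to four hand classes: the observer’s ("own") left and right hands and the partner’s ("other") left and right hands. The sketch below shows one way such polygon annotations could be rasterized into a per-pixel label mask. The dictionary layout, class names, and vertex coordinates are illustrative assumptions, not the dataset’s actual annotation format; 1280x720 matches the dataset’s 720p Google Glass frames.

```python
# Minimal sketch: rasterize EgoHands-style polygon hand annotations into a
# per-pixel label mask. The annotation layout below is hypothetical.
import numpy as np
from PIL import Image, ImageDraw

# Hypothetical per-frame annotation: hand class -> list of (x, y) vertices.
frame_polygons = {
    "own_left":    [(420, 650), (500, 560), (590, 640), (520, 710)],
    "own_right":   [(820, 660), (900, 570), (990, 650), (910, 715)],
    "other_left":  [(700, 150), (760, 90),  (830, 160), (770, 220)],
    "other_right": [(330, 140), (390, 80),  (460, 150), (400, 210)],
}

# Fixed integer label per hand class; 0 is background.
CLASS_IDS = {"own_left": 1, "own_right": 2, "other_left": 3, "other_right": 4}

def rasterize(polygons, width=1280, height=720):
    """Rasterize hand polygons into a (height, width) uint8 label mask."""
    mask = Image.new("L", (width, height), 0)      # start with all background
    draw = ImageDraw.Draw(mask)
    for name, vertices in polygons.items():
        if len(vertices) >= 3:                     # a polygon needs >= 3 points
            draw.polygon(vertices, fill=CLASS_IDS[name])
    return np.asarray(mask)

mask = rasterize(frame_polygons)
print(mask.shape, np.unique(mask))                 # (720, 1280) [0 1 2 3 4]
```

From such a mask, the semantic distinctions listed above reduce to simple comparisons, e.g. `(mask == 1) | (mask == 2)` selects the observer’s hands.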

License

  • Unknown
