How2Sign is a multimodal and multiview continuous American Sign Language (ASL) dataset consisting of a parallel corpus of more than 80 hours of sign language videos and a set of corresponding modalities, including speech, English transcripts, and depth. A three-hour subset was further recorded in the Panoptic Studio, enabling detailed 3D pose estimation.
28 PAPERS • 3 BENCHMARKS
An artificial corpus built using grammatical dependency rules to address the lack of resources for Sign Language.
1 PAPER • 1 BENCHMARK
ASL-Phono introduces a novel linguistics-based representation that describes the signs in the ASLLVD dataset in terms of a set of attributes of American Sign Language phonology.
0 PAPERS • NO BENCHMARKS YET