LSA-T: The first continuous Argentinian Sign Language dataset for Sign Language Translation

Sign language translation (SLT) is an active field of study that encompasses human-computer interaction, computer vision, natural language processing and machine learning. Progress in this field could lead to greater integration of deaf people. This paper presents, to the best of our knowledge, the first continuous Argentinian Sign Language (LSA) dataset. It contains 14,880 sentence-level videos of LSA extracted from the CN Sordos YouTube channel, with labels and keypoint annotations for each signer. We also present a method for inferring the active signer, a detailed analysis of the characteristics of the dataset, a visualization tool to explore the dataset, and a neural SLT model to serve as a baseline for future experiments.
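The page does not describe how the active signer is inferred, only that a method for it is presented. As a rough illustration of the general idea only (not the paper's method), one simple heuristic is to pick the signer whose keypoints move the most across frames. The array shape and the function name in the sketch below are assumptions made for the example.

```python
import numpy as np


def most_active_signer(keypoints: np.ndarray) -> int:
    """Return the index of the signer with the largest total keypoint motion.

    keypoints: hypothetical array of shape (num_signers, num_frames, num_keypoints, 2)
               holding (x, y) coordinates per frame for each signer.
    """
    # Frame-to-frame displacement of every keypoint, summed per signer.
    motion = np.abs(np.diff(keypoints, axis=1)).sum(axis=(1, 2, 3))
    return int(np.argmax(motion))


# Hypothetical example: 2 signers, 100 frames, 33 keypoints each.
kp = np.random.rand(2, 100, 33, 2)
print(most_active_signer(kp))
```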


Datasets


Introduced in the Paper:

LSA-T

Results from the Paper


Task: Sign Language Translation
Dataset: LSA-T
Model: Keypoints-Transformer-UNLP
Metric Name: Word Error Rate (WER)
Metric Value: 0.9392
Global Rank: #1
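Word Error Rate (WER) is the word-level edit distance (substitutions, insertions, deletions) between the predicted sentence and the reference, divided by the number of reference words. The sketch below is a minimal reference implementation of this standard metric, not the evaluation code used for the benchmark, and the example sentences are made up.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,          # deletion
                d[i][j - 1] + 1,          # insertion
                d[i - 1][j - 1] + cost,   # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


# Made-up example pair (Spanish glosses), for illustration only.
print(word_error_rate("el perro corre por el parque",
                      "el gato corre en parque"))  # 0.5
```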

Methods


No methods listed for this paper.