Evaluation of Manual and Non-manual Components for Sign Language Recognition

The motivation behind this work lies in the need to differentiate between similar signs that differ only in their non-manual components. To this end, we recorded full sentences signed by five native signers and extracted 5,200 isolated sign samples of twenty frequently used signs in Kazakh-Russian Sign Language (K-RSL) that share similar manual components but differ in non-manual components (i.e., facial expressions, eyebrow height, mouth shape, and head orientation). We conducted a series of evaluations to investigate whether non-manual components improve sign recognition accuracy. Among standard machine learning approaches, Logistic Regression produced the best results: 78.2% accuracy on the 20-sign dataset and 77.9% accuracy on the 2-class dataset (statement vs. question). The dataset can be downloaded from https://krslproject.github.io/krsl20/
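As a rough illustration of the evaluation setup described above (not the authors' released code), the sketch below trains a scikit-learn Logistic Regression classifier on pre-extracted per-sample feature vectors that concatenate manual (hand) and non-manual (face/head) keypoints. The feature file name `krsl20_features.npz` and its array layout are assumptions made for illustration; in practice the features would come from a pose/face keypoint extractor applied to the video samples.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical pre-extracted features: X has one row per isolated sign sample
# (concatenated hand + face/head keypoint coordinates), y holds either the
# sign label (20 classes) or the statement/question label (2 classes).
data = np.load("krsl20_features.npz")  # placeholder file name, not from the paper
X, y = data["X"], data["y"]

# Hold out a stratified test split so every sign class is represented.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Logistic Regression was the best-performing standard ML baseline in the paper;
# hyperparameters here are defaults, not the authors' settings.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Comparing this classifier trained on manual features alone against one trained on the concatenated manual plus non-manual features is one straightforward way to measure the contribution of non-manual components that the paper investigates.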


Datasets

K-RSL (Kazakh-Russian Sign Language dataset), introduced in this paper: https://krslproject.github.io/krsl20/


Methods

Logistic Regression (best-performing among the standard machine learning approaches evaluated)