Touchless Typing using Head Movement-based Gestures

24 Jan 2020 · Shivam Rustagi, Aakash Garg, Pranay Raj Anand, Rajesh Kumar, Yaman Kumar, Rajiv Ratn Shah

Physical contact-based typing interfaces are not suitable for people with upper limb disabilities such as quadriplegia. This paper therefore proposes a touchless typing interface that uses an on-screen QWERTY keyboard and a front-facing smartphone camera mounted on a stand. The keys of the keyboard are grouped into nine color-coded clusters. Users point at the letters they want to type simply by moving their head, and the camera records these head movements. The recorded gestures are then translated into a cluster sequence. The translation module is implemented using CNN-RNN, Conv3D, and a modified GRU-based model that uses pre-trained embeddings rich in head-pose features. The performance of these models was evaluated under four different scenarios on a dataset of 2,234 video sequences collected from 22 users. The modified GRU-based model outperforms the standard CNN-RNN and Conv3D models in three of the four scenarios. The results are encouraging and suggest promising directions for future research.
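As a rough illustration of the GRU-based translation module described above, the following sketch shows how a recurrent model might map a sequence of per-frame head-pose embeddings to one of the nine key clusters. It assumes the embeddings have already been extracted by a pre-trained network; the class name, dimensions, and hyperparameters are illustrative and not taken from the authors' implementation.

```python
import torch
import torch.nn as nn

NUM_CLUSTERS = 9  # the keyboard's keys are grouped into nine color-coded clusters

class GestureToCluster(nn.Module):
    """Hypothetical GRU classifier over pre-extracted head-pose embeddings."""

    def __init__(self, embed_dim=128, hidden_dim=256):
        super().__init__()
        # The GRU consumes one head-pose embedding per video frame
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # The final hidden state is classified into one of the nine clusters
        self.head = nn.Linear(hidden_dim, NUM_CLUSTERS)

    def forward(self, x):
        # x: (batch, frames, embed_dim) sequence of head-pose embeddings
        _, h_n = self.gru(x)       # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])  # logits: (batch, NUM_CLUSTERS)

# Usage: a 30-frame gesture clip (random tensor stands in for real embeddings)
model = GestureToCluster()
clip = torch.randn(1, 30, 128)
predicted_cluster = model(clip).argmax(dim=-1)
```

In practice, a full gesture video would be segmented into one clip per intended key, each clip producing one cluster prediction, so that the whole recording translates into a cluster sequence as the abstract describes.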


Categories

Human-Computer Interaction (I.2.7)
