Personalized breath based biometric authentication with wearable multimodality

29 Oct 2021 · Manh-Ha Bui, Viet-Anh Tran, Cuong Pham

Breath, captured as nose sound, has been shown to be a potential biometric for personal identification and verification. In this paper, we show that information from additional modalities, captured by motion sensors on the chest, can further improve performance over audio features alone. Our work makes three main contributions: hardware creation, dataset publication, and proposed multimodal models. Specifically, we design new hardware consisting of an acoustic sensor that collects audio features from the nose, together with an accelerometer and gyroscope that capture chest movement caused by an individual's breathing. Using this hardware, we publish a dataset collected over multiple sessions from different volunteers, where each session includes three common gestures: normal, deep, and strong breathing. Finally, we experiment with two multimodal models based on Convolutional Long Short-Term Memory (CNN-LSTM) and Temporal Convolutional Network (TCN) architectures. The results demonstrate the suitability of our new hardware for both verification and identification tasks.
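To make the multimodal idea concrete, below is a minimal PyTorch sketch of a two-branch CNN-LSTM of the kind the abstract describes: a 2-D convolutional branch over nose-audio spectrograms fused with a 1-D convolutional branch over 6-axis chest IMU windows, followed by a shared LSTM. The paper does not publish this code; every layer size, input shape, and the `n_subjects` parameter here are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultimodalCNNLSTM(nn.Module):
    """Hypothetical two-branch CNN-LSTM for breath biometrics.

    Fuses log-mel spectrograms from a nose-mounted acoustic sensor with
    6-axis IMU (accelerometer + gyroscope) chest-motion windows. All
    layer sizes are assumptions for illustration, not the paper's values.
    """

    def __init__(self, n_mels=64, imu_channels=6, n_subjects=20, embed_dim=128):
        super().__init__()
        # Audio branch: 2-D convolutions over (mel bins x time frames).
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Motion branch: 1-D convolutions over the raw IMU time series.
        self.imu_cnn = nn.Sequential(
            nn.Conv1d(imu_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Shared LSTM over per-frame features concatenated across modalities.
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4) + 32,
                            hidden_size=embed_dim, batch_first=True)
        self.classifier = nn.Linear(embed_dim, n_subjects)  # identification head

    def forward(self, audio, imu):
        # audio: (B, 1, n_mels, T_a); imu: (B, imu_channels, T_m)
        a = self.audio_cnn(audio)              # (B, 32, n_mels//4, T_a//4)
        a = a.permute(0, 3, 1, 2).flatten(2)   # (B, T_a//4, 32 * n_mels//4)
        m = self.imu_cnn(imu).permute(0, 2, 1) # (B, T_m//2, 32)
        # Align the two time axes by trimming to the shorter one (an assumption;
        # the paper may instead resample or window the streams to equal length).
        t = min(a.size(1), m.size(1))
        fused = torch.cat([a[:, :t], m[:, :t]], dim=-1)
        _, (h, _) = self.lstm(fused)
        return self.classifier(h[-1])          # per-subject logits


# Example usage with dummy tensors: an 8-sample batch of 2-second windows.
model = MultimodalCNNLSTM()
logits = model(torch.randn(8, 1, 64, 128), torch.randn(8, 6, 100))
print(logits.shape)  # torch.Size([8, 20])
```

For the verification task, the classification head could be replaced with a distance or similarity score between the LSTM embedding and an enrolled template; for the TCN variant, the LSTM would be swapped for a stack of dilated causal 1-D convolutions over the same fused features.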
