A Study of Multimodal Person Verification Using Audio-Visual-Thermal Data

In this paper, we study an approach to multimodal person verification using audio, visual, and thermal modalities. The combination of audio and visual modalities has already been shown to be effective for robust person verification. From this perspective, we investigate the impact of further increasing the number of modalities by adding thermal images. In particular, we implemented unimodal, bimodal, and trimodal verification systems using state-of-the-art deep learning architectures and compared their performance under clean and noisy conditions. We also compared two popular fusion approaches based on simple score averaging and the soft attention mechanism. The experiments conducted on the SpeakingFaces dataset demonstrate the superior performance of the trimodal verification system. Specifically, on the easy test set, the trimodal system outperforms the best unimodal and bimodal systems by over 50% and 18% in relative equal error rate, respectively, under both clean and noisy conditions. On the hard test set, the trimodal system outperforms the best unimodal and bimodal systems by over 40% and 13% in relative equal error rate, respectively, under both clean and noisy conditions. To enable reproducibility of the experiments and facilitate research into multimodal person verification, we have made our code, pretrained models, and preprocessed dataset freely available in our GitHub repository.
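The two fusion strategies mentioned above can be sketched briefly. The snippet below is a minimal illustration, not the authors' exact architecture: it assumes per-modality verification scores for score averaging and 512-dimensional per-modality embeddings for the soft attention fusion; all layer sizes and tensor shapes are illustrative.

```python
# Illustrative sketch of score averaging vs. soft attention fusion
# over audio, visual, and thermal modalities (assumed shapes/sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F


def score_fusion(scores: torch.Tensor) -> torch.Tensor:
    """Average per-modality verification scores; scores: [batch, n_modalities]."""
    return scores.mean(dim=1)


class SoftAttentionFusion(nn.Module):
    """Weight per-modality embeddings with learned soft attention, then sum."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.attention = nn.Linear(embed_dim, 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: [batch, n_modalities, embed_dim]
        weights = F.softmax(self.attention(embeddings), dim=1)  # [batch, n_modalities, 1]
        return (weights * embeddings).sum(dim=1)                # [batch, embed_dim]


if __name__ == "__main__":
    # Hypothetical audio, visual, and thermal embeddings for a batch of 4 trials.
    audio, visual, thermal = (torch.randn(4, 512) for _ in range(3))
    fused = SoftAttentionFusion(512)(torch.stack([audio, visual, thermal], dim=1))
    print(fused.shape)  # torch.Size([4, 512])
```

In this reading, score averaging fuses the modalities at the decision level, while soft attention fuses them at the embedding level by letting the network learn how much each modality should contribute per trial.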
