Role-specific Language Models for Processing Recorded Neuropsychological Exams

NAACL 2018  ·  Tuka Al Hanai, Rhoda Au, James Glass

Neuropsychological examinations are an important screening tool for the presence of cognitive conditions (e.g. Alzheimer's, Parkinson's Disease), and require a trained tester to conduct the exam through spoken interactions with the subject. While audio is relatively easy to record, it remains a challenge to automatically diarize (who spoke when?), decode (what did they say?), and assess a subject's cognitive health. This paper demonstrates a method to determine the cognitive health (impaired or not) of 92 subjects, from audio that was diarized using an automatic speech recognition system trained on TED talks and on the structured language used by testers and subjects. Using leave-one-out cross validation and logistic regression modeling we show that even with noisily decoded data (81% WER) we can still perform accurate enough diarization (0.02% confusion rate) to determine the cognitive state of a subject (0.76 AUC).
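The evaluation setup described above (leave-one-out cross-validation with a logistic regression classifier, scored by AUC) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature extraction from the diarized and decoded audio is omitted, and the feature matrix `X`, labels `y`, and feature dimensionality are synthetic placeholders.

```python
# Minimal sketch: leave-one-out cross-validation with logistic regression,
# evaluated with AUC over the pooled held-out predictions.
# X and y are synthetic stand-ins for per-subject features and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(92, 10))       # 92 subjects, hypothetical feature vectors
y = rng.integers(0, 2, size=92)     # impaired (1) vs. not impaired (0), synthetic

# Hold out one subject at a time, train on the rest, and keep the
# predicted probability for the held-out subject.
scores = np.zeros(len(y))
for train_idx, test_idx in LeaveOneOut().split(X):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    scores[test_idx] = clf.predict_proba(X[test_idx])[:, 1]

# A single AUC computed over all held-out predictions.
print("LOO-CV AUC:", roc_auc_score(y, scores))
```

With real features derived from the decoded tester and subject speech, the same loop yields one held-out score per subject, which is how a single AUC (such as the 0.76 reported) can be computed for a 92-subject cohort.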
