Towards Interpretable Sleep Stage Classification Using Cross-Modal Transformers

Accurate sleep stage classification is important for sleep health assessment. In recent years, several machine-learning-based sleep staging algorithms have been developed, and deep-learning-based algorithms in particular have achieved performance on par with human annotation. Despite this improved performance, a limitation of most deep-learning-based algorithms is their black-box behavior, which has limited their use in clinical settings. Here, we propose a cross-modal transformer, a transformer-based method for sleep stage classification. The proposed cross-modal transformer consists of a novel cross-modal transformer encoder architecture together with a multi-scale one-dimensional convolutional neural network for automatic representation learning. Our method outperforms state-of-the-art methods and eliminates the black-box behavior of deep-learning models by utilizing the interpretability of the attention modules. Furthermore, it provides considerable reductions in the number of parameters and training time compared to state-of-the-art methods. Our code is available at https://github.com/Jathurshan0330/Cross-Modal-Transformer, and a demo of our work can be found at https://bit.ly/Cross_modal_transformer_demo.
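
The architecture described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example, not the authors' implementation (see the linked repository for that): it assumes two input modalities (e.g., single-channel EEG and EOG from 30-second Sleep-EDF epochs sampled at 100 Hz), a multi-scale 1-D CNN feature extractor, and a cross-modal attention layer whose attention weights can be inspected for interpretability. All module names, layer sizes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn


class MultiScaleConv1D(nn.Module):
    """Parallel 1-D convolutions with different kernel sizes, concatenated
    along the channel axis, to capture features at multiple temporal scales."""

    def __init__(self, in_channels=1, out_channels=32, kernel_sizes=(3, 7, 15)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_channels, out_channels, k, padding=k // 2),
                nn.BatchNorm1d(out_channels),
                nn.GELU(),
                nn.MaxPool1d(4),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                       # x: (batch, in_channels, time)
        return torch.cat([branch(x) for branch in self.branches], dim=1)


class CrossModalEncoderLayer(nn.Module):
    """One encoder layer in which tokens of one modality attend to tokens of
    the other modality (cross-attention) before a feed-forward block."""

    def __init__(self, d_model=96, n_heads=4, d_ff=256, dropout=0.1):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                dropout=dropout, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                                nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, query_tokens, context_tokens):
        # Queries come from one modality; keys/values come from the other.
        attended, attn_weights = self.cross_attn(query_tokens,
                                                 context_tokens, context_tokens)
        x = self.norm1(query_tokens + attended)
        x = self.norm2(x + self.ff(x))
        # attn_weights can be visualized to interpret the prediction.
        return x, attn_weights


class CrossModalSleepStager(nn.Module):
    """Hypothetical epoch-level classifier: per-modality multi-scale CNNs,
    symmetric cross-modal attention, and a 5-class sleep-stage head."""

    def __init__(self, d_model=96, n_classes=5):
        super().__init__()
        self.eeg_cnn = MultiScaleConv1D(out_channels=d_model // 3)
        self.eog_cnn = MultiScaleConv1D(out_channels=d_model // 3)
        self.eeg_layer = CrossModalEncoderLayer(d_model)
        self.eog_layer = CrossModalEncoderLayer(d_model)
        self.head = nn.Linear(2 * d_model, n_classes)

    def forward(self, eeg, eog):                # each: (batch, 1, time)
        eeg_tok = self.eeg_cnn(eeg).transpose(1, 2)   # (batch, tokens, d_model)
        eog_tok = self.eog_cnn(eog).transpose(1, 2)
        eeg_out, eeg_attn = self.eeg_layer(eeg_tok, eog_tok)
        eog_out, eog_attn = self.eog_layer(eog_tok, eeg_tok)
        pooled = torch.cat([eeg_out.mean(dim=1), eog_out.mean(dim=1)], dim=1)
        return self.head(pooled), (eeg_attn, eog_attn)


if __name__ == "__main__":
    model = CrossModalSleepStager()
    eeg = torch.randn(2, 1, 3000)   # 30-s epoch at 100 Hz
    eog = torch.randn(2, 1, 3000)
    logits, attn = model(eeg, eog)
    print(logits.shape)             # torch.Size([2, 5])
```

Returning the attention weights alongside the logits reflects the interpretability claim in the abstract: the weights indicate which parts of one modality each token of the other modality attends to when a stage is predicted.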

Datasets

Sleep-EDF

Results

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Automatic Sleep Stage Classification | Sleep-EDF Sequence | Cross-Modal Transformer-15 | Number of parameters (M) | 4.05 | #2 |
| Automatic Sleep Stage Classification | Sleep-EDF Sequence | Cross-Modal Transformer-15 | Accuracy | 84.3 | #2 |
| Automatic Sleep Stage Classification | Sleep-EDF Sequence | Cross-Modal Transformer-15 | Cohen’s Kappa score | 0.785 | #1 |
| Automatic Sleep Stage Classification | Sleep-EDF Epoch | Cross-Modal Transformer | Number of parameters (M) | 0.32 | #1 |
| Automatic Sleep Stage Classification | Sleep-EDF Epoch | Cross-Modal Transformer | Accuracy | 80.8 | #4 |
| Automatic Sleep Stage Classification | Sleep-EDF Epoch | Cross-Modal Transformer | Cohen’s Kappa score | 0.736 | #2 |
