Structure-Preserving Transformers for Sequences of SPD Matrices

14 Sep 2023 · Mathieu Seraphim, Alexis Lechervy, Florian Yger, Luc Brun, Olivier Etard

In recent years, Transformer-based self-attention mechanisms have been successfully applied to the analysis of a variety of context-reliant data types, from text to images and beyond, including data from non-Euclidean geometries. In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite (SPD) matrices while preserving their Riemannian geometry throughout the analysis. We apply our method to automatic sleep staging on time series of EEG-derived covariance matrices from a standard dataset, obtaining high levels of stage-wise performance.
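The abstract leaves the architecture to the paper itself. Purely as orientation, below is a minimal sketch of one standard geometry-respecting preprocessing step for sequences of SPD matrices: per-epoch EEG covariance estimation (with shrinkage to guarantee positive definiteness), followed by a log-Euclidean map to a flat tangent space where each matrix can be vectorized into a Transformer token. This is a generic illustration, not the authors' SPDTransNet pipeline; the function names (epoch_covariance, log_euclidean_tokens) and the shrinkage value are illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's SPDTransNet implementation):
# turning multichannel EEG epochs into SPD covariance matrices and
# mapping them to tangent-space tokens via the matrix logarithm --
# one common way to respect the Riemannian geometry of SPD matrices.
import numpy as np

def epoch_covariance(epoch: np.ndarray, shrinkage: float = 1e-3) -> np.ndarray:
    """Covariance of one EEG epoch (shape: n_channels x n_samples),
    shrunk toward a scaled identity so the result is strictly SPD."""
    cov = np.cov(epoch)
    n = cov.shape[0]
    return (1.0 - shrinkage) * cov + shrinkage * (np.trace(cov) / n) * np.eye(n)

def spd_log(S: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD matrix via eigendecomposition:
    sends the SPD manifold to the flat space of symmetric matrices."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T  # V @ diag(log w) @ V^T

def log_euclidean_tokens(epochs: np.ndarray) -> np.ndarray:
    """Map a sequence of epochs (seq_len, n_channels, n_samples) to
    vectorized tangent-space tokens of shape (seq_len, n*(n+1)/2)."""
    tokens = []
    for epoch in epochs:
        L = spd_log(epoch_covariance(epoch))
        iu = np.triu_indices_from(L)
        tokens.append(L[iu])  # upper triangle encodes the symmetric matrix
    return np.stack(tokens)

# Toy usage: a sequence of 30 epochs of 8-channel EEG, 256 samples each.
rng = np.random.default_rng(0)
seq = rng.standard_normal((30, 8, 256))
print(log_euclidean_tokens(seq).shape)  # (30, 36)
```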


Results from the Paper


Ranked #1 on Sleep Stage Detection on MASS SS3 (Macro-averaged Accuracy metric).

Task                   Dataset   Model        Metric Name              Metric Value  Global Rank
Sleep Stage Detection  MASS SS3  SPDTransNet  Macro-F1                 0.8124        #3
Sleep Stage Detection  MASS SS3  SPDTransNet  Macro-averaged Accuracy  84.40%        #1
