no code implementations • 17 Jan 2024 • Pieter De Clercq, Corentin Puffay, Jill Kries, Hugo Van hamme, Maaike Vandermosten, Tom Francart, Jonas Vanthornhout
We modeled electroencephalography (EEG) responses to acoustic, segmentation, and linguistic speech representations of a story using convolutional neural networks trained on a large sample of healthy participants, serving as a model for intact neural tracking of speech.
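The convolutional approach can be sketched in miniature: below, a single linear convolution kernel is fitted by gradient descent to predict a synthetic EEG channel from a synthetic speech envelope. This is only a toy illustration of the idea, not the paper's model — the actual networks are multi-layer nonlinear CNNs trained on EEG from many participants, and every name and parameter here is made up for the example.

```python
import numpy as np

# Toy sketch: one linear convolution kernel trained by gradient descent to
# predict EEG from a speech envelope. All data are synthetic; real models
# stack nonlinear convolutional layers and train across many subjects.
rng = np.random.default_rng(1)
fs = 64                       # assumed sampling rate (Hz)
n = fs * 60                   # one minute of synthetic data
envelope = rng.standard_normal(n)

# Ground-truth response: EEG reacts to the envelope over a few time lags.
true_kernel = np.array([0.0, 0.5, 1.0, 0.5, -0.3, -0.1])
eeg = np.convolve(envelope, true_kernel)[:n] + 0.1 * rng.standard_normal(n)

# Convolution written as a design matrix of time-lagged stimulus copies.
K = len(true_kernel)
X = np.zeros((n, K))
for lag in range(K):
    X[lag:, lag] = envelope[:n - lag]

# Fit the kernel by gradient descent on the mean-squared error.
kernel = np.zeros(K)
lr = 0.3
for _ in range(200):
    grad = 2.0 / n * X.T @ (X @ kernel - eeg)  # MSE gradient w.r.t. kernel
    kernel -= lr * grad

# Prediction accuracy: correlation between predicted and "recorded" EEG.
r = np.corrcoef(X @ kernel, eeg)[0, 1]
```

The prediction correlation `r` is the kind of quantity such a model yields per subject: high values indicate intact neural tracking of the stimulus, which is what makes a model trained on healthy listeners usable as a reference.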
no code implementations • 3 Feb 2023 • Corentin Puffay, Bernd Accou, Lies Bollens, Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
Linear models are presently used to relate the EEG recording to the corresponding speech signal.
no code implementations • 5 Jul 2022 • Corentin Puffay, Jana Van Canneyt, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
To investigate how speech is processed in the brain, we can model the relation between features of a natural speech signal and the corresponding recorded electroencephalogram (EEG).
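One common way to model this relation is a linear backward model (a "decoder") that reconstructs the speech feature from time-lagged multichannel EEG. The sketch below uses synthetic data and illustrative parameters and is not the paper's exact method; it only shows the general decoding setup.

```python
import numpy as np

# Backward model: reconstruct the speech envelope from multichannel EEG
# with a linear least-squares decoder. All data here are synthetic.
rng = np.random.default_rng(2)
fs = 64                                 # assumed sampling rate (Hz)
n, n_ch, n_lags = fs * 60, 8, 4

envelope = rng.standard_normal(n)

# Synthetic EEG: each channel carries a delayed, attenuated copy of the
# envelope buried in channel noise.
eeg = rng.standard_normal((n, n_ch))
for c in range(n_ch):
    d = c % n_lags
    eeg[d:, c] += 0.5 * envelope[:n - d]

# Because neural responses lag the stimulus, the decoder looks at EEG
# samples at and after each stimulus sample ("future" lags of the EEG).
def future_lags(x, n_lags):
    X = np.zeros((len(x), n_lags))
    for lag in range(n_lags):
        X[:len(x) - lag, lag] = x[lag:]
    return X

# Design matrix: lagged copies of every channel, side by side.
D = np.hstack([future_lags(eeg[:, c], n_lags) for c in range(n_ch)])
w, *_ = np.linalg.lstsq(D, envelope, rcond=None)

# Reconstruction accuracy: correlation with the true envelope.
recon = D @ w
r = np.corrcoef(recon, envelope)[0, 1]
```

The reconstruction correlation `r` is the usual summary statistic in this paradigm; pooling many noisy channels is what lets the decoder recover the envelope well above what any single channel supports.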