1 code implementation • 31 Jan 2024 • Simon Geirnaert, Yuanyuan Yao, Tom Francart, Alexander Bertrand
In this context, generalized canonical correlation analysis (GCCA) is often used as a group analysis technique, which allows the extraction of correlated signal components from the neural activity of multiple subjects attending to the same stimulus.
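The group-analysis idea behind GCCA can be sketched compactly: find one linear filter per subject such that the filtered outputs are maximally correlated across subjects. Below is a minimal illustration under assumed conventions (a generalized-eigenvalue SUMCOR/MAXVAR-style formulation, per-subject time-by-channel matrices, synthetic data); it is not the implementation released with the paper:

```python
import numpy as np

def gcca(datasets, n_components=1, reg=1e-6):
    """Find per-subject spatial filters whose outputs are maximally
    correlated across subjects (generalized-eigenvalue formulation).

    datasets: list of (time x channels) arrays, one per subject.
    Returns a list of (channels x n_components) filter matrices.
    """
    X = np.hstack(datasets)                      # stack subjects side by side
    R = np.cov(X, rowvar=False)                  # full joint covariance
    D = np.zeros_like(R)                         # block-diag. per-subject cov.
    offsets = np.cumsum([0] + [d.shape[1] for d in datasets])
    for k, d in enumerate(datasets):
        s = slice(offsets[k], offsets[k + 1])
        D[s, s] = np.cov(d, rowvar=False)
    D += reg * np.eye(D.shape[0])                # regularize for invertibility
    # Solve R w = lambda D w via Cholesky whitening (numpy-only)
    L = np.linalg.cholesky(D)
    Li = np.linalg.inv(L)
    vals, U = np.linalg.eigh(Li @ R @ Li.T)      # ascending eigenvalues
    W = (Li.T @ U)[:, ::-1][:, :n_components]    # leading generalized eigvecs
    return [W[offsets[k]:offsets[k + 1]] for k in range(len(datasets))]
```

Applied to EEG from subjects attending the same stimulus, the leading component of each subject should carry the shared, stimulus-driven activity.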
no code implementations • 17 Jan 2024 • Pieter De Clercq, Corentin Puffay, Jill Kries, Hugo Van hamme, Maaike Vandermosten, Tom Francart, Jonas Vanthornhout
We modeled electroencephalography (EEG) responses to acoustic, segmentation, and linguistic speech representations of a story using convolutional neural networks trained on a large sample of healthy participants, serving as a model for intact neural tracking of speech.
no code implementations • 17 Oct 2023 • Nicolas Heintz, Tom Francart, Alexander Bertrand
Linear Discriminant Analysis (LDA) is one of the oldest and most popular linear methods for supervised classification problems.
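For reference, the classical binary LDA rule the abstract refers to fits a discriminant direction from the class means and the pooled within-class covariance. A minimal sketch (synthetic data, hypothetical regularization constant), not the method proposed in the paper itself:

```python
import numpy as np

def lda_fit(X, y, reg=1e-6):
    """Fisher's LDA for two classes: w = S^-1 (mu1 - mu0),
    with the decision threshold at the midpoint of the means."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance, lightly regularized
    S = np.cov(np.vstack([X0 - mu0, X1 - mu1]), rowvar=False)
    S += reg * np.eye(S.shape[0])
    w = np.linalg.solve(S, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2
    return w, b

def lda_predict(X, w, b):
    """Classify samples by the sign of the linear discriminant."""
    return (X @ w + b > 0).astype(int)
```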
no code implementations • 31 Jul 2023 • Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
Our results show that vowel-consonant onsets outperform onsets of any phone in both tasks, suggesting that the vowel-consonant distinction is tracked in the EEG to some degree.
no code implementations • 14 Mar 2023 • Pieter De Clercq, Jill Kries, Ramtin Mehraram, Jonas Vanthornhout, Tom Francart, Maaike Vandermosten
In this study, we aimed to test the potential of the neural envelope tracking technique for detecting language impairments in individuals with aphasia (IWA).
no code implementations • 3 Feb 2023 • Corentin Puffay, Bernd Accou, Lies Bollens, Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
Linear models are presently used to relate the EEG recording to the corresponding speech signal.
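A typical linear (backward) model of this kind reconstructs the speech envelope from time-lagged EEG via ridge regression and scores the reconstruction by Pearson correlation. The sketch below uses assumed, hypothetical settings (lag count, regularization strength, synthetic data) and is not the authors' code:

```python
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel: (T, C) -> (T, C*n_lags)."""
    T, C = eeg.shape
    out = np.zeros((T, C * n_lags))
    for l in range(n_lags):
        out[l:, l * C:(l + 1) * C] = eeg[:T - l]
    return out

def fit_decoder(eeg, envelope, n_lags=8, alpha=10.0):
    """Ridge-regression backward model: reconstruct the speech
    envelope from lagged EEG channels."""
    X = lag_matrix(eeg, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                           X.T @ envelope)

def decode_score(eeg, envelope, w, n_lags=8):
    """Pearson correlation between reconstruction and true envelope."""
    rec = lag_matrix(eeg, n_lags) @ w
    return np.corrcoef(rec, envelope)[0, 1]
```

In practice the lags would be chosen to span the post-stimulus EEG response, and the decoder would be evaluated on held-out data rather than the training segment used here.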
1 code implementation • 9 Jan 2023 • Bernd Accou, Hugo Van hamme, Tom Francart
We propose a novel paradigm for the self-supervised enhancement of stimulus-related brain response data.
1 code implementation • 24 Oct 2022 • Simon Geirnaert, Tom Francart, Alexander Bertrand
We show that the proposed stimulus-informed GCCA method is superior in terms of the inter-subject correlation between the electroencephalography responses of a group of subjects listening to the same speech stimulus, especially for small amounts of data or small groups of subjects.
no code implementations • 5 Jul 2022 • Corentin Puffay, Jana Van Canneyt, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
To investigate how speech is processed in the brain, we can model the relation between features of a natural speech signal and the corresponding recorded electroencephalogram (EEG).
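In the forward direction, this relation is commonly modeled with a temporal response function (TRF): a linear filter that predicts an EEG channel as a lagged filtering of a speech feature. A minimal sketch with assumed settings (lag count, ridge parameter, synthetic kernel), not the modeling pipeline of the paper:

```python
import numpy as np

def stim_lags(stimulus, n_lags):
    """Design matrix of lagged stimulus samples: (T,) -> (T, n_lags)."""
    T = len(stimulus)
    X = np.zeros((T, n_lags))
    for l in range(n_lags):
        X[l:, l] = stimulus[:T - l]
    return X

def fit_trf(stimulus, response, n_lags=16, alpha=1.0):
    """Forward model (TRF): ridge regression from lagged stimulus
    feature to one EEG channel; returns the estimated kernel."""
    X = stim_lags(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags),
                           X.T @ response)

def predict_trf(stimulus, w):
    """Predicted EEG channel for a given stimulus and fitted kernel."""
    return stim_lags(stimulus, len(w)) @ w
```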
no code implementations • 1 Jul 2022 • Lies Bollens, Tom Francart, Hugo Van hamme
The electroencephalogram (EEG) is a powerful method to understand how the brain processes speech.
2 code implementations • 17 Jun 2021 • Mohammad Jalilpour Monesi, Bernd Accou, Tom Francart, Hugo Van hamme
Decoding the speech signal that a person is listening to from the human brain via electroencephalography (EEG) can help us understand how our auditory system works.
no code implementations • 14 May 2021 • Bernd Accou, Mohammad Jalilpour Monesi, Hugo Van hamme, Tom Francart
The accuracy of the model's match/mismatch predictions can be used as a proxy for speech intelligibility without subject-specific (re)training.
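The match/mismatch accuracy referred to here can be illustrated with a simplified correlation-based decision rule (the paper uses a trained model's predictions; this helper and its inputs are hypothetical stand-ins): for each segment, decide that the candidate stimulus correlating more with the reconstruction is the matched one.

```python
import numpy as np

def match_mismatch_accuracy(reconstructions, matched, mismatched):
    """Fraction of segments where the reconstructed envelope correlates
    more with the matched stimulus segment than with the mismatched one."""
    correct = 0
    for rec, m, mm in zip(reconstructions, matched, mismatched):
        correct += np.corrcoef(rec, m)[0, 1] > np.corrcoef(rec, mm)[0, 1]
    return correct / len(reconstructions)
```

Higher accuracy indicates stronger neural tracking of the stimulus, which is why it can serve as an intelligibility proxy without subject-specific retraining.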
no code implementations • 11 Aug 2020 • Simon Geirnaert, Servaas Vandecappelle, Emina Alickovic, Alain de Cheveigné, Edmund Lalor, Bernd T. Meyer, Sina Miran, Tom Francart, Alexander Bertrand
People suffering from hearing impairment often have difficulties participating in conversations in so-called 'cocktail party' scenarios with multiple people talking simultaneously.
no code implementations • 18 Feb 2016 • Simon Van Eyndhoven, Tom Francart, Alexander Bertrand
OBJECTIVE: We aim to extract and denoise the attended speaker in a noisy, two-speaker acoustic scenario, relying on microphone array recordings from a binaural hearing aid, which are complemented with electroencephalography (EEG) recordings to infer the speaker of interest.