Search Results for author: Juliette Millet

Found 7 papers, 2 papers with code

Toward a realistic model of speech processing in the brain with self-supervised learning

no code implementations · 3 Jun 2022 · Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, Jean-Remi King

These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.
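Results of this kind are typically obtained with a linear encoding analysis: model activations are mapped to brain recordings, and the quality of the mapping on held-out data serves as a "brain score". The sketch below is a minimal illustration of that idea, not the paper's actual pipeline; the data shapes, regularization grid and train/test split are assumptions.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

def brain_score(model_activations, brain_responses, alphas=np.logspace(-3, 5, 9)):
    # model_activations: (n_samples, n_features), e.g. hidden states of a
    # self-supervised speech model; brain_responses: (n_samples, n_voxels).
    X_train, X_test, Y_train, Y_test = train_test_split(
        model_activations, brain_responses, test_size=0.2, random_state=0)
    # One ridge regression per voxel, regularization chosen by cross-validation.
    encoder = RidgeCV(alphas=alphas).fit(X_train, Y_train)
    Y_pred = encoder.predict(X_test)
    # Brain score: mean Pearson correlation between predicted and observed responses.
    rs = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(Y_test.shape[1])]
    return float(np.mean(rs))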

Language Acquisition · Self-Supervised Learning

Do self-supervised speech models develop human-like perception biases?

no code implementations · ACL 2022 · Juliette Millet, Ewan Dunbar

We show that the CPC model exhibits a small native-language effect, but that wav2vec 2.0 and HuBERT seem to develop a universal speech perception space which is not language-specific.
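Perception biases of this kind are usually probed with ABX phone discrimination: a contrast counts as discriminated when X lies closer to the same-category item A than to the other-category item B in the model's representation space, and a native-language effect appears as better discrimination of native than of non-native contrasts. The sketch below is a simplified illustration under that assumption; published evaluations typically use DTW over frame-level features rather than pooled vectors.

import numpy as np

def cosine_distance(u, v):
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def abx_accuracy(triplets):
    # triplets: list of (A, B, X) pooled model representations, where X belongs
    # to the same phone category as A.
    correct = [cosine_distance(X, A) < cosine_distance(X, B) for A, B, X in triplets]
    return float(np.mean(correct))

# A native-language effect would show up as
# abx_accuracy(native_triplets) > abx_accuracy(nonnative_triplets).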

Inductive biases, pretraining and fine-tuning jointly account for brain responses to speech

no code implementations · 25 Feb 2021 · Juliette Millet, Jean-Remi King

Third, learning to process phonetically-related speech inputs (i.e., Dutch vs English) leads deep nets to reach higher levels of brain-similarity than learning to process phonetically-distant speech inputs (i.e., Dutch vs Bengali).

Scene Classification

Perceptimatic: A human speech perception benchmark for unsupervised subword modelling

1 code implementation · 12 Oct 2020 · Juliette Millet, Ewan Dunbar

In this paper, we present a data set and methods to compare speech processing models and human behaviour on a phone discrimination task.
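One common way to run such a comparison is to derive, for each discrimination triplet, a "delta" from the model (how much closer the target X is to the matching item than to the mismatching one) and relate it to listeners' accuracy on the same triplet. The sketch below illustrates that idea; the distance function and the use of a rank correlation are assumptions for illustration, not a description of the released code.

import numpy as np
from scipy.stats import spearmanr

def model_deltas(triplets, distance):
    # triplets: list of (A, B, X) model representations, with X matching A.
    # distance: any dissimilarity over representations (e.g. DTW over frames).
    return np.array([distance(X, B) - distance(X, A) for A, B, X in triplets])

def human_model_agreement(deltas, human_accuracy):
    # human_accuracy: fraction of listeners answering each triplet correctly.
    rho, _ = spearmanr(deltas, human_accuracy)
    return rho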

The Perceptimatic English Benchmark for Speech Perception Models

no code implementations · 7 May 2020 · Juliette Millet, Ewan Dunbar

We show that DeepSpeech, a standard English speech recognizer, is more specialized for English phoneme discrimination than English listeners are, and correlates poorly with their behaviour, even though it yields a low error rate on the decision task given to humans.

Automatic Speech Recognition (ASR) +1

Learning to detect dysarthria from raw speech

3 code implementations · 27 Nov 2018 · Juliette Millet, Neil Zeghidour

We extend this approach to paralinguistic classification and propose a neural network that can learn a filterbank, a normalization factor and a compression power from the raw speech, jointly with the rest of the architecture.
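A front-end of this kind can be sketched as a convolutional filterbank applied to the raw waveform, followed by a learnable per-channel normalization factor and a learnable compression power, all trained jointly with the downstream classifier. The PyTorch code below is an assumed illustration in that spirit, not the authors' released implementation; filter length, hop size and the parameterization of the compression power are arbitrary choices.

import torch
import torch.nn as nn

class LearnableFrontEnd(nn.Module):
    def __init__(self, n_filters=40, filter_len=400, hop=160):
        super().__init__()
        # Filterbank: a 1D convolution applied directly to the raw waveform.
        self.filters = nn.Conv1d(1, n_filters, kernel_size=filter_len,
                                 stride=hop, bias=False)
        # Learnable per-channel normalization factor (gain).
        self.gain = nn.Parameter(torch.ones(n_filters))
        # Learnable compression power (softplus keeps it positive).
        self.raw_power = nn.Parameter(torch.zeros(1))

    def forward(self, waveform):                     # waveform: (batch, 1, samples)
        energy = self.filters(waveform) ** 2         # (batch, n_filters, frames)
        energy = energy * self.gain.view(1, -1, 1)   # per-channel normalization
        power = torch.nn.functional.softplus(self.raw_power)
        return (energy + 1e-6) ** power              # learnable compression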

General Classification · Sentence +2
