Search Results for author: Thomas Moreau

Found 34 papers, 18 papers with code

The largest EEG-based BCI reproducibility study for open science: the MOABB benchmark

1 code implementation Journal of Neural Engineering 2024 Sylvain Chevallier, Igor Carrara, Bruno Aristimunha, Pierre Guetschel, Sara Sedlar, Bruna Lopes, Sebastien Velut, Salim Khazem, Thomas Moreau

The significance of this study lies in its contribution to establishing a rigorous and transparent benchmark for BCI research, offering insights into optimal methodologies and highlighting the importance of reproducibility in driving advancements within the field.

EEG Motor Imagery +5

S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention

no code implementations 18 Mar 2024 Pierre Guetschel, Thomas Moreau, Michael Tangermann

Motivated by the challenge of seamless cross-dataset transfer in EEG signal processing, this article presents an exploratory study on the use of Joint Embedding Predictive Architectures (JEPAs).

Brain Decoding EEG +5

Equivariant plug-and-play image reconstruction

no code implementations 4 Dec 2023 Matthieu Terris, Thomas Moreau, Nelly Pustelnik, Julian Tachella

Plug-and-play algorithms constitute a popular framework for solving inverse imaging problems that rely on the implicit definition of an image prior via a denoiser.

Denoising Image Reconstruction
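
In this framework, the proximal operator of the regularizer is simply replaced by an off-the-shelf denoiser inside a classical iterative scheme. Below is a minimal sketch of the generic plug-and-play proximal gradient template (not the equivariant variant proposed in the paper), assuming hypothetical `forward`, `adjoint` and `denoiser` callables:

```python
def pnp_pgd(y, forward, adjoint, denoiser, step=1.0, n_iter=50):
    """Plug-and-play proximal gradient for 0.5 * ||forward(x) - y||^2,
    where the prior's proximal operator is replaced by a denoiser."""
    x = adjoint(y)  # crude initialization from the measurements
    for _ in range(n_iter):
        grad = adjoint(forward(x) - y)  # gradient of the data-fidelity term
        x = denoiser(x - step * grad)   # denoising step acts as implicit prior
    return x
```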

Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers

no code implementations 30 Nov 2023 Matthieu Terris, Thomas Moreau

They are typically trained for a specific task, with a supervised loss to learn a mapping from the observations to the image to recover.

Meta-Learning

PAVI: Plate-Amortized Variational Inference

no code implementations 30 Aug 2023 Louis Rouillard, Alexandre Le Bris, Thomas Moreau, Demian Wassermann

Given observed data and a probabilistic generative model, Bayesian inference searches for the distribution of the model's parameters that could have yielded the data.

Bayesian Inference Variational Inference
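
In symbols, the target is the posterior, and variational inference approximates it within a tractable family $q_\phi$ by maximizing the evidence lower bound (standard definitions, not specific to PAVI):

$$ p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}, \qquad \mathrm{ELBO}(\phi) = \mathbb{E}_{q_\phi(\theta)}\left[ \log p(x \mid \theta) \right] - \mathrm{KL}\left( q_\phi(\theta) \,\|\, p(\theta) \right). $$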

Test like you Train in Implicit Deep Learning

no code implementations 24 May 2023 Zaccharie Ramzi, Pierre Ablin, Gabriel Peyré, Thomas Moreau

Implicit deep learning has recently gained popularity with applications ranging from meta-learning to Deep Equilibrium Networks (DEQs).

Meta-Learning

Sliced-Wasserstein on Symmetric Positive Definite Matrices for M/EEG Signals

2 code implementations 10 Mar 2023 Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty

When dealing with electro- or magnetoencephalography records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals.

Brain Computer Interface Computational Efficiency +4
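
As background, the covariance-based pipeline the abstract refers to summarizes each multichannel recording by a symmetric positive definite (SPD) matrix. A minimal sketch with illustrative names and shapes (the paper's contribution is a sliced-Wasserstein distance between such matrices, not this feature extraction itself):

```python
import numpy as np

def epoch_covariances(epochs):
    """Map each epoch of shape (n_channels, n_times) to its spatial
    covariance matrix, an SPD feature of shape (n_channels, n_channels)."""
    feats = []
    for X in epochs:
        X = X - X.mean(axis=1, keepdims=True)  # center each channel
        feats.append(X @ X.T / X.shape[1])     # empirical covariance
    return np.stack(feats)
```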

A Lower Bound and a Near-Optimal Algorithm for Bilevel Empirical Risk Minimization

no code implementations 17 Feb 2023 Mathieu Dagréou, Thomas Moreau, Samuel Vaiter, Pierre Ablin

Bilevel optimization problems, in which two optimization problems are nested, arise in a growing number of machine learning applications.

Bilevel Optimization
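
For reference, a generic bilevel problem nests an inner minimization inside an outer one (standard notation):

$$ \min_{x} \; F\big(x, y^\star(x)\big) \quad \text{s.t.} \quad y^\star(x) \in \operatorname*{arg\,min}_{y} \; G(x, y). $$

In bilevel empirical risk minimization, both $F$ and $G$ are finite sums over data samples.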

Data augmentation for learning predictive models on EEG: a systematic comparison

1 code implementation 29 Jun 2022 Cédric Rommel, Joseph Paillard, Thomas Moreau, Alexandre Gramfort

Our experiments also show that there is no single best augmentation strategy, as the best-performing augmentations differ from task to task.

Data Augmentation EEG +1

PAVI: Plate-Amortized Variational Inference

no code implementations 10 Jun 2022 Louis Rouillard, Thomas Moreau, Demian Wassermann

Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining the distribution of a model's latent parameters that could have yielded the data.

Bayesian Inference Variational Inference

Deep invariant networks with differentiable augmentation layers

1 code implementation 4 Feb 2022 Cédric Rommel, Thomas Moreau, Alexandre Gramfort

Practitioners can typically enforce a desired invariance on the trained model through the choice of a network architecture, e.g. using convolutions for translations, or using data augmentation.

Bilevel Optimization Data Augmentation

A framework for bilevel optimization that enables stochastic and global variance reduction algorithms

1 code implementation 31 Jan 2022 Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau

However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates.

Bilevel Optimization
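
Concretely, with the value function $h(x) = F(x, y^\star(x))$ where $y^\star(x) = \operatorname*{arg\,min}_y G(x, y)$, the implicit function theorem gives the standard identity

$$ \nabla h(x) = \nabla_x F(x, y^\star) - \nabla^2_{xy} G(x, y^\star)\, v, \qquad \nabla^2_{yy} G(x, y^\star)\, v = \nabla_y F(x, y^\star), $$

and it is the linear system defining $v$ that makes unbiased stochastic estimation difficult when $G$ is a finite sum.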

CADDA: Class-wise Automatic Differentiable Data Augmentation for EEG Signals

no code implementations ICLR 2022 Cédric Rommel, Thomas Moreau, Joseph Paillard, Alexandre Gramfort

Data augmentation is a key element of deep learning pipelines, as it informs the network during training about transformations of the input data that keep the label unchanged.

Data Augmentation EEG

Understanding approximate and unrolled dictionary learning for pattern recovery

1 code implementation ICLR 2022 Benoît Malézieux, Thomas Moreau, Matthieu Kowalski

Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals.

Dictionary Learning Rolling Shutter Correction

SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models

2 code implementations ICLR 2022 Zaccharie Ramzi, Florian Mannel, Shaojie Bai, Jean-Luc Starck, Philippe Ciuciu, Thomas Moreau

In Deep Equilibrium Models (DEQs), the training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.

Hyperparameter Optimization
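
The Jacobian in question arises from implicit differentiation of the equilibrium condition $z^\star = f_\theta(z^\star, x)$ (the generic DEQ identity, not SHINE's shared inverse estimate):

$$ \frac{\partial z^\star}{\partial \theta} = \left( I - \partial_z f_\theta(z^\star, x) \right)^{-1} \partial_\theta f_\theta(z^\star, x), $$

so every backward pass requires solving a linear system with $I - \partial_z f_\theta$, which SHINE sidesteps by reusing the inverse estimate built during the forward pass.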

HNPE: Leveraging Global Parameters for Neural Posterior Estimation

1 code implementation NeurIPS 2021 Pedro L. C. Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort

Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method.

EEG

NeuMiss networks: differentiable programming for supervised learning with missing values

no code implementations NeurIPS 2020 Marine Le Morvan, Julie Josse, Thomas Moreau, Erwan Scornet, Gaël Varoquaux

We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns.

Imputation

Learning to solve TV regularised problems with unrolled algorithms

1 code implementation NeurIPS 2020 Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau

In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.
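
For reference, the 1D TV-regularized problem targeted by these unrolled solvers reads, in its simplest prox-TV form (standard formulation, notation chosen here for illustration):

$$ \min_{u \in \mathbb{R}^n} \; \tfrac{1}{2} \| x - u \|_2^2 + \lambda \sum_{i=1}^{n-1} | u_{i+1} - u_i |, $$

where the regularizer promotes piecewise-constant solutions.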

Extraction of Nystagmus Patterns from Eye-Tracker Data with Convolutional Sparse Coding

1 code implementation 25 Nov 2020 Clément Lalanne, Maxence Rateaux, Laurent Oudre, Matthieu Robert, Thomas Moreau

The analysis of nystagmus waveforms from eye-tracking records is crucial for the clinical interpretation of this pathological movement.

Dictionary Learning

Learning to solve TV regularized problems with unrolled algorithms

no code implementations 19 Oct 2020 Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau

In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.

NeuMiss networks: differentiable programming for supervised learning with missing values

no code implementations 3 Jul 2020 Marine Le Morvan, Julie Josse, Thomas Moreau, Erwan Scornet, Gaël Varoquaux

We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns.

Imputation

Super-efficiency of automatic differentiation for functions defined as a minimum

no code implementations ICML 2020 Pierre Ablin, Gabriel Peyré, Thomas Moreau

In most cases, the minimum has no closed-form, and an approximation is obtained via an iterative algorithm.
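
The setting is a function defined as a minimum, $\ell(x) = \min_z f(x, z)$, for which the envelope theorem gives the gradient at the exact minimizer $z^\star(x)$:

$$ \nabla \ell(x) = \nabla_x f\big(x, z^\star(x)\big). $$

The question studied is how fast gradient estimates converge when $z^\star(x)$ is only approximated by the iterates of an algorithm.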

Learning step sizes for unfolded sparse coding

1 code implementation NeurIPS 2019 Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort

We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes.
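
For context, here is a minimal NumPy sketch of ISTA for the Lasso with one step size per iteration; the `steps` array is a hypothetical stand-in for the learned step sizes, and with all entries equal to $1/L$, $L$ the largest eigenvalue of $D^\top D$, this reduces to plain ISTA:

```python
import numpy as np

def soft_threshold(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def ista(D, x, lam, steps):
    """ISTA iterations for 0.5 * ||x - D @ z||^2 + lam * ||z||_1,
    with a (possibly learned) step size per layer/iteration."""
    z = np.zeros(D.shape[1])
    for step in steps:
        grad = D.T @ (D @ z - x)  # gradient of the quadratic data fit
        z = soft_threshold(z - step * grad, step * lam)  # proximal step
    return z
```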

Distributed Convolutional Dictionary Learning (DiCoDiLe): Pattern Discovery in Large Images and Signals

1 code implementation 26 Jan 2019 Thomas Moreau, Alexandre Gramfort

This algorithm can be used to distribute the computation across a number of workers that scales linearly with the size of the encoded signal.

Dictionary Learning Image Denoising

DICOD: Distributed Convolutional Coordinate Descent for Convolutional Sparse Coding

1 code implementation ICML 2018 Thomas Moreau, Laurent Oudre, Nicolas Vayatis

In this paper, we introduce DICOD, a convolutional sparse coding algorithm which builds shift invariant representations for long signals.
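
For reference, the convolutional sparse coding problem solved by DICOD can be written as (standard formulation):

$$ \min_{\{z_k\}} \; \tfrac{1}{2} \Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2 + \lambda \sum_{k=1}^{K} \| z_k \|_1, $$

where the $d_k$ are short atoms, the $z_k$ are sparse activation signals, and the convolution $*$ is what makes the representation shift-invariant.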

Multivariate Convolutional Sparse Coding for Electromagnetic Brain Signals

1 code implementation NeurIPS 2018 Tom Dupré La Tour, Thomas Moreau, Mainak Jas, Alexandre Gramfort

Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high level visual processing or motor control.

EEG Time Series +1

Post-training for Deep Learning

no code implementations ICLR 2018 Thomas Moreau, Julien Audiffren

One of the main challenges of deep learning methods is the choice of an appropriate training strategy.

Unsupervised Pre-training

Understanding the Learned Iterative Soft Thresholding Algorithm with matrix factorization

1 code implementation 2 Jun 2017 Thomas Moreau, Joan Bruna

Sparse coding is a core building block in many data analysis and machine learning pipelines.

DICOD: Distributed Convolutional Sparse Coding

no code implementations 29 May 2017 Thomas Moreau, Laurent Oudre, Nicolas Vayatis

In this paper, we introduce DICOD, a convolutional sparse coding algorithm which builds shift invariant representations for long signals.

Post Training in Deep Learning with Last Kernel

1 code implementation 14 Nov 2016 Thomas Moreau, Julien Audiffren

One of the main challenges of deep learning methods is the choice of an appropriate training strategy.

Unsupervised Pre-training

Understanding Trainable Sparse Coding via Matrix Factorization

1 code implementation 1 Sep 2016 Thomas Moreau, Joan Bruna

Sparse coding is a core building block in many data analysis and machine learning pipelines.
