no code implementations • 25 Jan 2024 • Giuseppe Alessio D'Inverno, Jonathan Dong
Reservoir Computing (RC) has become popular in recent years due to its fast and efficient computational capabilities.
no code implementations • 19 Jul 2023 • Yan Liu, Jonathan Dong, Thanh-an Pham, Francois Marelli, Michael Unser
Then, we introduce a calibration algorithm that recovers the unknown system parameters, which are fed into the final 3D iterative reconstruction algorithm to produce a distortion-free volumetric image.
1 code implementation • 7 Jun 2022 • Jonathan Dong, Erik Börve, Mushegh Rafayelyan, Michael Unser
Reservoir Computing is a class of Recurrent Neural Networks with internal weights fixed at random.
no code implementations • 18 Mar 2022 • Pakshal Bohra, Thanh-an Pham, Jonathan Dong, Michael Unser
In this work, we present a Bayesian reconstruction framework for nonlinear imaging models where we specify the prior knowledge on the image through a deep generative model.
1 code implementation • NeurIPS 2020 • Jonathan Dong, Ruben Ohana, Mushegh Rafayelyan, Florent Krzakala
Reservoir Computing is a class of simple yet efficient Recurrent Neural Networks where internal weights are fixed at random and only a linear output layer is trained.
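The architecture described here — a recurrent network whose internal weights are fixed at random, with only a linear output layer trained — can be sketched in a few lines. This is a generic echo-state-network toy example (random input and recurrent matrices, spectral-radius scaling, ridge-regression readout on a one-step-ahead sine prediction task), not the implementation released with the paper; all sizes and hyperparameters below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1

# Internal weights are drawn once at random and never trained
W_in = rng.normal(scale=0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave
t = np.linspace(0, 20, 500)
signal = np.sin(t)[:, None]
states = run_reservoir(signal[:-1])
targets = signal[1:, 0]

# Only the linear readout is trained, here by ridge regression
reg = 1e-6
W_out = np.linalg.solve(states.T @ states + reg * np.eye(n_res), states.T @ targets)
pred = states @ W_out
```

Because training reduces to a single linear solve, this is what makes reservoir computing "fast and efficient" relative to backpropagation through time.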
1 code implementation • 22 Oct 2019 • Ruben Ohana, Jonas Wacker, Jonathan Dong, Sébastien Marmin, Florent Krzakala, Maurizio Filippone, Laurent Daudet
Approximating kernel functions with random features (RFs) has been a successful application of random projections for nonparametric estimation.
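The random-features idea can be illustrated with the standard random Fourier features construction for the RBF kernel (Rahimi-Recht style): a random projection followed by a cosine nonlinearity yields features whose inner products approximate the kernel. This is a generic sketch, not the optical or hardware-based projection studied in the paper; `n_features` and `gamma` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=2000, gamma=1.0, rng=rng):
    """Random Fourier features approximating the RBF kernel exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies sampled from the kernel's spectral density N(0, 2*gamma*I)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T  # inner products of random features
K_exact = np.exp(-1.0 * ((X[:, None] - X[None]) ** 2).sum(-1))
```

The approximation error shrinks as O(1/sqrt(n_features)), which is why replacing the exact kernel matrix with a modest number of random projections is attractive at scale.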
1 code implementation • 30 Oct 2018 • Jonathan Dong, Florent Krzakala, Sylvain Gigan
We introduce a generalized version of phase retrieval called multiplexed phase retrieval.
no code implementations • 15 Sep 2016 • Jonathan Dong, Sylvain Gigan, Florent Krzakala, Gilles Wainrib
As a proof of concept, binary networks have been successfully trained to predict the chaotic Mackey-Glass time series.
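The Mackey-Glass benchmark mentioned here is generated from the delay differential equation dx/dt = beta * x(t - tau) / (1 + x(t - tau)^n) - gamma * x(t), chaotic for the standard parameters tau=17, beta=0.2, gamma=0.1, n=10. The sketch below integrates it with a simple Euler scheme (step size dt=1, as is common in reservoir-computing demos); it reproduces the benchmark series, not the binary-network training from the paper.

```python
import numpy as np

def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0, x0=1.2):
    """Euler integration of dx/dt = beta*x(t-tau)/(1+x(t-tau)**n) - gamma*x(t)."""
    history = int(tau / dt)              # number of delayed samples to keep
    x = np.full(n_steps + history, x0)   # constant initial history
    for t in range(history, n_steps + history - 1):
        x_tau = x[t - history]           # delayed value x(t - tau)
        x[t + 1] = x[t] + dt * (beta * x_tau / (1 + x_tau ** n) - gamma * x[t])
    return x[history:]

series = mackey_glass(2000)
```

The resulting bounded, aperiodic series is the usual target for one-step-ahead prediction benchmarks.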
no code implementations • 10 Nov 2015 • Li-Hao Yeh, Jonathan Dong, Jingshan Zhong, Lei Tian, Michael Chen, Gongguo Tang, Mahdi Soltanolkotabi, Laura Waller
Both noise (e.g. Poisson noise) and model-mismatch errors are shown to scale with intensity.