no code implementations • 14 Sep 2023 • Frederik Hoppe, Claudio Mayrink Verdun, Felix Krahmer, Hannah Laus, Holger Rauhut
Model-based deep learning solutions to inverse problems have attracted increasing attention in recent years as they bridge state-of-the-art numerical performance with interpretability.
no code implementations • 9 May 2023 • Hung-Hsu Chou, Holger Rauhut, Rachel Ward
By analyzing key invariants of the gradient flow and using the Łojasiewicz theorem, we show that weight normalization also has an implicit bias towards sparse solutions in the diagonal linear model, but that, in contrast to plain gradient flow, weight normalization enables a robust bias that persists even when the weights are initialized at large scale.
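As a rough illustration of the setting (not the paper's method), here is a minimal NumPy sketch of the diagonal linear model w = u * v trained by plain gradient descent, where the sparse implicit bias appears at small initialization scale; the paper's contribution is showing that weight normalization makes this bias robust to large initialization. All dimensions and step sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, s = 20, 40, 3
A = rng.standard_normal((n, d)) / np.sqrt(n)   # underdetermined measurements
w_star = np.zeros(d); w_star[:s] = 1.0          # sparse ground truth
y = A @ w_star

# diagonal linear model: w = u * v, plain gradient descent;
# the small initialization scale alpha drives the sparse implicit bias
alpha, lr = 1e-3, 0.05
u = alpha * np.ones(d)
v = alpha * np.ones(d)
for _ in range(50000):
    g = A.T @ (A @ (u * v) - y)    # gradient of 0.5*||A(u*v) - y||^2 w.r.t. w
    u, v = u - lr * g * v, v - lr * g * u
print(np.round(u * v, 3)[:6])      # first entries approximate the sparse w_star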
no code implementations • 21 Dec 2021 • Hung-Hsu Chou, Johannes Maly, Holger Rauhut
In deep learning it is common to overparameterize neural networks, that is, to use more parameters than training samples.
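A minimal sketch of what overparameterization means in the linear case: with more parameters than samples, infinitely many weight vectors interpolate the training data, and least squares returns the minimum-norm one. The dimensions are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 300                                   # many more parameters than samples
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# infinitely many w satisfy Xw = y; lstsq returns the minimum-norm interpolant
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("training residual:", np.linalg.norm(X @ w - y))   # ~0: exact interpolation
print("solution norm:", np.linalg.norm(w))
```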
no code implementations • 8 Dec 2021 • Ekkehard Schnoor, Arash Behboodi, Holger Rauhut
Motivated by the learned iterative soft thresholding algorithm (LISTA), we introduce a general class of neural networks suitable for sparse reconstruction from few linear measurements.
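For orientation, a hedged sketch of a LISTA-style forward pass: a fixed number of unfolded ISTA iterations whose matrices and threshold would normally be trained. Below they are merely initialized from ISTA (no training loop is shown, and all dimensions are assumptions).

```python
import numpy as np

def soft(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def lista_forward(y, W, S, theta, K=10):
    """K unfolded ISTA iterations; W, S, theta would be learned from data."""
    x = soft(W @ y, theta)
    for _ in range(K - 1):
        x = soft(W @ y + S @ x, theta)
    return x

rng = np.random.default_rng(0)
m, n = 20, 50
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = 1.0
y = A @ x_true

# ISTA-derived initialization of the would-be learnable weights
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fit gradient
W = A.T / L
S = np.eye(n) - (A.T @ A) / L
x_hat = lista_forward(y, W, S, theta=0.1 / L)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```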
no code implementations • 13 Oct 2021 • Vasiliki Kouni, Holger Rauhut
In this paper, we apply the star digital Gabor transform (star-DGT) to analysis Compressed Sensing and speech denoising.
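As a hedged illustration of analysis-domain denoising with a redundant Gabor transform, here is a toy NumPy sketch using a plain Gaussian window as a stand-in for the spark-deficient window constructed in the paper; signal length, lattice parameters, and threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 128
t = np.arange(L)
g = np.exp(-0.5 * ((t - L / 2) / (L / 8)) ** 2)      # Gaussian window (stand-in)
g /= np.linalg.norm(g)

a, b = 8, 8                                           # time/frequency step sizes
atoms = []
for n_shift in range(L // a):                         # time shifts of the window
    tg = np.roll(g, n_shift * a)
    for m_mod in range(L // b):                       # modulations
        atoms.append(np.exp(2j * np.pi * m_mod * b * t / L) * tg)
Phi = np.array(atoms)                                 # redundant analysis operator (256 x 128)

x = np.sin(2 * np.pi * 5 * t / L)                     # clean test signal
y = x + 0.1 * rng.standard_normal(L)                  # noisy observation

c = Phi @ y                                           # analysis (Gabor) coefficients
c[np.abs(c) < 0.3] = 0.0                              # threshold small coefficients
x_hat = np.real(np.linalg.pinv(Phi) @ c)              # synthesize back
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```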
1 code implementation • 13 Oct 2021 • Vasiliki Kouni, Georgios Paraskevopoulos, Holger Rauhut, George C. Alexandropoulos
In this paper, we propose a new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing.
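A minimal sketch of the underlying iteration: ADMM applied to the analysis problem 0.5*||Ax - y||^2 + lam*||Phi x||_1, truncated to K steps as in unfolding. In the paper's network the learned quantity is (roughly) the analysis operator; the sketch below keeps everything fixed and untrained, and all dimensions and parameters are assumptions.

```python
import numpy as np

def soft(z, tau):
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def admm_unfolded(y, A, Phi, lam=0.1, rho=1.0, K=15):
    """K truncated ADMM iterations for 0.5*||Ax - y||^2 + lam*||Phi x||_1."""
    x = np.zeros(A.shape[1])
    z = np.zeros(Phi.shape[0])
    u = np.zeros_like(z)
    M = np.linalg.inv(A.T @ A + rho * Phi.T @ Phi)  # precomputable x-update matrix
    for _ in range(K):
        x = M @ (A.T @ y + rho * Phi.T @ (z - u))   # x-update (least squares)
        z = soft(Phi @ x + u, lam / rho)            # z-update (soft thresholding)
        u = u + Phi @ x - z                         # dual update
    return x

rng = np.random.default_rng(0)
m, n = 25, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
Phi = rng.standard_normal((2 * n, n)) / np.sqrt(n)  # redundant analysis operator
y = rng.standard_normal(m)
x_hat = admm_unfolded(y, A, Phi)
print("data misfit:", np.linalg.norm(A @ x_hat - y))
```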
no code implementations • 6 Aug 2021 • Wiebke Bartolomaeus, Youness Boutaib, Sandra Nestler, Holger Rauhut
We investigate the functioning of a classifying biological neural network from the perspective of statistical learning theory, modelled, in a simplified setting, as a continuous-time stochastic recurrent neural network (RNN) with the identity activation function.
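For concreteness, a hedged Euler-Maruyama simulation of a linear (identity-activation) continuous-time stochastic RNN, dx = Wx dt + sigma dB; the paper's model additionally includes inputs and a classification readout, which are omitted here, and the connectivity below is an arbitrary stable assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt, sigma = 50, 2000, 0.01, 0.1
W = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)  # stable connectivity

# Euler-Maruyama scheme for dx = W x dt + sigma dB (identity activation)
x = np.zeros(n)
traj = np.empty((steps, n))
for k in range(steps):
    x = x + dt * (W @ x) + sigma * np.sqrt(dt) * rng.standard_normal(n)
    traj[k] = x
print("late-time std per unit:", traj[steps // 2:].std())
```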
no code implementations • 4 Aug 2021 • Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege
In the case of three or more layers, we show that gradient descent converges to a global minimum on the manifold of matrices of some fixed rank, where the rank cannot be determined a priori.
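A toy numerical check of the flavor of this statement (not the paper's proof setting): gradient descent on a three-layer linear network fitting a low-rank target map; in this run the end-to-end matrix ends up with the target's rank. All sizes and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 10, 200
X = rng.standard_normal((d, N))
M = rng.standard_normal((d, 2)) @ rng.standard_normal((2, d)) / d   # rank-2 target
Y = M @ X

# three-layer linear network, near-identity initialization
W1, W2, W3 = (np.eye(d) + 0.01 * rng.standard_normal((d, d)) for _ in range(3))
lr = 0.05
for _ in range(5000):
    R = (W3 @ W2 @ W1) @ X - Y                      # residual on the data
    G1 = (W3 @ W2).T @ R @ X.T / N                  # gradients via the chain rule
    G2 = W3.T @ R @ (W1 @ X).T / N
    G3 = R @ (W2 @ W1 @ X).T / N
    W1, W2, W3 = W1 - lr * G1, W2 - lr * G2, W3 - lr * G3

P = W3 @ W2 @ W1
print("numerical rank of the learned map:", np.linalg.matrix_rank(P, tol=1e-3))
```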
no code implementations • NeurIPS 2020 • Sandra Nestler, Christian Keup, David Dahmen, Matthieu Gilson, Holger Rauhut, Moritz Helias
Cortical networks are strongly recurrent, and neurons have intrinsic temporal dynamics.
no code implementations • 27 Nov 2020 • Hung-Hsu Chou, Carsten Gieshoff, Johannes Maly, Holger Rauhut
This suggests that deep learning prefers trajectories whose complexity (measured in terms of effective rank) is monotonically increasing, which we believe is a fundamental concept for the theoretical understanding of deep learning.
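A sketch of how one might track the entropy-based effective rank along a training trajectory, here for a simple two-layer matrix factorization from small initialization; whether it grows monotonically in a given run depends on the setting, and all dimensions below are assumptions.

```python
import numpy as np

def effective_rank(W, eps=1e-12):
    """Entropy-based effective rank of a matrix (Roy & Vetterli, 2007)."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / (s.sum() + eps)
    p = p[p > eps]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(0)
d = 20
M = rng.standard_normal((d, 3)) @ rng.standard_normal((3, d)) / d   # rank-3 target
W1 = 1e-3 * rng.standard_normal((d, d))
W2 = 1e-3 * rng.standard_normal((d, d))
lr = 0.1
for t in range(30001):
    E = W2 @ W1 - M
    G1, G2 = W2.T @ E, E @ W1.T
    W1, W2 = W1 - lr * G1, W2 - lr * G2
    if t % 10000 == 0:
        print(t, round(effective_rank(W2 @ W1), 2))
```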
no code implementations • 29 Oct 2020 • Arash Behboodi, Holger Rauhut, Ekkehard Schnoor
The neural networks in this class are obtained by unfolding iterations of ISTA and learning some of the weights.
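For reference, the classical ISTA iteration that these networks unfold; in the learned version some of the fixed matrices and the threshold below become trainable weights.

```python
import numpy as np

def ista(y, A, lam=0.1, iters=200):
    """Classical ISTA for 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - (A.T @ (A @ x - y)) / L    # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```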
no code implementations • 15 Jun 2020 • Yuege Xie, Hung-Hsu Chou, Holger Rauhut, Rachel Ward
Motivated by surprisingly good generalization properties of learned deep neural networks in overparameterized scenarios and by the related double descent phenomenon, this paper analyzes the relation between smoothness and low generalization error in an overparameterized linear learning problem.
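A hedged sketch of the related double descent phenomenon in minimum-norm linear regression: the test error typically spikes near the interpolation threshold p = n and decreases again as p grows. The data model below is an illustrative assumption, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_true = 40, 10
beta = rng.standard_normal(p_true)

def test_error(p, trials=50):
    errs = []
    for _ in range(trials):
        X = rng.standard_normal((n, p))              # p features, first p_true informative
        y = X[:, :p_true] @ beta + 0.5 * rng.standard_normal(n)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)    # minimum-norm least squares
        Xt = rng.standard_normal((500, p))
        errs.append(np.mean((Xt @ w - Xt[:, :p_true] @ beta) ** 2))
    return float(np.mean(errs))

for p in [10, 20, 35, 40, 45, 80, 200]:              # interpolation threshold at p = n = 40
    print(p, round(test_error(p), 2))
```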
no code implementations • 12 Oct 2019 • Bubacarr Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg
We study the convergence of gradient flows related to learning deep linear neural networks (where the activation function is the identity map) from data.
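A minimal sketch of such a gradient flow, discretized with a small explicit Euler step, for a two-layer linear network; the paper treats general depth and characterizes convergence to global minimizers, while the dimensions and step size here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
M = rng.standard_normal((d, d))          # target linear map (identity activation)
W1 = 0.1 * rng.standard_normal((d, d))
W2 = 0.1 * rng.standard_normal((d, d))

# explicit Euler discretization of the gradient flow
# dW1/dt = -W2^T (W2 W1 - M),  dW2/dt = -(W2 W1 - M) W1^T
dt = 0.01
for _ in range(50000):
    E = W2 @ W1 - M
    G1, G2 = W2.T @ E, E @ W1.T
    W1, W2 = W1 - dt * G1, W2 - dt * G2
print("final loss:", 0.5 * np.linalg.norm(W2 @ W1 - M) ** 2)
```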