Search Results for author: Holger Rauhut

Found 14 papers, 1 paper with code

Uncertainty quantification for learned ISTA

no code implementations • 14 Sep 2023 • Frederik Hoppe, Claudio Mayrink Verdun, Felix Krahmer, Hannah Laus, Holger Rauhut

Model-based deep learning solutions to inverse problems have attracted increasing attention in recent years as they bridge state-of-the-art numerical performance with interpretability.

Uncertainty Quantification

Robust Implicit Regularization via Weight Normalization

no code implementations • 9 May 2023 • Hung-Hsu Chou, Holger Rauhut, Rachel Ward

By analyzing key invariants of the gradient flow and using the Lojasiewicz theorem, we show that weight normalization also has an implicit bias towards sparse solutions in the diagonal linear model, but that, in contrast to plain gradient flow, weight normalization enables a robust bias that persists even if the weights are initialized at practically large scale.
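
For context, here is a minimal numpy sketch of the standard weight-normalization reparameterization w = g · v/||v|| (Salimans & Kingma) trained by gradient descent on a least-squares problem with a sparse ground truth. This is a generic illustration only, not the authors' diagonal-model analysis; all variable names and problem sizes are illustrative.

```python
import numpy as np

# Weight normalization: reparameterize w = g * v / ||v||_2 and run
# gradient descent on 0.5/n * ||X w - y||^2 with a sparse ground truth.
rng = np.random.default_rng(0)
n, d = 50, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:3] = [2.0, -1.0, 0.5]   # sparse ground truth
y = X @ w_true

g = 1.0                       # scale parameter
v = rng.standard_normal(d)    # direction parameter
lr = 0.05

for _ in range(5000):
    norm_v = np.linalg.norm(v)
    w = g * v / norm_v
    grad_w = X.T @ (X @ w - y) / n                 # gradient w.r.t. w
    # chain rule through the reparameterization w = g * v / ||v||
    grad_g = grad_w @ v / norm_v
    grad_v = g / norm_v * (grad_w - (grad_w @ v) * v / norm_v**2)
    g -= lr * grad_g
    v -= lr * grad_v

print("recovered w:", np.round(g * v / np.linalg.norm(v), 3))
```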

More is Less: Inducing Sparsity via Overparameterization

no code implementations • 21 Dec 2021 • Hung-Hsu Chou, Johannes Maly, Holger Rauhut

In deep learning it is common to overparameterize neural networks, that is, to use more parameters than training samples.
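
As a concrete illustration of how overparameterization can induce sparsity, here is a minimal numpy sketch of a Hadamard-product reparameterization w = u ⊙ v trained by gradient descent from small initialization on an underdetermined least-squares problem. This is a common construction in this line of work, but the exact factorization, initialization, and problem sizes here are assumptions, not taken from the paper.

```python
import numpy as np

# Hadamard-product overparameterization: w = u * v (elementwise), trained by
# gradient descent from small initialization on an underdetermined problem.
rng = np.random.default_rng(1)
n, d = 40, 100                        # fewer samples than parameters
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d); x_true[rng.choice(d, 5, replace=False)] = 1.0
y = A @ x_true

alpha = 1e-3                          # small initialization scale
u = alpha * np.ones(d)
v = alpha * np.ones(d)
lr = 0.05

for _ in range(20000):
    g = A.T @ (A @ (u * v) - y)       # gradient w.r.t. w = u * v
    u, v = u - lr * g * v, v - lr * g * u

w = u * v
print("nonzeros found:", int(np.sum(np.abs(w) > 1e-2)), "of", int(np.sum(x_true != 0)))
```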

Compressive Sensing

Generalization Error Bounds for Iterative Recovery Algorithms Unfolded as Neural Networks

no code implementations • 8 Dec 2021 • Ekkehard Schnoor, Arash Behboodi, Holger Rauhut

Motivated by the learned iterative soft thresholding algorithm (LISTA), we introduce a general class of neural networks suitable for sparse reconstruction from few linear measurements.
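
As background on the kind of network the entry builds on, here is a minimal numpy sketch of the plain (un-learned) ISTA iteration that LISTA unrolls into layers, where the matrices and thresholds would become learnable per layer. This is an illustrative sketch of the classical algorithm, not the authors' architecture.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    LISTA unrolls a fixed number of such iterations into network layers
    and learns the matrices/thresholds from data."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 80)) / np.sqrt(30)
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.round(ista(A, y, lam=0.05)[[3, 17, 42]], 2))
```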

Dictionary Learning • Generalization Bounds

Spark Deficient Gabor Frames for Inverse Problems

no code implementations • 13 Oct 2021 • Vasiliki Kouni, Holger Rauhut

In this paper, we apply the star-Digital Gabor Transform to analysis Compressed Sensing and speech denoising.
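
The star-DGT and the spark-deficient frame construction are specific to the paper, but as a rough illustration of Gabor-domain (time-frequency) processing for speech denoising, here is a generic STFT soft-thresholding sketch using scipy. It is purely illustrative; the signal, window, and threshold are assumptions and have nothing to do with the paper's transform.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(3)
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)   # toy "speech-like" signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)

# Analyze with a short-time Fourier (Gabor-like) transform, shrink small
# coefficients, and synthesize back -- a crude time-frequency denoiser.
f, frames, Z = stft(noisy, fs=fs, nperseg=256)
tau = 0.05
Z_den = Z * np.maximum(1 - tau / (np.abs(Z) + 1e-12), 0.0)   # complex soft threshold
_, denoised = istft(Z_den, fs=fs, nperseg=256)

print("error before:", round(float(np.linalg.norm(noisy - clean)), 2),
      "after:", round(float(np.linalg.norm(denoised[:clean.size] - clean)), 2))
```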

Denoising • Speech Denoising

ADMM-DAD net: a deep unfolding network for analysis compressed sensing

1 code implementation • 13 Oct 2021 • Vasiliki Kouni, Georgios Paraskevopoulos, Holger Rauhut, George C. Alexandropoulos

In this paper, we propose a new deep unfolding neural network based on the ADMM algorithm for analysis Compressed Sensing.
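
For reference, here is a minimal numpy sketch of the classical (un-learned) ADMM iteration for an analysis-sparsity problem, min_x 0.5‖Ax − y‖² + λ‖Φx‖₁, which is the type of scheme such unfolding networks turn into trainable layers. This is an illustrative sketch of standard ADMM; the parameter names and problem sizes are mine, not the paper's.

```python
import numpy as np

def admm_analysis_l1(A, Phi, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||Ax - y||^2 + lam*||Phi x||_1 with splitting z = Phi x."""
    x = np.zeros(A.shape[1])
    z = np.zeros(Phi.shape[0])
    u = np.zeros(Phi.shape[0])            # scaled dual variable
    M = A.T @ A + rho * Phi.T @ Phi       # x-update solves this fixed linear system
    for _ in range(n_iter):
        x = np.linalg.solve(M, A.T @ y + rho * Phi.T @ (z - u))
        v = Phi @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # soft threshold
        u = u + Phi @ x - z
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((40, 60)) / np.sqrt(40)
Phi = rng.standard_normal((80, 60)) / np.sqrt(80)    # redundant analysis operator
x0 = rng.standard_normal(60)
y = A @ x0
x_hat = admm_analysis_l1(A, Phi, y)
print("residual:", round(float(np.linalg.norm(A @ x_hat - y)), 3))
```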

Path classification by stochastic linear recurrent neural networks

no code implementations • 6 Aug 2021 • Wiebke Bartolomaeus, Youness Boutaib, Sandra Nestler, Holger Rauhut

We investigate the functioning of a classifying biological neural network from the perspective of statistical learning theory, modelled, in a simplified setting, as a continuous-time stochastic recurrent neural network (RNN) with identity activation function.
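
To make the model concrete, here is a minimal numpy sketch of an Euler-discretized continuous-time linear (identity-activation) RNN driven by an input path plus noise, with a linear readout of the final hidden state used for classification. The discretization, dimensions, and toy paths are assumptions for illustration, not the authors' exact setup.

```python
import numpy as np

def linear_rnn_readout(path, W, U, w_out, dt=0.01, noise_std=0.1, rng=None):
    """Euler discretization of dh = (W h + U x(t)) dt + noise_std dB with
    identity activation; classify by the sign of a linear readout of h(T)."""
    rng = rng or np.random.default_rng(0)
    h = np.zeros(W.shape[0])
    for x_t in path:                       # path: sequence of input vectors
        h = h + dt * (W @ h + U @ x_t) + noise_std * np.sqrt(dt) * rng.standard_normal(h.shape)
    return np.sign(w_out @ h)              # +1 / -1 class label

rng = np.random.default_rng(5)
hidden, inp, T = 8, 2, 100
W = -np.eye(hidden) + 0.1 * rng.standard_normal((hidden, hidden))  # stable-ish dynamics
U = rng.standard_normal((hidden, inp))
w_out = rng.standard_normal(hidden)

# Two toy input paths: an increasing ramp vs. a decreasing ramp.
ramp_up = np.stack([np.linspace(0, 1, T), np.zeros(T)], axis=1)
ramp_down = ramp_up[::-1]
print(linear_rnn_readout(ramp_up, W, U, w_out), linear_rnn_readout(ramp_down, W, U, w_out))
```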

Classification • Learning Theory

Convergence of gradient descent for learning linear neural networks

no code implementations • 4 Aug 2021 • Gabin Maxime Nguegnang, Holger Rauhut, Ulrich Terstiege

In the case of three or more layers, we show that gradient descent converges to a global minimum on the manifold of matrices of some fixed rank, where the rank cannot be determined a priori.

Gradient Descent for Deep Matrix Factorization: Dynamics and Implicit Bias towards Low Rank

no code implementations • 27 Nov 2020 • Hung-Hsu Chou, Carsten Gieshoff, Johannes Maly, Holger Rauhut

This suggests that deep learning prefers trajectories whose complexity (measured in terms of effective rank) is monotonically increasing, which we believe is a fundamental concept for the theoretical understanding of deep learning.
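
To make the "effective rank along the trajectory" idea concrete, here is a small numpy sketch: gradient descent on a two-layer matrix factorization W2·W1 fitted to a low-rank target, tracking the effective rank (entropy of the normalized singular values, one common definition) of the end-to-end matrix. The depth, loss, initialization, and effective-rank definition are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def effective_rank(M):
    """exp(Shannon entropy of the normalized singular values)."""
    s = np.linalg.svd(M, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-np.sum(p * np.log(p + 1e-12))))

rng = np.random.default_rng(6)
n, r = 20, 3
target = (rng.standard_normal((n, r)) @ rng.standard_normal((r, n))) / n  # rank-3 target

scale = 1e-3                                  # small initialization
W1 = scale * rng.standard_normal((n, n))
W2 = scale * rng.standard_normal((n, n))
lr = 0.05

for step in range(4001):
    E = W2 @ W1 - target                      # gradient of 0.5*||W2 W1 - target||_F^2
    W1, W2 = W1 - lr * W2.T @ E, W2 - lr * E @ W1.T
    if step % 1000 == 0:
        print(step, "effective rank:", round(effective_rank(W2 @ W1), 2))
```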

Denoising

Overparameterization and generalization error: weighted trigonometric interpolation

no code implementations • 15 Jun 2020 • Yuege Xie, Hung-Hsu Chou, Holger Rauhut, Rachel Ward

Motivated by surprisingly good generalization properties of learned deep neural networks in overparameterized scenarios and by the related double descent phenomenon, this paper analyzes the relation between smoothness and low generalization error in an overparameterized linear learning problem.
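
As a concrete instance of the kind of overparameterized linear problem in question, here is a minimal numpy sketch of trigonometric interpolation with more Fourier features than samples, where the interpolant is the minimum weighted-norm solution and the weights penalize higher frequencies. This is a generic illustration of weighted minimum-norm interpolation; the specific weighting and problem sizes are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
n, K = 10, 40                                   # n samples, 2K+1 > n Fourier features
t = np.sort(rng.uniform(0, 2 * np.pi, n))
y = np.sin(t) + 0.5 * np.cos(3 * t)             # smooth target sampled at t

freqs = np.arange(-K, K + 1)
Phi = np.exp(1j * np.outer(t, freqs))           # n x (2K+1) trigonometric features

# Minimum weighted-norm interpolant: minimize ||W c||_2 subject to Phi c = y,
# with weights growing with frequency so smooth (low-frequency) fits are preferred.
w = 1.0 + np.abs(freqs)
c = (1 / w**2) * (Phi.conj().T @ np.linalg.solve(
    Phi @ np.diag(1 / w**2) @ Phi.conj().T, y.astype(complex)))

t_test = np.linspace(0, 2 * np.pi, 200)
y_hat = np.real(np.exp(1j * np.outer(t_test, freqs)) @ c)
y_test = np.sin(t_test) + 0.5 * np.cos(3 * t_test)
print("max error at samples:", float(np.max(np.abs(np.real(Phi @ c) - y))),
      "| max error on fine grid:", float(np.max(np.abs(y_hat - y_test))))
```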

Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers

no code implementations • 12 Oct 2019 • Bubacarr Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg

We study the convergence of gradient flows related to learning deep linear neural networks (where the activation function is the identity map) from data.
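
For concreteness, here is a minimal numpy sketch of the object under study: a deep linear network x ↦ W_N ··· W_1 x trained on squared loss, here via an explicit Euler discretization of the gradient flow. The discretization, near-identity initialization, and problem sizes are illustrative assumptions; the paper analyzes the continuous-time flow itself.

```python
import numpy as np

rng = np.random.default_rng(8)
d, N = 5, 3                                    # width and depth (N layers)
X = rng.standard_normal((d, 200))
W_true = rng.standard_normal((d, d))
Y = W_true @ X

def end_to_end(mats):
    """Product W_N @ ... @ W_1 (identity activation throughout)."""
    P = np.eye(d)
    for Wi in mats:
        P = Wi @ P
    return P

def loss(W):
    return float(np.linalg.norm(end_to_end(W) @ X - Y) ** 2 / (2 * X.shape[1]))

W = [np.eye(d) + 0.01 * rng.standard_normal((d, d)) for _ in range(N)]
dt = 0.01                                      # Euler step of the gradient flow
print("initial loss:", round(loss(W), 4))

for _ in range(20000):
    E = (end_to_end(W) @ X - Y) @ X.T / X.shape[1]        # dLoss/d(product)
    W = [W[i] - dt * end_to_end(W[i + 1:]).T @ E @ end_to_end(W[:i]).T
         for i in range(N)]

print("final loss:", round(loss(W), 4))
```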
