Search Results for author: Timo Klock

Found 10 papers, 3 papers with code

Gradient is All You Need?

2 code implementations • 16 Jun 2023 • Konstantin Riedl, Timo Klock, Carina Geldhauser, Massimo Fornasier

The fundamental value of such a link between CBO and SGD lies in the fact that CBO is provably globally convergent to global minimizers for ample classes of nonsmooth and nonconvex objective functions, thus offering a novel explanation for the success of stochastic relaxations of gradient descent.
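The following is a minimal numpy sketch of consensus-based optimization (CBO), the particle scheme whose link to SGD the paper analyzes: particles drift toward a Gibbs-weighted consensus point and diffuse proportionally to their distance from it. All parameter values and the test objective are illustrative choices, not the paper's.

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=200, steps=2000,
                 lam=1.0, sigma=0.7, alpha=50.0, dt=0.01, seed=0):
    """Minimal consensus-based optimization (CBO) sketch.

    Particles drift toward a Gibbs-weighted consensus point v and
    diffuse with strength proportional to their distance from v.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3, 3, size=(n_particles, dim))
    for _ in range(steps):
        vals = f(X)
        w = np.exp(-alpha * (vals - vals.min()))   # Gibbs weights (shifted for stability)
        v = (w[:, None] * X).sum(0) / w.sum()      # consensus point
        diff = X - v
        noise = rng.standard_normal(X.shape)
        X = X - lam * diff * dt \
            + sigma * np.linalg.norm(diff, axis=1, keepdims=True) * noise * np.sqrt(dt)
    return v

# Rastrigin-like nonconvex test function with global minimum at the origin
rastrigin = lambda X: (X**2 - np.cos(2 * np.pi * X)).sum(axis=1) + X.shape[1]
x_star = cbo_minimize(rastrigin, dim=2)
```

Note that, unlike gradient descent, the update uses only evaluations of `f`, which is what makes the scheme applicable to nonsmooth objectives.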

Finite Sample Identification of Wide Shallow Neural Networks with Biases

no code implementations • 8 Nov 2022 • Massimo Fornasier, Timo Klock, Marco Mondelli, Michael Rauchensteiner

Artificial neural networks are functions depending on a finite number of parameters typically encoded as weights and biases.

Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders

no code implementations • 22 Aug 2022 • Stefan C. Schonsheck, Scott Mahan, Timo Klock, Alexander Cloninger, Rongjie Lai

Our numerical experiments on synthetic and real-world data verify that the proposed model can effectively handle data lying on nearby but disjoint manifolds of different classes, on overlapping manifolds, and on manifolds with non-trivial topology.

Representation Learning

Landscape analysis of an improved power method for tensor decomposition

no code implementations • NeurIPS 2021 • Joe Kileel, Timo Klock, João M. Pereira

In this work, we consider the optimization formulation for symmetric tensor decomposition recently introduced in the Subspace Power Method (SPM) of Kileel and Pereira.

Tensor Decomposition
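As background for the entry above, here is a hedged numpy sketch of the classical symmetric tensor power iteration that the Subspace Power Method builds on and improves; the SPM subspace step and rank-one deflation are omitted, and the example tensor is illustrative.

```python
import numpy as np

def tensor_power_iteration(T, iters=100, seed=0):
    """Classical power iteration for a symmetric 3rd-order tensor:
    repeatedly apply x <- T(x, x, .) / ||T(x, x, .)||."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = np.einsum('ijk,j,k->i', T, x, x)     # contract two modes with x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)   # Rayleigh-type eigenvalue
    return lam, x

# Build T = 2 * a (x) a (x) a for a unit vector a and recover a
a = np.array([0.6, 0.8, 0.0])
T = 2.0 * np.einsum('i,j,k->ijk', a, a, a)
lam, x = tensor_power_iteration(T)
```

For this rank-one tensor the iteration recovers the component `a` and the weight 2 exactly; the landscape questions the paper studies concern what happens for higher ranks, where spurious critical points can appear.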

Stable Recovery of Entangled Weights: Towards Robust Identification of Deep Neural Networks from Minimal Samples

no code implementations • 18 Jan 2021 • Christian Fiedler, Massimo Fornasier, Timo Klock, Michael Rauchensteiner

In this paper we approach the problem of unique and stable identifiability of generic deep artificial neural networks with pyramidal shape and smooth activation functions from a finite number of input-output samples.

A deep network construction that adapts to intrinsic dimensionality beyond the domain

no code implementations • 6 Aug 2020 • Alexander Cloninger, Timo Klock

We study the approximation of two-layer compositions $f(x) = g(\phi(x))$ via deep networks with ReLU activation, where $\phi$ is a geometrically intuitive, dimensionality reducing feature map.
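A small numpy sketch of the composition structure studied above: a univariate link $g$ applied after a dimension-reducing feature map $\phi$, with $g$ realized exactly by a one-hidden-layer ReLU piecewise-linear interpolant. The choices $\phi = \|\cdot\|$, $g = \sin$, and all constants are illustrative, not the paper's construction; the point is that the ReLU expansion's size tracks the one-dimensional target, not the ambient dimension.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def pwl_relu_approx(g, knots):
    """Piecewise-linear interpolation of a univariate g on given knots,
    written as a one-hidden-layer ReLU expansion
    h(t) = g(t_0) + sum_i c_i * relu(t - t_i)."""
    t = np.asarray(knots, dtype=float)
    vals = g(t)
    slopes = np.diff(vals) / np.diff(t)        # slope on each interval
    coeffs = np.diff(slopes, prepend=0.0)      # slope changes at the knots
    def h(s):
        s = np.asarray(s, dtype=float)
        return vals[0] + relu(s[..., None] - t[:-1]).dot(coeffs)
    return h

# Illustrative dimension-reducing feature map (not the paper's):
phi = lambda X: np.linalg.norm(X, axis=-1)     # R^d -> [0, inf)
g = np.sin                                     # univariate link

h = pwl_relu_approx(g, knots=np.linspace(0.0, 7.0, 400))
X = np.random.default_rng(1).uniform(-2, 2, size=(500, 10))
err = np.max(np.abs(h(phi(X)) - g(phi(X))))    # uniform error on the samples
```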

Estimating covariance and precision matrices along subspaces

no code implementations • 26 Sep 2019 • Zeljko Kereta, Timo Klock

We also show that estimation of precision matrices is almost independent of the condition number of the covariance matrix.

Dimensionality Reduction

Robust and Resource Efficient Identification of Two Hidden Layer Neural Networks

no code implementations • 30 Jun 2019 • Massimo Fornasier, Timo Klock, Michael Rauchensteiner

Gathering several approximate Hessians allows one to reliably approximate the matrix subspace $\mathcal W$ spanned by the symmetric tensors $a_1 \otimes a_1, \dots, a_{m_0} \otimes a_{m_0}$ formed by the weights of the first layer, together with the entangled symmetric tensors $v_1 \otimes v_1, \dots, v_{m_1} \otimes v_{m_1}$ formed by suitable combinations of the weights of the first and second layers, $v_\ell = A G_0 b_\ell / \|A G_0 b_\ell\|_2$ for $\ell \in [m_1]$, where $G_0$ is a diagonal matrix depending on the activation functions of the first layer.

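The paper treats two hidden layers; a minimal single-hidden-layer analogue of the Hessian-span idea is easy to sketch. For $f(x) = \sum_i b_i \sigma(a_i \cdot x)$, every Hessian $\nabla^2 f(x) = \sum_i b_i \sigma''(a_i \cdot x)\, a_i a_i^\top$ lies in $\mathrm{span}\{a_i a_i^\top\}$, so stacking vectorized Hessians and taking an SVD recovers that subspace. All names and constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 3                                    # input dimension, hidden width
A = rng.standard_normal((d, m))
A /= np.linalg.norm(A, axis=0)                 # unit first-layer weights a_1..a_m
b = rng.uniform(0.5, 1.5, m)

d2_tanh = lambda z: -2 * np.tanh(z) * (1 - np.tanh(z)**2)   # sigma'' for sigma = tanh

def hessian(x):
    """Analytic Hessian of f(x) = sum_i b_i tanh(a_i . x):
    sum_i b_i sigma''(a_i . x) a_i a_i^T, an element of span{a_i a_i^T}."""
    z = A.T @ x
    return (A * (b * d2_tanh(z))) @ A.T

# Stack vectorized Hessians at random inputs and extract their span
H = np.stack([hessian(rng.standard_normal(d)).ravel() for _ in range(50)])
_, s, Vt = np.linalg.svd(H, full_matrices=False)
rank = int((s > 1e-8 * s[0]).sum())            # numerical rank, equal to m here
W_hat = Vt[:rank]                              # orthonormal basis of the span
```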

Nonlinear generalization of the monotone single index model

1 code implementation • 24 Feb 2019 • Zeljko Kereta, Timo Klock, Valeriya Naumova

This paper deals with a nonlinear generalization of this framework to allow for a regressor that uses multiple index vectors, adapting to local changes in the responses.

Regression
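As background for the single-index setting the entry above generalizes (and not the paper's estimator): for a Gaussian design, the classical Brillinger/Stein observation says $\mathbb E[y\,x] \propto a$ when $y = g(a \cdot x)$, so a single index vector can be estimated in one line. The link, noise level, and dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20000, 8
a = np.zeros(d); a[0], a[1] = 0.6, 0.8         # unit index vector
g = lambda t: np.tanh(3 * t)                   # monotone link function

X = rng.standard_normal((n, d))                # Gaussian design
y = g(X @ a) + 0.05 * rng.standard_normal(n)

# Brillinger/Stein: for Gaussian X, E[y * x] is proportional to a
a_hat = X.T @ y / n
a_hat /= np.linalg.norm(a_hat)
align = abs(a_hat @ a)                          # alignment with the true index
```

The paper's multi-index generalization replaces the single global direction with locally adapted index vectors, which this one-line estimator cannot capture.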

Adaptive multi-penalty regularization based on a generalized Lasso path

1 code implementation • 11 Oct 2017 • Markus Grasmair, Timo Klock, Valeriya Naumova

Another advantage of our algorithm is that it provides an overview of the solution's stability over the whole range of parameters.

Model Selection
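The generalized Lasso path of the paper is algorithm-specific, but the idea of tracing solutions over a parameter grid can be sketched for the ordinary Lasso with ISTA and warm starts; all names and constants below are illustrative, not the paper's method.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_path(X, y, lambdas, iters=500):
    """Solve min_w 0.5/n ||Xw - y||^2 + lam ||w||_1 by ISTA for each lam
    on a decreasing grid, warm-starting from the previous solution."""
    n, d = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    w = np.zeros(d)
    path = []
    for lam in lambdas:
        for _ in range(iters):
            grad = X.T @ (X @ w - y) / n
            w = soft_threshold(w - grad / L, lam / L)
        path.append(w.copy())
    return np.array(path)                      # shape (len(lambdas), d)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
w_true = np.zeros(10); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)

lambdas = np.logspace(0, -3, 20)               # decreasing regularization grid
path = lasso_path(X, y, lambdas)
```

Inspecting how the support of `path` changes along the grid is exactly the kind of stability overview the abstract refers to, here for a single penalty rather than the multi-penalty setting of the paper.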
