1 code implementation • 30 Oct 2023 • Massimo Fornasier, Pascal Heid, Giacomo Enrico Sodini
In this study, we delve into the challenging problem of the numerical approximation of Sobolev-smooth functions defined on probability spaces.
no code implementations • 5 Jul 2023 • Cristina Cipriani, Massimo Fornasier, Alessandro Scagliotti
The connection between Residual Neural Networks (ResNets) and continuous-time control systems (known as NeurODEs) has led to a mathematical analysis of neural networks which has provided interesting results of both theoretical and practical significance.
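The ResNet–NeurODE correspondence can be sketched in a few lines: a residual block is exactly one explicit-Euler step of a controlled ODE. The tanh layers and step size below are illustrative choices, not the paper's construction.

```python
import numpy as np

def neurode_forward(x, weights, biases, h=0.1):
    """Explicit-Euler pass through the NeurODE x'(t) = tanh(W(t) x + b(t)):
    each residual update x <- x + h * tanh(W x + b) is one ResNet block."""
    for W, b in zip(weights, biases):
        x = x + h * np.tanh(W @ x + b)  # residual block = Euler step
    return x

# with zero parameters every block is the identity, since tanh(0) = 0
x0 = np.array([1.0, -2.0])
out = neurode_forward(x0, [np.zeros((2, 2))] * 3, [np.zeros(2)] * 3)
```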
2 code implementations • 16 Jun 2023 • Konstantin Riedl, Timo Klock, Carina Geldhauser, Massimo Fornasier
The fundamental value of such a link between CBO and SGD lies in the fact that CBO is provably globally convergent to global minimizers for ample classes of nonsmooth and nonconvex objective functions, hence, on the one hand, offering a novel explanation for the success of stochastic relaxations of gradient descent.
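A minimal sketch of the CBO dynamics referred to here: particles drift toward a Gibbs-weighted consensus point and diffuse component-wise (anisotropically) in proportion to their distance from it. All parameter values and the test objective are illustrative choices, not taken from the paper.

```python
import numpy as np

def cbo_minimize(f, dim=2, n_particles=100, steps=500,
                 dt=0.01, lam=1.0, sigma=1.0, alpha=50.0, seed=0):
    """Toy consensus-based optimization (CBO) iteration."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    for _ in range(steps):
        fx = f(X)
        w = np.exp(-alpha * (fx - fx.min()))        # stabilized Gibbs weights
        v = (w[:, None] * X).sum(axis=0) / w.sum()  # consensus point
        diff = X - v
        X = (X - lam * diff * dt
             + sigma * np.abs(diff) * np.sqrt(dt) * rng.standard_normal(X.shape))
    return v

# illustrative nonconvex objective: a mildly rugged quadratic, global minimizer (1, 1)
def f(X):
    Y = X - 1.0
    return np.sum(Y**2, axis=-1) + 0.5 * np.sum(1.0 - np.cos(2.0 * np.pi * Y), axis=-1)

x_star = cbo_minimize(f)
```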
no code implementations • 8 Nov 2022 • Massimo Fornasier, Timo Klock, Marco Mondelli, Michael Rauchensteiner
Artificial neural networks are functions depending on a finite number of parameters typically encoded as weights and biases.
no code implementations • 9 Mar 2021 • Mauro Bonafini, Massimo Fornasier, Bernhard Schmitzer
We prove convergence of minimizing solutions obtained from a finite number of observations to a mean field limit and the minimal value provides a quantitative error bound on the data-driven evolutions.
Optimization and Control
no code implementations • 18 Jan 2021 • Christian Fiedler, Massimo Fornasier, Timo Klock, Michael Rauchensteiner
In this paper we approach the problem of unique and stable identifiability of generic deep artificial neural networks with pyramidal shape and smooth activation functions from a finite number of input-output samples.
1 code implementation • 31 Jan 2020 • Massimo Fornasier, Hui Huang, Lorenzo Pareschi, Philippe Sünnen
To quantify the performance of the new approach, we show that the algorithm performs essentially as well as ad hoc state-of-the-art methods on challenging problems in signal processing and machine learning, namely phase retrieval and robust subspace detection.
no code implementations • 31 Jan 2020 • Massimo Fornasier, Hui Huang, Lorenzo Pareschi, Philippe Sünnen
We introduce a new stochastic differential model for global optimization of nonconvex functions on compact hypersurfaces.
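A rough sketch of such constrained dynamics, using a plain projection after each Euler step as a stand-in for the intrinsic stochastic dynamics of the model; all parameter values and the test objective are illustrative.

```python
import numpy as np

def cbo_sphere(f, dim=3, n=200, steps=400, dt=0.05,
               lam=1.0, sigma=0.25, alpha=30.0, seed=1):
    """Consensus-based optimization constrained to the unit sphere:
    Euler step of the free dynamics, then projection back onto the sphere."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, dim))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(steps):
        fx = f(X)
        w = np.exp(-alpha * (fx - fx.min()))
        v = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - v
        X = (X - lam * diff * dt
             + sigma * np.linalg.norm(diff, axis=1, keepdims=True)
               * np.sqrt(dt) * rng.standard_normal(X.shape))
        X /= np.linalg.norm(X, axis=1, keepdims=True)  # project back
    return v / np.linalg.norm(v)

# minimize f(x) = -x . a on the sphere; the global minimizer is a itself
a = np.array([1.0, 0.0, 0.0])
x_star = cbo_sphere(lambda X: -X @ a)
```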
no code implementations • 1 Nov 2019 • Stefano Almi, Massimo Fornasier, Richard Huber
In this paper we are concerned with the learnability of energies from data obtained by observing time evolutions of their critical points starting at random initial equilibria.
no code implementations • 30 Jun 2019 • Massimo Fornasier, Timo Klock, Michael Rauchensteiner
Gathering several approximate Hessians makes it possible to reliably approximate the matrix subspace $\mathcal W$ spanned by symmetric tensors $a_1 \otimes a_1 ,\dots, a_{m_0}\otimes a_{m_0}$ formed by weights of the first layer together with the entangled symmetric tensors $v_1 \otimes v_1 ,\dots, v_{m_1}\otimes v_{m_1}$, formed by suitable combinations of the weights of the first and second layer as $v_\ell=A G_0 b_\ell/\|A G_0 b_\ell\|_2$, $\ell \in [m_1]$, for a diagonal matrix $G_0$ depending on the activation functions of the first layer.
1 code implementation • 10 May 2018 • Luigi Ambrosio, Massimo Fornasier, Marco Morandotti, Giuseppe Savaré
We introduce and study a mean-field model for a system of spatially distributed players interacting through an evolutionary game driven by a replicator dynamics.
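The replicator dynamics driving the game can be sketched in its simplest, non-spatial form: a toy two-strategy example with a hypothetical payoff matrix, omitting the spatial mean-field structure of the model.

```python
import numpy as np

def replicator_step(x, P, dt=0.01):
    """Euler step of the replicator ODE  x_i' = x_i ((P x)_i - x . P x)."""
    fitness = P @ x
    return x + dt * x * (fitness - x @ fitness)

# anti-coordination payoffs: the mixed state (1/2, 1/2) is the globally
# attracting equilibrium on the simplex
P = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([0.9, 0.1])
for _ in range(2000):
    x = replicator_step(x, P)
```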
Optimization and Control • Dynamical Systems • Functional Analysis • 91A22, 37C10, 47J35, 58D25, 35Q91
no code implementations • 4 Apr 2018 • Massimo Fornasier, Jan Vybíral, Ingrid Daubechies
In the case of the shallowest feed-forward neural network, second-order differentiation and tensors of order two (i.e., matrices) suffice, as we prove in this paper.
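The role of second-order information can be illustrated with a toy shallow network (illustrative architecture and finite-difference scheme, not the paper's algorithm): every Hessian of f(x) = Σ_i tanh(a_i · x) lies in the matrix span of the rank-one tensors a_i ⊗ a_i, so an SVD of a few sampled Hessians recovers that span.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 6, 3
A = rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)  # unit-norm first-layer weights

f = lambda x: np.tanh(A @ x).sum()             # toy shallow network

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function."""
    d = x.size
    H = np.zeros((d, d))
    I = np.eye(d)
    for i in range(d):
        for j in range(d):
            ei, ej = h * I[i], h * I[j]
            H[i, j] = (f(x+ei+ej) - f(x+ei-ej) - f(x-ei+ej) + f(x-ei-ej)) / (4*h*h)
    return H

# stack vectorized Hessians at random points, extract the dominant subspace
Hs = np.stack([hessian_fd(f, rng.standard_normal(d)).ravel() for _ in range(20)])
W = np.linalg.svd(Hs)[2][:m]   # orthonormal basis of the recovered matrix span

# each a_i a_i^T should lie (numerically) in the recovered subspace
for a in A:
    t = np.outer(a, a).ravel()
    assert np.linalg.norm(t - W.T @ (W @ t)) < 1e-2
```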
no code implementations • 18 Jan 2018 • Massimo Fornasier, Johannes Maly, Valeriya Naumova
By adapting the concept of restricted isometry property from compressed sensing to our novel model class, we prove error bounds between global minimizers and ground truth, up to noise level, from a number of subgaussian measurements scaling as $R(s_1+s_2)$, up to log-factors in the dimension, and relative-to-diameter distortion.
Numerical Analysis
no code implementations • 18 Aug 2010 • Massimo Fornasier, Karin Schnass, Jan Vybiral
Under certain smoothness and variation assumptions on the function $g$, and an arbitrary choice of the matrix $A$, we present in this paper (1) a sampling choice of the points $\{x_i\}$ drawn at random for each function approximation, and (2) algorithms (Algorithm 1 and Algorithm 2) for computing the approximating function, whose complexity is at most polynomial in the dimension $d$ and in the number $m$ of points.
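While the paper's two algorithms are specific, the underlying principle can be sketched: gradients of f(x) = g(Ax) satisfy ∇f(x) = Aᵀ∇g(Ax), so finite-difference gradients at randomly drawn points span the row space of A. The g, A, and sampling below are made-up toy choices, not Algorithm 1 or 2.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 2
A = np.linalg.qr(rng.standard_normal((d, k)))[0].T  # k x d, orthonormal rows
g = lambda y: np.sin(y[..., 0]) + y[..., 1] ** 2    # toy link function
f = lambda x: g(x @ A.T)                            # ridge function f(x) = g(Ax)

def grad_fd(f, x, h=1e-5):
    """Central finite-difference gradient of a scalar function."""
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(x.size)])

# gradients at random points lie in the row space of A; SVD recovers it
G = np.stack([grad_fd(f, rng.standard_normal(d)) for _ in range(30)])
basis = np.linalg.svd(G)[2][:k]  # orthonormal basis of the active subspace

# each row of A should lie (numerically) in the recovered subspace
for a in A:
    assert np.linalg.norm(a - basis.T @ (basis @ a)) < 1e-3
```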