no code implementations • 4 Apr 2024 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning.
no code implementations • 6 Feb 2024 • Emanuele Zangrando, Piero Deidda, Simone Brugiapaglia, Nicola Guglielmi, Francesco Tudisco
Recent work in deep learning has shown strong empirical and theoretical evidence of an implicit low-rank bias: weight matrices in deep networks tend to be approximately low-rank. Removing relatively small singular values during training, or from available trained models, can significantly reduce model size while maintaining or even improving model performance.
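The singular-value truncation described above can be illustrated with a minimal numpy sketch. All dimensions, thresholds, and the synthetic "weight matrix" below are hypothetical choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weight matrix that is approximately low-rank: a rank-5
# component plus small noise, mimicking the implicit low-rank bias.
U = rng.standard_normal((64, 5))
V = rng.standard_normal((5, 64))
W = U @ V + 0.01 * rng.standard_normal((64, 64))

# Truncated SVD: keep only singular values above a relative threshold
# (the 5% cutoff here is an arbitrary illustrative choice).
u, s, vt = np.linalg.svd(W, full_matrices=False)
k = int(np.sum(s > 0.05 * s[0]))          # number of retained modes
W_low = (u[:, :k] * s[:k]) @ vt[:k, :]    # rank-k approximation

# Relative reconstruction error and storage ratio of the factored form.
rel_err = np.linalg.norm(W - W_low) / np.linalg.norm(W)
compression = (u[:, :k].size + k + vt[:k, :].size) / W.size
```

Because the noise is small, the truncated factorization stores a fraction of the original entries while incurring only a small relative error.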
no code implementations • 1 Feb 2024 • Nicola Rares Franco, Simone Brugiapaglia
In recent years, deep learning has gained increasing popularity in the fields of Partial Differential Equations (PDEs) and Reduced Order Modeling (ROM), providing domain practitioners with new powerful data-driven techniques such as Physics-Informed Neural Networks (PINNs), Neural Operators, Deep Operator Networks (DeepONets) and Deep-Learning based ROMs (DL-ROMs).
no code implementations • 8 Oct 2023 • Aaron Berk, Simone Brugiapaglia, Yaniv Plan, Matthew Scott, Xia Sheng, Ozgur Yilmaz
We study generative compressed sensing when the measurement matrix is randomly subsampled from a unitary matrix (with the DFT as an important special case).
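The measurement model in this entry, a matrix formed by randomly subsampling rows of a unitary matrix, with the DFT as the special case, can be sketched as follows. The dimensions and the sparse test signal are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64                     # ambient dimension, number of measurements

# Unitary DFT matrix; dividing by sqrt(n) makes F F^* = I.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
assert np.allclose(F @ F.conj().T, np.eye(n))

# Measurement matrix: m rows of F subsampled uniformly at random.
rows = rng.choice(n, size=m, replace=False)
A = F[rows, :]

# Measurements of a structured (here, sparse) signal x.
x = np.zeros(n)
x[[3, 50, 101]] = [1.0, -2.0, 0.5]
y = A @ x
```

In the generative setting studied in the paper, the signal class is the range of a generative network rather than the set of sparse vectors; the subsampled-unitary measurement process is the same.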
1 code implementation • 30 Jun 2023 • Giuseppe Alessio D'Inverno, Simone Brugiapaglia, Mirco Ravanelli
They are usually based on a message-passing mechanism and have gained increasing popularity for their intuitive formulation, which is closely linked to the Weisfeiler-Lehman (WL) test for graph isomorphism, to which they have been proven equivalent in expressive power.
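The 1-dimensional WL test (color refinement) that bounds the expressive power of message-passing GNNs can be sketched in a few lines. The function name and the example graphs below are illustrative, not from the paper:

```python
# Minimal sketch of 1-WL color refinement: nodes repeatedly update their
# color from the multiset of neighbor colors, and two graphs are declared
# "possibly isomorphic" if the final color histograms match.
def wl_colors(adj, rounds=3):
    """Return the sorted final coloring of a graph given as an adjacency dict."""
    n = len(adj)
    colors = [0] * n                       # start from a uniform coloring
    for _ in range(rounds):
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(n)
        ]
        # Injectively relabel signatures with small integers.
        palette = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [palette[sig] for sig in signatures]
    return sorted(colors)

# Classic failure case: a 6-cycle vs. two disjoint triangles are
# non-isomorphic but both 2-regular, so 1-WL cannot tell them apart.
cycle6 = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
wl_fails = wl_colors(cycle6) == wl_colors(two_triangles)

# A 6-node path has degree-1 endpoints, so 1-WL distinguishes it from the cycle.
path6 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
wl_distinguishes = wl_colors(cycle6) != wl_colors(path6)
```

A message-passing GNN can distinguish two graphs only if 1-WL does, which is why the cycle-vs-triangles pair is a standard counterexample for GNN expressivity.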
no code implementations • 18 Aug 2022 • Ben Adcock, Simone Brugiapaglia
We show that there is a least-squares approximation based on $m$ Monte Carlo samples whose error decays algebraically fast in $m/\log(m)$, with a rate that is the same as that of the best $n$-term polynomial approximation.
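A least-squares polynomial fit from i.i.d. Monte Carlo samples, the object of the result above, can be sketched as follows. The target function, basis (Legendre), sample count, and polynomial degree are illustrative choices, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# A smooth (analytic) target function on [-1, 1].
f = lambda t: np.exp(t) * np.cos(3 * t)

m, n = 200, 10                      # Monte Carlo samples, polynomial terms
t = rng.uniform(-1.0, 1.0, size=m)  # i.i.d. uniform sample points

# Least-squares fit in the Legendre basis: solve min ||V c - f(t)||_2.
V = np.polynomial.legendre.legvander(t, n - 1)   # m x n design matrix
coef, *_ = np.linalg.lstsq(V, f(t), rcond=None)

# Uniform error of the approximant on a fine grid.
tt = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(np.polynomial.legendre.legval(tt, coef) - f(tt)))
```

With m comfortably larger than n, the random design matrix is well conditioned with high probability and the fit tracks the best n-term approximation, which for a holomorphic target decays rapidly in n.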
no code implementations • 19 Jul 2022 • Aaron Berk, Simone Brugiapaglia, Babhru Joshi, Yaniv Plan, Matthew Scott, Özgür Yılmaz
In Bora et al. (2017), a mathematical framework was developed for compressed sensing guarantees in the setting where the measurement matrix is Gaussian and the signal structure is the range of a generative neural network (GNN).
no code implementations • 2 Jun 2022 • Weiqi Wang, Simone Brugiapaglia
We also present numerical experiments that illustrate the accuracy and stability of the method for the approximation of sparse and compressible solutions.
no code implementations • 25 Mar 2022 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
On the one hand, there is a well-developed theory of best $s$-term polynomial approximation, which asserts exponential or algebraic rates of convergence for holomorphic functions.
no code implementations • 11 Dec 2020 • Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
Such problems are challenging: 1) pointwise samples are expensive to acquire, 2) the function domain is high dimensional, and 3) the range lies in a Hilbert space.
1 code implementation • 9 May 2020 • Simone Brugiapaglia, Matthew Liu, Paul Tupper
Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.
no code implementations • 24 May 2019 • Ben Adcock, Simone Brugiapaglia, Matthew King-Roskamp
A signature result in compressed sensing is that Gaussian random sampling achieves stable and robust recovery of sparse vectors under optimal conditions on the number of measurements.
Image Reconstruction Information Theory
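Stable recovery of a sparse vector from Gaussian random measurements, the signature result referenced above, can be demonstrated with a simple greedy decoder. The dimensions are hypothetical, and Orthogonal Matching Pursuit is used here purely as a convenient stand-in for the recovery algorithms analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, s = 128, 40, 4               # dimension, measurements, sparsity

# Gaussian random measurement matrix with normalized columns (in expectation).
A = rng.standard_normal((m, n)) / np.sqrt(m)

# s-sparse ground-truth signal.
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)
y = A @ x

# Orthogonal Matching Pursuit: greedily grow the support, then
# least-squares project the measurements onto the selected columns.
S, r = [], y.copy()
for _ in range(s):
    j = int(np.argmax(np.abs(A.T @ r)))  # column most correlated with residual
    if j not in S:
        S.append(j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
```

With m well above the optimal scaling s log(n/s), the Gaussian matrix satisfies the usual recovery conditions with high probability and the decoder recovers x essentially exactly.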
1 code implementation • 17 Jul 2018 • Simone Brugiapaglia
The approach is based on a spectral Sturm-Liouville approximation of the solution and on the collocation of the PDE in strong form at random points, by taking advantage of the compressive sensing principle.
Numerical Analysis
1 code implementation • 11 Jun 2018 • Ben Adcock, Claire Boyer, Simone Brugiapaglia
We present improved sampling complexity bounds for stable and robust sparse recovery in compressed sensing.
Information Theory