no code implementations • 28 Jan 2021 • Mazen Ali, Anthony Nouy
To answer the latter: as a candidate model class we consider approximation classes of TNs and show that these are (quasi-)Banach spaces, that many types of classical smoothness spaces are continuously embedded into these approximation classes, and that TN approximation classes are not themselves embedded in any classical smoothness space.
1 code implementation • 28 Aug 2020 • Mazen Ali, Stefan A. Funken, Anja Schmidt
The $L^2$-orthogonal projection $\Pi_h:L^2(\Omega)\rightarrow\mathbb{V}_h$ onto a finite element (FE) space $\mathbb{V}_h$ is called $H^1$-stable iff $\|\nabla\Pi_h u\|_{L^2(\Omega)}\leq C\|u\|_{H^1(\Omega)}$ for all $u\in H^1(\Omega)$, with a constant $C>0$ independent of the mesh size $h>0$.
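The stability constant can be probed numerically. The sketch below (an illustration only, not the paper's method; the paper's criteria concern adaptively refined meshes, whereas this uses the easy uniform-mesh case) assembles the $L^2$-projection onto piecewise-linear hat functions on a uniform mesh of $[0,1]$ by midpoint quadrature and reports the ratio $\|\nabla\Pi_h u\|_{L^2}/\|u\|_{H^1}$, which should stay bounded as $h\to 0$:

```python
import numpy as np

def hat(i, x, nodes, h):
    # P1 hat function centered at nodes[i]
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[i]) / h)

def dhat(i, x, nodes, h):
    # piecewise-constant derivative of the hat function
    g = np.zeros_like(x)
    g[(x > nodes[i] - h) & (x <= nodes[i])] = 1.0 / h
    g[(x > nodes[i]) & (x < nodes[i] + h)] = -1.0 / h
    return g

def stability_ratio(u, du, n, nq=20000):
    # ratio ||grad Pi_h u|| / ||u||_{H^1} on a uniform mesh with n elements
    h = 1.0 / n
    nodes = np.linspace(0.0, 1.0, n + 1)
    xq = (np.arange(nq) + 0.5) / nq          # midpoint quadrature points
    w = 1.0 / nq                             # quadrature weight
    Phi = np.array([hat(i, xq, nodes, h) for i in range(n + 1)])
    dPhi = np.array([dhat(i, xq, nodes, h) for i in range(n + 1)])
    M = (Phi * w) @ Phi.T                    # mass matrix
    b = (Phi * w) @ u(xq)                    # load vector
    c = np.linalg.solve(M, b)                # coefficients of Pi_h u
    grad_proj = np.sqrt(c @ ((dPhi * w) @ dPhi.T) @ c)
    H1_norm = np.sqrt(w * np.sum(u(xq) ** 2 + du(xq) ** 2))
    return grad_proj / H1_norm

u = lambda x: np.sin(3 * np.pi * x)
du = lambda x: 3 * np.pi * np.cos(3 * np.pi * x)
for n in (8, 32, 128):
    print(n, stability_ratio(u, du, n))
```

On a uniform mesh the ratio stays close to (and below) 1 for all $n$, consistent with $H^1$-stability; the interesting cases in the paper are meshes where $h$ varies strongly between neighboring elements.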
Numerical Analysis • 65M50
no code implementations • 30 Jul 2020 • Mazen Ali, Anthony Nouy
We consider approximation rates of sparsely connected deep rectified linear unit (ReLU) and rectified power unit (RePU) neural networks for functions in Besov spaces $B^\alpha_{q}(L^p)$ in arbitrary dimension $d$, on general domains.
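A key ingredient behind such rate results is that RePU units reproduce polynomials exactly, whereas ReLU networks can only approximate them. A minimal sketch of two classical identities (not code from the paper): $x^2 = \max(0,x)^2 + \max(0,-x)^2$ needs two $\mathrm{RePU}_2$ units, and a product follows from the polarization identity $xy = ((x+y)^2 - (x-y)^2)/4$:

```python
import numpy as np

def repu(x, s=2):
    # rectified power unit: max(0, x)**s; s=1 recovers ReLU
    return np.maximum(0.0, x) ** s

def square(x):
    # x**2 realized exactly by two RePU_2 units: max(0,x)^2 + max(0,-x)^2
    return repu(x) + repu(-x)

def product(x, y):
    # xy = ((x+y)^2 - (x-y)^2) / 4, four RePU_2 units in one hidden layer
    return (square(x + y) - square(x - y)) / 4.0

x = np.linspace(-2.0, 2.0, 101)
assert np.allclose(square(x), x ** 2)
assert np.allclose(product(x, 3.0 - x), x * (3.0 - x))
```

Composing such product gadgets yields exact polynomial representations, which is why RePU networks achieve approximation rates for smooth (e.g. Besov) functions without the depth overhead ReLU networks pay for emulating multiplication.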
no code implementations • 30 Jun 2020 • Mazen Ali, Anthony Nouy
The results of this work are both an analysis of the approximation spaces of TNs and a study of the expressivity of a particular type of neural network (NN), namely feed-forward sum-product networks with sparse architecture.
no code implementations • 30 Jun 2020 • Mazen Ali, Anthony Nouy
The considered approximation tool combines a tensorization of functions in $L^p([0, 1))$, which allows one to identify a univariate function with a multivariate function (or tensor), and the use of tree tensor networks (the tensor train format) for exploiting low-rank structures of multivariate functions.
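The tensorization idea can be sketched concretely (an illustration under simplifying assumptions, not the paper's construction): sample $f$ on a dyadic grid of $2^L$ points, identify the sample index with its $L$ binary digits to obtain a $2\times\cdots\times 2$ tensor, and run a TT-SVD sweep of successive truncated SVDs to read off the tensor-train ranks. Structured functions then show very small ranks:

```python
import numpy as np

def tt_ranks(values, tol=1e-10):
    # TT-SVD sweep: successive truncated SVDs over a 2 x 2 x ... x 2 tensor
    L = int(np.log2(values.size))
    ranks = []
    A = values.reshape(1, -1)
    r = 1
    for _ in range(L - 1):
        A = A.reshape(r * 2, -1)                 # split off the next binary digit
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        r = int(np.sum(s > tol * s[0]))          # numerical rank at this cut
        ranks.append(r)
        A = s[:r, None] * Vt[:r]                 # carry the remainder rightward
    return ranks

L = 10
x = (np.arange(2 ** L) + 0.5) / 2 ** L           # midpoints of a dyadic grid on [0, 1)
print(tt_ranks(x))          # identity function: every TT rank is 2
print(tt_ranks(np.exp(x)))  # exponential factorizes over digits: every TT rank is 1
```

The ranks are explained by the binary expansion $x = \text{head} + \text{tail}$ across each cut: the identity is affine in head and tail (rank 2 at every cut), while $\exp(\text{head}+\text{tail})=\exp(\text{head})\exp(\text{tail})$ is a rank-1 product.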