no code implementations • 21 Dec 2023 • Anthony Nouy, Bertrand Michel
We first provide a generalized version of volume-rescaled sampling yielding quasi-optimality results in expectation with a number of samples $n = O(m\log(m))$, meaning that the expected $L^2$ error is bounded by a constant times the best approximation error in $L^2$.
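A minimal numerical sketch of the underlying weighted least-squares mechanism, assuming a Legendre polynomial space on $[-1, 1]$; note that it draws i.i.d. samples from the optimal density (proportional to the inverse Christoffel function) rather than from the paper's volume-rescaled (determinantal) distribution:

```python
import numpy as np

# Illustrative weighted least-squares sketch. The paper's volume-rescaled
# sampling is a determinantal scheme; here we use the simpler related
# i.i.d. sampling from the density k_m(x)/m times the reference measure.

rng = np.random.default_rng(0)
m = 8                                  # dimension of the approximation space
n = 2 * int(np.ceil(m * np.log(m)))    # n = O(m log(m)) samples

# Orthonormal basis of V_m in L^2([-1, 1], dx/2): normalized Legendre polys.
def basis(x):
    P = np.polynomial.legendre.legvander(x, m - 1)
    return P * np.sqrt(2 * np.arange(m) + 1)

# Inverse Christoffel function k_m(x) = sum_j phi_j(x)^2.
def k(x):
    return np.sum(basis(x) ** 2, axis=1)

# Rejection sampling from the optimal density; k_m is bounded by m**2
# on [-1, 1] for Legendre polynomials, so the acceptance ratio is <= 1.
xs = []
while len(xs) < n:
    t = rng.uniform(-1, 1)
    if rng.uniform() < k(np.array([t]))[0] / m ** 2:
        xs.append(t)
x = np.array(xs)

f = lambda t: np.exp(t) * np.sin(5 * t)     # target function
w = m / k(x)                                # weights w(x_i) = m / k_m(x_i)
A = np.sqrt(w)[:, None] * basis(x)
b = np.sqrt(w) * f(x)
coef, *_ = np.linalg.lstsq(A, b, rcond=None)   # weighted LS projection
```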
no code implementations • 28 Jan 2021 • Mazen Ali, Anthony Nouy
To answer the latter, we consider approximation classes of TNs as a candidate model class and show that these are (quasi-)Banach spaces, that many classical smoothness spaces are continuously embedded into these approximation classes, and that TN approximation classes are themselves not embedded in any classical smoothness space.
no code implementations • 30 Jul 2020 • Marie Billaud-Friess, Arthur Macherey, Anthony Nouy, Clémentine Prieur
This paper considers the problem of maximizing an expectation function over a finite set, also known as the finite-armed bandit problem.
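For context, a minimal sketch of a standard baseline for this problem class (UCB1 with hypothetical Gaussian rewards); this is not the algorithm proposed in the paper:

```python
import numpy as np

# UCB1 sketch: maximize an expectation over a finite set of arms using
# noisy evaluations. Arm means and noise level are illustrative.

rng = np.random.default_rng(1)
means = np.array([0.2, 0.5, 0.7, 0.4])   # hypothetical arm means
K, T = len(means), 2000

counts = np.zeros(K)
sums = np.zeros(K)

for t in range(T):
    if t < K:                            # pull each arm once to initialize
        a = t
    else:                                # optimism in the face of uncertainty
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    reward = rng.normal(means[a], 0.1)   # noisy evaluation of the expectation
    counts[a] += 1
    sums[a] += reward

print("estimated best arm:", int(np.argmax(sums / counts)))
```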
no code implementations • 30 Jul 2020 • Mazen Ali, Anthony Nouy
We consider approximation rates of sparsely connected deep rectified linear unit (ReLU) and rectified power unit (RePU) neural networks for functions in Besov spaces $B^\alpha_{q}(L^p)$ in arbitrary dimension $d$, on general domains.
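For reference, the two activations under comparison; the integer power $p \ge 2$ of the RePU is a free parameter, taken here as 2:

```python
import numpy as np

# ReLU(x) = max(0, x); rectified power unit RePU_p(x) = max(0, x)**p.

def relu(x):
    return np.maximum(0.0, x)

def repu(x, p=2):
    return np.maximum(0.0, x) ** p
```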
no code implementations • 2 Jul 2020 • Bertrand Michel, Anthony Nouy
We propose a complexity-based model selection method for tree tensor networks in an empirical risk minimization framework and analyze its performance over a wide range of smoothness classes.
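A schematic sketch of complexity-penalized model selection under generic assumptions; the candidate models, the complexity measure, and the penalty constant `c` are placeholders, not the quantities derived in the paper:

```python
import numpy as np

# Penalized empirical risk minimization over a family of candidate models:
# pick the model minimizing empirical risk plus a complexity penalty.

def select_model(models, X, y, c=1.0):
    """models: list of (fit, complexity) pairs, where fit(X, y) returns a
    predictor and complexity is e.g. the number of free parameters."""
    n = len(y)
    best, best_score = None, np.inf
    for fit, complexity in models:
        predictor = fit(X, y)
        risk = np.mean((predictor(X) - y) ** 2)   # empirical L2 risk
        score = risk + c * complexity / n         # penalized criterion
        if score < best_score:
            best, best_score = predictor, score
    return best
```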
no code implementations • 30 Jun 2020 • Mazen Ali, Anthony Nouy
The considered approximation tool combines a tensorization of functions in $L^p([0, 1))$, which makes it possible to identify a univariate function with a multivariate function (or tensor), and the use of tree tensor networks (the tensor train format) to exploit low-rank structures of multivariate functions.
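A minimal sketch of the tensorization idea for base 2: a function sampled on a dyadic grid of $2^d$ points is reshaped into a tensor of order $d$ with one mode per binary digit of the argument, then compressed by sequential truncated SVDs (the standard TT-SVD), revealing the low ranks that the tensor train format exploits:

```python
import numpy as np

# Tensorize f on [0, 1): sample on a dyadic grid, reshape into a
# 2 x 2 x ... x 2 tensor (digit i_k indexes mode k), compress by TT-SVD.

d = 10
x = np.arange(2 ** d) / 2 ** d
f = np.sin(2 * np.pi * x)                 # target function, sampled

tensor = f.reshape([2] * d)
cores, ranks = [], [1]
M = tensor.reshape(2, -1)
for k in range(d - 1):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))     # truncation tolerance
    cores.append(U[:, :r].reshape(ranks[-1], 2, r))
    ranks.append(r)
    M = (s[:r, None] * Vt[:r]).reshape(r * 2, -1)
cores.append(M.reshape(ranks[-1], 2, 1))
print("TT ranks:", ranks[1:])             # low ranks reveal structure
```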
no code implementations • 30 Jun 2020 • Mazen Ali, Anthony Nouy
The results of this work are both an analysis of the approximation spaces of TNs and a study of the expressivity of a particular type of neural networks (NN) -- namely feed-forward sum-product networks with sparse architecture.
no code implementations • 17 Dec 2019 • Erwan Grelier, Anthony Nouy, Régis Lebrun
These algorithms exploit the multilinear parametrization of the formats to recast the nonlinear minimization problem into a sequence of empirical risk minimization problems with linear models.
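A minimal sketch of this mechanism for the simplest (rank-one) case, with hypothetical polynomial features: fixing all factors but one turns the empirical risk into a linear least-squares problem in the remaining factor, and the same idea extends to the multilinear parameters of tree-based formats:

```python
import numpy as np

# Alternating minimization for g(x) = prod_k <phi(x_k), v_k>: each update
# of a single factor v_k is a *linear* least-squares problem.

rng = np.random.default_rng(2)
d, p, n = 3, 4, 200
X = rng.uniform(-1, 1, (n, d))
y = np.prod(np.exp(X), axis=1)            # target: a rank-one function

phi = lambda t: np.polynomial.polynomial.polyvander(t, p - 1)  # features
V = [rng.standard_normal(p) for _ in range(d)]

for sweep in range(20):
    for k in range(d):
        # coefficient of v_k: product of the other factors, sample-wise
        others = np.ones(n)
        for j in range(d):
            if j != k:
                others *= phi(X[:, j]) @ V[j]
        A = others[:, None] * phi(X[:, k])    # linear model in v_k
        V[k], *_ = np.linalg.lstsq(A, y, rcond=None)

pred = np.prod([phi(X[:, k]) @ V[k] for k in range(d)], axis=0)
print("relative error:", np.linalg.norm(pred - y) / np.linalg.norm(y))
```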
no code implementations • 11 Nov 2018 • Erwan Grelier, Anthony Nouy, Mathilde Chevreuil
For a given tree, the selection of the tuple of tree-based ranks that minimizes the risk is a combinatorial problem.
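A common greedy heuristic for this kind of combinatorial search (one plausible workaround, not necessarily the paper's strategy) increases a single rank at a time; `fit_and_risk` below is a hypothetical training-and-validation routine supplied by the caller:

```python
# Greedy rank selection: starting from all ranks equal to one, repeatedly
# increment the single rank whose increase most reduces an estimated risk,
# and stop when no increment helps.

def greedy_rank_selection(edges, fit_and_risk, max_rank=10):
    """edges: edges of the dimension tree; fit_and_risk(ranks) trains a
    model with the given tuple of ranks and returns a validation risk."""
    ranks = {e: 1 for e in edges}
    risk = fit_and_risk(ranks)
    while True:
        best_edge, best_risk = None, risk
        for e in edges:
            if ranks[e] < max_rank:
                trial = dict(ranks)
                trial[e] = ranks[e] + 1
                r = fit_and_risk(trial)
                if r < best_risk:
                    best_edge, best_risk = e, r
        if best_edge is None:
            return ranks, risk
        ranks[best_edge] += 1
        risk = best_risk
```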
no code implementations • 30 Apr 2013 • Mathilde Chevreuil, Régis Lebrun, Anthony Nouy, Prashant Rai
In this paper, we propose a low-rank approximation method based on discrete least-squares for the approximation of a multivariate function from random, noise-free observations.
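A greedy low-rank least-squares sketch in this spirit, under illustrative assumptions (polynomial features, a rank-two trigonometric target): the approximation is built as a sum of rank-one corrections, each fitted to the current residual by alternating linear least squares. This is one plausible instance of the general idea, not the paper's exact algorithm:

```python
import numpy as np

# Greedy sum of rank-one terms fitted by discrete least squares: each new
# term is fitted to the residual, one variable at a time (inner ALS sweeps).

rng = np.random.default_rng(3)
d, p, n, n_terms = 2, 5, 300, 3
X = rng.uniform(-1, 1, (n, d))
y = np.cos(X[:, 0] + 2 * X[:, 1])          # noise-free random observations

phi = lambda t: np.polynomial.polynomial.polyvander(t, p - 1)
residual, pred = y.copy(), np.zeros(n)
for term in range(n_terms):
    V = [rng.standard_normal(p) for _ in range(d)]
    for sweep in range(10):                 # inner alternating LS sweeps
        for k in range(d):
            others = np.ones(n)
            for j in range(d):
                if j != k:
                    others *= phi(X[:, j]) @ V[j]
            A = others[:, None] * phi(X[:, k])
            V[k], *_ = np.linalg.lstsq(A, residual, rcond=None)
    term_vals = np.prod([phi(X[:, k]) @ V[k] for k in range(d)], axis=0)
    pred += term_vals
    residual -= term_vals

print("relative residual:", np.linalg.norm(residual) / np.linalg.norm(y))
```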