no code implementations • 18 Aug 2023 • Cheik Traoré, Vassilis Apidopoulos, Saverio Salzo, Silvia Villa
Stochastic proximal point algorithms have been studied as an alternative to stochastic gradient algorithms, since they are more stable with respect to the choice of the stepsize; however, a proper variance-reduced version is missing.
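The stepsize robustness mentioned above is easiest to see for least squares, where the stochastic proximal point update has a closed form. The sketch below is a minimal illustration under my own assumptions (it is not the paper's variance-reduced scheme, and the function names are mine): each step exactly minimizes a single-example quadratic loss plus a proximity term.

```python
import numpy as np

def stochastic_proximal_point(A, b, gamma=1.0, n_iter=3000, seed=0):
    """Stochastic proximal point for least squares with f_i(x) = 0.5*(a_i @ x - b_i)**2.

    Each step solves argmin_x f_i(x) + ||x - x_k||^2 / (2*gamma); for a single
    linear measurement this has the closed form below, which stays stable even
    for large gamma (unlike a plain SGD step with a large stepsize).
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        a, bi = A[i], b[i]
        # closed-form prox step: the effective stepsize is gamma / (1 + gamma*||a||^2) < 1/||a||^2
        x = x - gamma * (a @ x - bi) / (1.0 + gamma * (a @ a)) * a
    return x
```

On a consistent system the iterates converge to a solution for any `gamma > 0`, which is the stability property the abstract alludes to.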
no code implementations • 17 Apr 2023 • Sofiane Tanji, Andrea Della Vecchia, François Glineur, Silvia Villa
Kernel methods provide a powerful framework for nonparametric learning.
no code implementations • 24 Dec 2022 • Vassilis Apidopoulos, Tomaso Poggio, Lorenzo Rosasco, Silvia Villa
In this paper, we focus on iterative regularization in the context of classification.
no code implementations • 10 Jun 2022 • Marco Rando, Cesare Molinari, Silvia Villa, Lorenzo Rosasco
For smooth convex functions, we prove almost sure convergence of the iterates and a convergence rate on the function values of the form $O\big((d/l)\,k^{-c}\big)$ for every $c<1/2$, which is arbitrarily close to that of Stochastic Gradient Descent (SGD) in terms of the number of iterations.
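A common structured zeroth-order template, sketched below, estimates the gradient by finite differences along $l$ random orthonormal directions and then takes a gradient-style step. This is only an illustration under my own assumptions (direction construction, scaling, and names are mine), not the paper's exact method:

```python
import numpy as np

def szo_gradient_step(f, x, l=2, h=1e-5, step=0.05, rng=None):
    """One step of a structured zeroth-order method: finite differences
    along l random orthonormal directions, then a descent step."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    # random orthonormal directions via reduced QR of a Gaussian matrix
    Q, _ = np.linalg.qr(rng.standard_normal((d, l)))
    g = np.zeros(d)
    fx = f(x)
    for j in range(l):
        p = Q[:, j]
        g += (f(x + h * p) - fx) / h * p
    g *= d / l  # rescale: E[sum_j p_j p_j^T] = (l/d) * I, so d/l makes the estimate unbiased up to O(h)
    return x - step * g
```

Each step uses only $l + 1$ function evaluations, which is the point of restricting to $l \ll d$ directions.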
no code implementations • 1 Feb 2022 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa
Our approach is based on a primal-dual algorithm, whose convergence and stability properties we analyze even in the case where the original problem is infeasible.
no code implementations • 16 Jun 2021 • Marco Rando, Luigi Carratino, Silvia Villa, Lorenzo Rosasco
In this paper, we introduce Ada-BKB (Adaptive Budgeted Kernelized Bandit), a no-regret Gaussian process optimization algorithm for functions on continuous domains that provably runs in $O(T^2 d_\text{eff}^2)$, where $d_\text{eff}$ is the effective dimension of the explored space, typically much smaller than $T$.
1 code implementation • 17 Jun 2020 • Cesare Molinari, Mathurin Massias, Lorenzo Rosasco, Silvia Villa
We study iterative regularization for linear models, when the bias is convex but not necessarily strongly convex.
no code implementations • 18 Jul 2017 • Simon Matet, Lorenzo Rosasco, Silvia Villa, Bang Long Vu
We consider the problem of designing efficient regularization algorithms when regularization is encoded by a (strongly) convex functional.
no code implementations • 28 Mar 2017 • Guillaume Garrigos, Lorenzo Rosasco, Silvia Villa
We provide a comprehensive study of the convergence of the forward-backward algorithm under suitable geometric conditions, such as conditioning or {\L}ojasiewicz properties.
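As a concrete instance of the forward-backward algorithm, the sketch below (an illustrative example under my own naming, not the paper's analysis) solves a small lasso problem by alternating a gradient step on the smooth term with the soft-thresholding prox of the $\ell_1$ term:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=50):
    """Forward-backward splitting: a forward (gradient) step on the smooth
    part f, then a backward (proximal) step on the nonsmooth part g."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each coordinate toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```

Usage for the lasso $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$: pass `grad_f = lambda x: A.T @ (A @ x - b)` and `prox_g = lambda z, s: soft_threshold(z, s * lam)` with `step` at most `1/||A||^2`. Geometric conditions such as the Łojasiewicz property govern how fast these iterates converge.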
no code implementations • CVPR 2015 • Carlo Ciliberto, Lorenzo Rosasco, Silvia Villa
Multi-task learning is a natural approach for computer vision applications that require the simultaneous solution of several distinct but related problems, e.g., object detection, classification, tracking of multiple agents, or denoising.
no code implementations • NeurIPS 2015 • Lorenzo Rosasco, Silvia Villa
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method.
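A minimal sketch of such an incremental gradient scheme for least squares follows (illustrative only, with my own names; in the iterative-regularization view, the number of epochs plays the role of the regularization parameter, i.e. early stopping regularizes):

```python
import numpy as np

def incremental_gradient_ls(A, b, step=0.05, n_epochs=300):
    """Incremental gradient for least squares: cycle through the data,
    updating with one example's gradient at a time. Stopping early
    (small n_epochs) acts as implicit regularization."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):
            # gradient of the single-example loss 0.5*(a_i @ x - b_i)**2
            x -= step * (A[i] @ x - b[i]) * A[i]
    return x
```

Each epoch costs one pass over the data, so the total cost is proportional to the amount of regularization chosen.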
no code implementations • 24 Mar 2013 • Silvia Villa, Lorenzo Rosasco, Tomaso Poggio
We consider the fundamental question of learnability of a hypothesis class in the supervised learning setting and in the general learning setting introduced by Vladimir Vapnik.
no code implementations • NeurIPS 2010 • Sofia Mosci, Silvia Villa, Alessandro Verri, Lorenzo Rosasco
We deal with the problem of variable selection when variables must be selected group-wise, with possibly overlapping groups defined a priori.
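For non-overlapping groups, the building block of group-wise selection is the block soft-thresholding proximal operator sketched below. This is a minimal example under my own assumptions: it covers only the disjoint case, whereas the overlapping case studied in the paper requires a more involved treatment (e.g. a latent-variable reformulation).

```python
import numpy as np

def prox_group_lasso(x, groups, t):
    """Proximal operator of t * sum_g ||x_g||_2 for disjoint groups:
    block-wise soft thresholding. Whole groups are either zeroed out
    or shrunk toward zero, which is what enforces group-wise selection."""
    out = x.copy()
    for g in groups:
        v = x[g]
        norm = np.linalg.norm(v)
        if norm <= t:
            out[g] = 0.0          # the whole group is discarded
        else:
            out[g] = (1.0 - t / norm) * v  # the group survives, shrunk
    return out
```

Plugging this prox into a forward-backward scheme yields a group lasso solver for disjoint groups.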