no code implementations • 1 Nov 2023 • Mathieu Even, Anastasia Koloskova, Laurent Massoulié
Decentralized and asynchronous communications are two popular techniques to speed up the communication complexity of distributed machine learning, by removing, respectively, the dependency on a central orchestrator and the need for synchronization.
no code implementations • 10 Jul 2023 • Kevin Scaman, Mathieu Even, Laurent Massoulié
In this paper, we provide a novel framework for the analysis of generalization error of first-order optimization algorithms for statistical learning when the gradient can only be accessed through partial observations given by an oracle.
no code implementations • 27 Sep 2022 • Luca Ganassali, Laurent Massoulié, Guilhem Semerjian
In this paper we address the problem of testing whether two observed trees $(t, t')$ are sampled either independently or from a joint distribution under which they are correlated.
1 code implementation • 10 Jun 2022 • Edwige Cyffers, Mathieu Even, Aurélien Bellet, Laurent Massoulié
In this work, we introduce pairwise network differential privacy, a relaxation of LDP that captures the fact that the privacy leakage from a node $u$ to a node $v$ may depend on their relative position in the graph.
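The paper's precise mechanism is its own, but the core idea that privacy leakage from $u$ to $v$ can depend on their relative position in the graph admits a toy illustration: if a value is relayed along a path and fresh Gaussian noise is injected at every hop, a more distant observer sees the value through more accumulated noise. The function below is a hypothetical sketch, not the paper's protocol.

```python
import numpy as np

def relay_with_noise(value, n_hops, sigma, rng):
    """Relay `value` along a path, adding fresh Gaussian noise at each hop."""
    for _ in range(n_hops):
        value = value + rng.normal(0.0, sigma)
    return value

rng = np.random.default_rng(0)
secret, sigma, trials = 1.0, 0.5, 20000

# Empirical variance of what an observer at graph distance d sees about
# `secret`: noise accumulates hop by hop, so variance grows linearly with d,
# i.e. the effective privacy leakage decays with distance.
var_at = {d: np.var([relay_with_noise(secret, d, sigma, rng)
                     for _ in range(trials)])
          for d in (1, 5)}
```

Here the observer at distance 5 faces roughly five times the noise variance of a direct neighbour, which is the distance-dependent leakage the relaxation is designed to capture.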
no code implementations • NeurIPS 2021 • Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Hadrien Hendrikx, Pierre Gaillard, Laurent Massoulié, Adrien Taylor
We introduce the "continuized" Nesterov acceleration, a close variant of Nesterov acceleration whose variables are indexed by a continuous time parameter.
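The continuized framework replaces the iteration counter of the classical recursion with Poisson arrival times; the underlying two-variable Nesterov recursion it continuizes can be sketched on a strongly convex quadratic as follows (constants are illustrative textbook choices, not taken from the paper).

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x, a strongly convex quadratic test objective
A = np.diag([1.0, 10.0])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b

L, mu = 10.0, 1.0                    # smoothness / strong-convexity constants
kappa = L / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # momentum coefficient

x = np.zeros(2)
x_prev = np.zeros(2)
for _ in range(300):
    y = x + beta * (x - x_prev)      # extrapolation (momentum) step
    x_prev, x = x, y - grad(y) / L   # gradient step at the extrapolated point

x_star = np.linalg.solve(A, b)       # exact minimizer, for comparison
```

In the continuized variant these same coupled updates are triggered at the jump times of a Poisson process, with the variables mixing continuously between jumps — which is what makes the accelerated gossip application below possible.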
1 code implementation • 15 Jul 2021 • Luca Ganassali, Laurent Massoulié, Marc Lelarge
We then conjecture that graph alignment is not feasible in polynomial time when the associated tree detection problem is impossible.
1 code implementation • 10 Jun 2021 • Mathieu Even, Raphaël Berthier, Francis Bach, Nicolas Flammarion, Pierre Gaillard, Hadrien Hendrikx, Laurent Massoulié, Adrien Taylor
We introduce the continuized Nesterov acceleration, a close variant of Nesterov acceleration whose variables are indexed by a continuous time parameter.
no code implementations • 4 Feb 2021 • Luca Ganassali, Laurent Massoulié, Marc Lelarge
Random graph alignment refers to recovering the underlying vertex correspondence between two random graphs with correlated edges.
no code implementations • 4 Feb 2021 • Mathieu Even, Laurent Massoulié
Dimension is an inherent bottleneck to some modern learning tasks, where optimization methods suffer from the size of the data.
no code implementations • 1 Jul 2020 • Georgina Hall, Laurent Massoulié
Our focus here is on partial recovery, i.e., we look for a one-to-one mapping that is correct on a fraction of the nodes of the graph rather than on all of them, and we assume that the two input graphs to the problem are correlated Erdős-Rényi graphs with parameters $(n, q, s)$.
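A standard way to realize the correlated Erdős-Rényi model with parameters $(n, q, s)$ is to draw one parent graph $G(n, q)$ and let each of the two observed graphs keep every parent edge independently with probability $s$; the sketch below (an assumption about the exact construction, consistent with the stated parameters) makes the marginal and shared edge densities explicit.

```python
import numpy as np

def correlated_er_pair(n, q, s, rng):
    """Sample a pair of correlated Erdős-Rényi graphs: a parent graph G(n, q)
    is drawn once, then each observed graph independently keeps each parent
    edge with probability s. Returns the two upper-triangular edge masks."""
    n_pairs = n * (n - 1) // 2
    parent = rng.random(n_pairs) < q
    g1 = parent & (rng.random(n_pairs) < s)
    g2 = parent & (rng.random(n_pairs) < s)
    return g1, g2

rng = np.random.default_rng(0)
g1, g2 = correlated_er_pair(500, 0.2, 0.8, rng)
# Each graph has marginal edge density q*s, while the two graphs share
# edges with density q*s**2 -- strictly more than the (q*s)**2 expected
# for independent graphs, which is what alignment algorithms exploit.
```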
no code implementations • 28 Jan 2019 • Hadrien Hendrikx, Francis Bach, Laurent Massoulié
In this work, we study the problem of minimizing the sum of strongly convex functions split over a network of $n$ nodes.
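The paper develops accelerated methods for this setting; as a baseline, the problem itself — minimizing $\sum_i f_i(x)$ where node $i$ only knows $f_i$ — can be sketched with plain decentralized gradient descent (gossip averaging plus a local gradient step) on a ring of scalar quadratics. This is a minimal illustration of the problem setup, not the paper's algorithm.

```python
import numpy as np

n = 10
b = np.arange(n, dtype=float)        # node i holds f_i(x) = 0.5 * (x - b_i)^2
target = b.mean()                     # minimizer of sum_i f_i is the mean of b

# Doubly stochastic gossip matrix for a ring: mix with both neighbours.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                       # one local estimate per node
for t in range(1, 3001):
    step = 1.0 / t                    # decaying step size, needed for consensus
    x = W @ x - step * (x - b)        # gossip averaging + local gradient step
```

All nodes drift to a common value near the global minimizer; accelerated schemes such as the one studied here improve the (slow) rate of this baseline.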
Optimization and Control • Distributed, Parallel, and Cluster Computing
no code implementations • 5 Oct 2018 • Hadrien Hendrikx, Francis Bach, Laurent Massoulié
Applying ESDACD to quadratic local functions leads to an accelerated randomized gossip algorithm of rate $O(\sqrt{\theta_{\rm gossip}/n})$, where $\theta_{\rm gossip}$ is the rate of the standard randomized gossip.
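The standard randomized gossip that serves as the baseline here is simple to state: at each tick, a uniformly random edge activates and its two endpoints replace their values by the pairwise average. A minimal simulation on a ring:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
x = rng.normal(size=n)               # initial values held by the n nodes
mean = x.mean()                       # pairwise averaging preserves the mean

edges = [(i, (i + 1) % n) for i in range(n)]   # ring topology

for _ in range(20000):
    i, j = edges[rng.integers(len(edges))]     # activate a random edge
    x[i] = x[j] = (x[i] + x[j]) / 2            # the two endpoints average
```

All nodes converge to the initial mean at the rate $\theta_{\rm gossip}$ governed by the spectral gap of the topology; the accelerated variant in the paper improves this to its square root (per the $O(\sqrt{\theta_{\rm gossip}/n})$ statement above).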
no code implementations • 20 Jun 2018 • Clara Stegehuis, Laurent Massoulié
Whenever this function has multiple fixed points, the belief propagation algorithm may not perform optimally.
no code implementations • NeurIPS 2018 • Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié
Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
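The local smoothing underlying DRS can be illustrated with the classical Gaussian-smoothing identity: $f_\gamma(x) = \mathbb{E}[f(x + \gamma Z)]$ is differentiable even when $f$ is not, and its gradient admits a Monte-Carlo estimator. The sketch below shows that identity on the $\ell_1$ norm; it illustrates the smoothing device only, not the full distributed algorithm.

```python
import numpy as np

def smoothed_grad(f, x, gamma, n_samples, rng):
    """Monte-Carlo gradient of the Gaussian smoothing f_gamma(x) = E[f(x + gamma*Z)],
    via the identity grad f_gamma(x) = E[(f(x + gamma*Z) - f(x)) * Z] / gamma
    (subtracting f(x) leaves the expectation unchanged but reduces variance)."""
    z = rng.normal(size=(n_samples, x.size))
    vals = np.array([f(x + gamma * zi) for zi in z]) - f(x)
    return (vals[:, None] * z).mean(axis=0) / gamma

f = lambda x: np.abs(x).sum()         # nonsmooth test objective: the l1 norm
rng = np.random.default_rng(0)
x0 = np.array([2.0, -3.0])
g = smoothed_grad(f, x0, gamma=0.1, n_samples=20000, rng=rng)
# Away from the kink at 0, the smoothed gradient approaches sign(x) = [1, -1].
```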
Optimization and Control
1 code implementation • ICML 2017 • Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, Laurent Massoulié
For centralized (i.e., master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision $\varepsilon > 0$ in time $O(\sqrt{\kappa_g}(1+\Delta\tau)\ln(1/\varepsilon))$, where $\kappa_g$ is the condition number of the (global) function to optimize, $\Delta$ is the diameter of the network, and $\tau$ is the communication time.
no code implementations • 8 Sep 2016 • Lennart Gulikers, Marc Lelarge, Laurent Massoulié
As a result, a clustering positively correlated with the true communities can be obtained from the second eigenvector of $B$ in the regime where $\mu_2^2 > \rho$. In previous work we showed that detection is impossible when $\mu_2^2 < \rho$, so a phase transition occurs in the sparse regime of the Degree-Corrected Stochastic Block Model.
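The mechanism — reading off communities from the sign pattern of a second eigenvector — can be sketched on a dense two-block model, where the plain adjacency matrix already works (the paper's matrix $B$ is a specific degree-corrected operator needed in the sparse regime; using the adjacency here is a simplifying assumption for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # nodes, split into two communities
labels = np.repeat([0, 1], n // 2)
p_in, p_out = 0.5, 0.1                   # dense regime: adjacency spectra suffice

# Symmetric adjacency matrix of a two-block stochastic block model.
P = np.where(labels[:, None] == labels[None, :], p_in, p_out)
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric, no self-loops

_, vecs = np.linalg.eigh(A)              # eigenvalues in ascending order
v2 = vecs[:, -2]                         # eigenvector of the 2nd largest eigenvalue
clusters = (v2 > 0).astype(int)          # its sign pattern splits the two blocks

# Agreement up to the unavoidable label swap.
agreement = max(np.mean(clusters == labels), np.mean(clusters != labels))
```

In this dense setting the top eigenvalue tracks the average degree and the second tracks the community signal $n(p_{\rm in}-p_{\rm out})/2$, well separated from the spectral bulk — which is exactly what fails in the sparse regime and motivates the matrix $B$.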
no code implementations • 2 Nov 2015 • Lennart Gulikers, Marc Lelarge, Laurent Massoulié
We consider the Degree-Corrected Stochastic Block Model (DC-SBM): a random graph on $n$ nodes, having i.i.d.
no code implementations • 29 Jun 2015 • Lennart Gulikers, Marc Lelarge, Laurent Massoulié
In particular, it does not need to know the number of communities.
no code implementations • 16 Feb 2015 • Rui Wu, Jiaming Xu, R. Srikant, Laurent Massoulié, Marc Lelarge, Bruce Hajek
We propose an efficient algorithm that accurately estimates the individual preferences for almost all users, if there are $r \max \{m, n\}\log m \log^2 n$ pairwise comparisons per type, which is near optimal in sample complexity when $r$ only grows logarithmically with $m$ or $n$.
no code implementations • 11 Feb 2015 • Marc Lelarge, Laurent Massoulié, Jiaming Xu
The labeled stochastic block model is a random graph model representing networks with community structure and interactions of multiple types.
no code implementations • 26 Jun 2014 • Jiaming Xu, Laurent Massoulié, Marc Lelarge
The classical setting of community detection consists of networks exhibiting a clustered structure.
no code implementations • 13 Jul 2012 • Siddhartha Banerjee, Nidhi Hegde, Laurent Massoulié
In the information-rich regime, where each user rates at least a constant fraction of items, a spectral clustering approach is shown to match a sample-complexity lower bound derived from a simple information-theoretic argument based on Fano's inequality.
no code implementations • 15 Sep 2011 • Dan-Cristian Tomozei, Laurent Massoulié
We prove that, without prior knowledge of the class compositions and based solely on a few randomly observed ratings (namely, $O(N\log N)$ such ratings for $N$ users), we can predict user preferences for unrated items with high probability by running a local vote among users with similar profile vectors.