1 code implementation • 11 Sep 2023 • Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri
Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy.
no code implementations • 23 Apr 2023 • Zebang Shen, Hui Qian, Tongzhou Mu, Chao Zhang
Nowadays, algorithms with fast convergence, small memory footprints, and low per-iteration complexity are particularly desirable in artificial intelligence applications.
1 code implementation • 5 Jun 2022 • Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari
Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
1 code implementation • 2 Jun 2022 • Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani
In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense.
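To make the construction concrete, here is an illustrative numpy sketch (not the paper's code): for the Fokker-Planck equation $\partial_t \rho = \nabla \cdot (\rho \nabla V) + \Delta \rho$, the solution is transported by the field $v^*(x) = -\nabla V(x) - \nabla \log \rho(x)$, and the potential function below is a Monte-Carlo estimate of the squared residual between a hypothesis field and $v^*$. To keep the score $\nabla \log \rho$ analytic, the density is taken to be the standard Gaussian, the stationary density for $V(x) = \|x\|^2/2$; in practice the score would have to be estimated.

```python
# Illustrative sketch (not the paper's code) of the self-consistency
# potential for the Fokker-Planck equation
#   d rho / dt = div(rho * grad V) + Laplacian(rho),
# whose solution is transported by v*(x) = -grad V(x) - grad log rho(x).
import numpy as np

rng = np.random.default_rng(0)

def grad_V(x):                     # V(x) = ||x||^2 / 2, so grad V(x) = x
    return x

def score(x):                      # grad log rho for rho = N(0, I), analytic here
    return -x

def self_consistency_potential(f, x):
    """Monte-Carlo estimate of E_rho ||f(x) - v*(x)||^2."""
    residual = f(x) - (-grad_V(x) - score(x))
    return np.mean(np.sum(residual ** 2, axis=1))

x = rng.standard_normal((4096, 2))            # samples from rho = N(0, I)
f_true = lambda z: np.zeros_like(z)           # v* = 0 at this stationary density
f_off = lambda z: 0.5 * z                     # a mismatched hypothesis field
print(self_consistency_potential(f_true, x))  # ~0: self-consistent
print(self_consistency_potential(f_off, x))   # ~0.5: the potential penalizes it
```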
no code implementations • ICLR 2022 • Zebang Shen, Juan Cervino, Hamed Hassani, Alejandro Ribeiro
Federated Learning (FL) has emerged as the tool of choice for training deep models over heterogeneous and decentralized datasets.
1 code implementation • 29 May 2021 • Jiahao Xie, Chao Zhang, Zebang Shen, Weijie Liu, Hui Qian
We establish theoretical guarantees of CDMA under different choices of hyperparameters and conduct experiments on AUC maximization, robust adversarial network training, and GAN training tasks.
no code implementations • 11 Mar 2021 • Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi
First, in the semi-heterogeneous setting, when the marginal distributions of the feature vectors on client machines are identical, we develop the federated functional gradient boosting (FFGB) method that provably converges to the global minimum.
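As a rough illustration of the functional-gradient-boosting idea in a federated loop (a simplified sketch, not the paper's exact FFGB procedure): each round, every client fits a weak learner to the negative functional gradient of its local loss (the residuals, under squared loss), and the server aggregates the clients' learners by averaging their predictions. The data, weak-learner class, and step size below are all illustrative.

```python
# Simplified sketch of functional gradient boosting in a federated loop
# (not the paper's exact FFGB procedure); data, weak-learner class, and
# step size eta are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
clients = []
for _ in range(4):                           # identical feature marginals across clients
    X = rng.uniform(-1, 1, (200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
    clients.append((X, y))

eta, ensemble = 0.5, []

def F(X):                                    # current global ensemble
    if not ensemble:
        return np.zeros(len(X))
    return eta * sum(u.predict(X) for u in ensemble)

class RoundUpdate:                           # server-side average of client weak learners
    def __init__(self, trees):
        self.trees = trees
    def predict(self, X):
        return np.mean([t.predict(X) for t in self.trees], axis=0)

for _ in range(30):                          # communication rounds
    local = []
    for X, y in clients:
        residual = y - F(X)                  # negative functional gradient (squared loss)
        local.append(DecisionTreeRegressor(max_depth=3).fit(X, residual))
    ensemble.append(RoundUpdate(local))      # aggregate by averaging predictions

X_test = np.linspace(-1, 1, 200).reshape(-1, 1)
print(np.mean((F(X_test) - np.sin(3 * X_test[:, 0])) ** 2))   # small test MSE
```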
no code implementations • 2 Dec 2020 • Weijie Liu, Chao Zhang, Jiahao Xie, Zebang Shen, Hui Qian, Nenggan Zheng
Graph matching finds the correspondence between the nodes of two graphs and is a fundamental task in graph-based machine learning.
no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
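For reference, a minimal numpy sketch (not the paper's implementation) of the Sinkhorn divergence that SiNG uses as its local metric, $S_\epsilon(\alpha,\beta) = OT_\epsilon(\alpha,\beta) - \frac{1}{2}OT_\epsilon(\alpha,\alpha) - \frac{1}{2}OT_\epsilon(\beta,\beta)$, between two empirical measures. The sketch reports the transport-cost part of the entropic objective (one common variant), and $\epsilon$ and the iteration count are illustrative choices.

```python
# Illustrative numpy sketch (not the paper's implementation) of the
# Sinkhorn divergence between two uniform empirical measures; it
# reports the transport-cost part of the entropic objective.
import numpy as np

def ot_eps(x, y, eps=0.5, iters=200):
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-distance cost
    K = np.exp(-C / eps)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                                # Sinkhorn fixed-point updates
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # entropic transport plan
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps=0.5):
    return ot_eps(x, y, eps) - 0.5 * ot_eps(x, x, eps) - 0.5 * ot_eps(y, y, eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 2))
y = rng.standard_normal((64, 2)) + 1.0
print(sinkhorn_divergence(x, y))   # > 0, and -> 0 as the two measures coincide
```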
no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani
In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.
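For concreteness, with $S_\epsilon$ denoting the Sinkhorn divergence and assuming given distributions $\nu_1, \dots, \nu_K$ and weights $w_k \geq 0$ summing to one (the weights are not specified in the snippet), the barycenter problem takes the form $\min_{\mu} \sum_{k=1}^{K} w_k S_\epsilon(\mu, \nu_k)$.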
no code implementations • 23 Jun 2020 • Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani
More precisely, assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal $\mathcal{O}(1/\epsilon^2)$ gradient oracle complexity.
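For orientation, here is a minimal sketch of the stochastic Frank-Wolfe template that Reliable-FW builds on: estimate a (stochastic) gradient, call a linear minimization oracle for a vertex of the polytope, and move toward it, which keeps the iterate feasible by convexity. The sketch uses exact oracles and a toy polytope and objective; handling a noisy feasibility oracle is precisely what Reliable-FW adds.

```python
# Illustrative sketch of one stochastic Frank-Wolfe loop over a polytope
# (exact oracles; Reliable-FW's contribution is handling a noisy
# feasibility oracle). Polytope and objective are toy choices.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c_star = np.array([0.3, 0.9])                   # unconstrained minimizer of f

def stochastic_grad(x):                         # noisy gradient of f(x) = ||x - c_star||^2 / 2
    return (x - c_star) + 0.01 * rng.standard_normal(x.shape)

def lmo(g):                                     # linear minimization over {x >= 0, x1 + x2 <= 1}
    return linprog(g, A_ub=[[1, 1]], b_ub=[1], bounds=[(0, None)] * 2).x

x = np.zeros(2)
for t in range(1, 201):
    v = lmo(stochastic_grad(x))                 # vertex minimizing <g, v>
    x = x + (2 / (t + 2)) * (v - x)             # convex step keeps x feasible
print(x)                                        # ~ [0.2, 0.8], the projection of c_star
```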
no code implementations • NeurIPS 2019 • Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen
Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\mathrm{OPT} - \epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle.
no code implementations • 31 Oct 2019 • Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil, Zebang Shen, Nenggan Zheng
In this paper, we focus on solving a class of constrained non-convex non-concave saddle point problems in a decentralized manner by a group of nodes in a network.
no code implementations • 21 Oct 2019 • Chao Zhang, Jiahao Xie, Zebang Shen, Peilin Zhao, Tengfei Zhou, Hui Qian
In this paper, we explore a general Aggregated Gradient Langevin Dynamics (AGLD) framework for Markov chain Monte Carlo (MCMC) sampling.
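As a simplified illustration of the aggregated-gradient idea inside Langevin dynamics (a sketch, not the paper's AGLD framework, which covers a family of such estimators): keep a table of per-datum gradients, refresh one entry per step, and use the SAGA-style unbiased estimate in place of the full gradient in the Langevin update. The target below, a Gaussian-mean posterior, and the step size are illustrative.

```python
# Illustrative sketch of Langevin dynamics with a SAGA-style aggregated
# gradient estimator (the AGLD framework covers a family of such
# estimators). Target: posterior of a Gaussian mean with N(0, 1) prior.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=100)
n, eta = len(data), 1e-3

def grad_i(theta, i):                        # gradient of the i-th negative log-likelihood
    return theta - data[i]

theta = 0.0
table = np.array([grad_i(theta, i) for i in range(n)])   # stored per-datum gradients
table_sum = table.sum()
samples = []
for _ in range(5000):
    i = rng.integers(n)
    g_new = grad_i(theta, i)
    grad_est = n * (g_new - table[i]) + table_sum + theta  # unbiased estimate + prior grad
    table_sum += g_new - table[i]                          # refresh one table entry
    table[i] = g_new
    theta += -eta * grad_est + np.sqrt(2 * eta) * rng.standard_normal()
    samples.append(theta)
print(np.mean(samples[1000:]))               # ~ sum(data) / (n + 1), the posterior mean
```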
no code implementations • 21 Oct 2019 • Jiahao Xie, Zebang Shen, Chao Zhang, Boyu Wang, Hui Qian
This paper focuses on projection-free methods for solving smooth Online Convex Optimization (OCO) problems.
no code implementations • 10 Oct 2019 • Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi
One of the beauties of the projected gradient descent method lies in its rather simple mechanism and yet stable behavior with inexact, stochastic gradients, which has led to its widespread use in many machine learning applications.
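That mechanism fits in a few lines; the following sketch runs projected stochastic gradient descent onto an $\ell_2$ ball (the objective, radius, step size, and noise level are illustrative).

```python
# A minimal sketch of projected stochastic gradient descent onto an
# l2 ball; objective, radius, step size, and noise level are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def project_ball(x, r=1.0):                       # Euclidean projection onto {||x|| <= r}
    norm = np.linalg.norm(x)
    return x if norm <= r else r * x / norm

c = np.array([2.0, 0.0])                          # minimize ||x - c||^2 / 2 over the unit ball
x = np.zeros(2)
for _ in range(500):
    g = (x - c) + 0.05 * rng.standard_normal(2)   # inexact, stochastic gradient
    x = project_ball(x - 0.1 * g)                 # gradient step, then projection
print(x)                                          # ~ [1, 0], the feasible point closest to c
```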
no code implementations • ICLR 2020 • Zebang Shen, Pan Zhou, Cong Fang, Alejandro Ribeiro
We target the problem of finding a local minimum in non-convex finite-sum minimization.
no code implementations • 19 Feb 2019 • Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen
It is known that this rate is optimal in terms of stochastic gradient evaluations.
no code implementations • ICML 2018 • Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian
Recently, the decentralized optimization problem has been attracting growing attention.
no code implementations • 13 Nov 2016 • Zebang Shen, Hui Qian, Chao Zhang, Tengfei Zhou
Algorithms with fast convergence, a small number of data accesses, and low per-iteration complexity are particularly favorable in the big data era, due to the demand for obtaining \emph{highly accurate solutions} to problems with \emph{a large number of samples} in \emph{ultra-high} dimensional space.
no code implementations • 12 Nov 2016 • Tengfei Zhou, Hui Qian, Zebang Shen, Congfu Xu
By restricting the iterates to a nonlinear manifold, the recently proposed Riemannian optimization methods prove to be both efficient and effective for low-rank tensor completion problems.
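The flavor of such methods is easy to convey on the simpler matrix analogue of tensor completion (an illustrative sketch, not the paper's algorithm): take a gradient step on the observed-entry residual, then retract to the rank-$r$ set via truncated SVD. Genuine Riemannian methods use tangent-space projections and cheaper retractions instead of a full SVD; the data, rank, and step size here are illustrative.

```python
# Illustrative sketch on the simpler matrix analogue of low-rank tensor
# completion: gradient step on the observed-entry residual, then a
# retraction to the rank-r set via truncated SVD. Real Riemannian
# methods use tangent-space projections and cheaper retractions.
import numpy as np

rng = np.random.default_rng(0)
r, n = 2, 30
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 ground truth
mask = rng.random((n, n)) < 0.5                                # observed entries

def retract_rank_r(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n, n))
for _ in range(300):
    G = mask * (X - M)                       # gradient of 0.5 * ||P_Omega(X - M)||_F^2
    X = retract_rank_r(X - G, r)             # unit-step update + rank-r retraction
print(np.linalg.norm(X - M) / np.linalg.norm(M))   # small relative recovery error
```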
no code implementations • 22 Jul 2013 • Zhihua Zhang, Shibo Zhao, Zebang Shen, Shuchang Zhou
In this paper, we propose and study a family of sparsity-inducing penalty functions.