no code implementations • 15 Feb 2024 • Alireza Fallah, Michael I. Jordan, Ali Makhdoumi, Azarakhsh Malekian
We study a three-layer data market comprising users (data owners), platforms, and a data buyer.
no code implementations • 13 Feb 2024 • Alireza Fallah, Michael I. Jordan, Ali Makhdoumi, Azarakhsh Malekian
We consider a privacy mechanism that provides a degree of protection by probabilistically masking each market segment, and we establish that the resultant set of all consumer-producer utilities forms a convex polygon, characterized explicitly as a linear mapping of a certain high-dimensional convex polytope into $\mathbb{R}^2$.
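In generic terms (the notation here is chosen for illustration and is not taken from the paper): if the feasible masking strategies form a convex polytope $P \subset \mathbb{R}^m$ and the consumer and producer utilities are both linear in the strategy $q$, then the achievable utility pairs are the linear image

$$ \mathcal{U} = \{ M q : q \in P \} \subset \mathbb{R}^2, \qquad M \in \mathbb{R}^{2 \times m}, $$

which is again a convex polytope, i.e. a convex polygon in the plane.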
no code implementations • 10 Jan 2022 • Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian, Asuman Ozdaglar
We consider a platform's problem of collecting data from privacy-sensitive users to estimate an underlying parameter of interest.
no code implementations • 25 Jun 2021 • Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar
We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
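A minimal sketch of the kind of update this line of work concerns, assuming per-sample gradient clipping with threshold C, Gaussian noise with multiplier sigma, and an AdaGrad-style accumulator built from the privatized gradient; the function and parameter names are illustrative and not the algorithm from the paper:

```python
# Illustrative sketch (not the paper's algorithm): one differentially private
# adaptive update with per-sample clipping and Gaussian noise.
import numpy as np

def dp_adaptive_step(w, per_sample_grads, accum, C=1.0, sigma=1.0, eta=0.1, eps=1e-8):
    # Clip each per-sample gradient to norm at most C.
    clipped = [g * min(1.0, C / (np.linalg.norm(g) + eps)) for g in per_sample_grads]
    n = len(clipped)
    # Average the clipped gradients and add Gaussian noise calibrated to C.
    noisy_grad = np.mean(clipped, axis=0) + np.random.normal(0.0, sigma * C / n, size=w.shape)
    # The AdaGrad-style accumulator uses only the privatized gradient, so the
    # adaptive stepsize is post-processing and costs no extra privacy budget.
    accum = accum + noisy_grad ** 2
    w = w - eta * noisy_grad / (np.sqrt(accum) + eps)
    return w, accum
```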
1 code implementation • 14 Jun 2021 • Theo Diamandis, Yonina C. Eldar, Alireza Fallah, Farzan Farnia, Asuman Ozdaglar
We propose an optimal transport-based framework for mixed linear regression (MLR) problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models.
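A minimal formulation of the setup, with notation chosen here for illustration (the paper's exact objective may differ): data come from a mixture of $K$ linear models,

$$ y = \langle \beta_z, x \rangle + \varepsilon, \qquad z \sim \mathrm{Categorical}(\pi_1, \dots, \pi_K), $$

and the learned parameters $\hat{\Theta} = \{(\hat{\pi}_k, \hat{\beta}_k)\}_{k=1}^K$ are fit by minimizing a Wasserstein distance between the joint distribution over $(x, y)$ they induce and the one induced by the target parameters, $\min_{\hat{\Theta}} \mathcal{W}\big(P_{\hat{\Theta}}, P_{\Theta^\star}\big)$.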
no code implementations • NeurIPS 2021 • Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
In this paper, we study the generalization properties of Model-Agnostic Meta-Learning (MAML) algorithms for supervised learning problems.
2 code implementations • NeurIPS 2020 • Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
In this paper, we study a personalized variant of federated learning in which the goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.
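One way to write the objective this describes, with $f_i$ denoting user $i$'s local loss, $\alpha$ the local stepsize, and $n$ the number of users (notation chosen here for illustration):

$$ \min_{w} \; \frac{1}{n} \sum_{i=1}^{n} f_i\big(w - \alpha \nabla f_i(w)\big), $$

so the shared initialization $w$ is judged by how well each user performs after taking one gradient step on their own data.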
no code implementations • 13 Feb 2020 • Alireza Fallah, Asuman Ozdaglar, Sarath Pattathil
We propose a multistage variant of stochastic gradient descent-ascent (GDA), called M-GDA, which runs in stages with a particular learning-rate decay schedule and converges to the exact solution of the minimax problem.
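A hypothetical sketch of the multistage idea: run plain stochastic GDA within each stage and shrink the stepsize between stages. The stage lengths and decay factor below are illustrative placeholders, not the schedule analyzed in the paper.

```python
# Illustrative multistage stochastic gradient descent-ascent (not the paper's
# exact schedule): descend on x, ascend on y, decay the stepsize per stage.
def multistage_gda(x, y, stoch_grad_x, stoch_grad_y,
                   eta0=0.1, decay=0.5, stages=5, iters_per_stage=1000):
    eta = eta0
    for _ in range(stages):
        for _ in range(iters_per_stage):
            gx = stoch_grad_x(x, y)   # stochastic gradient in the min variable
            gy = stoch_grad_y(x, y)   # stochastic gradient in the max variable
            x = x - eta * gx          # descent step
            y = y + eta * gy          # ascent step
        eta *= decay                  # decay the learning rate between stages
    return x, y
```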
1 code implementation • NeurIPS 2021 • Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar
We consider Model-Agnostic Meta-Learning (MAML) methods for Reinforcement Learning (RL) problems, where the goal is to use data from several tasks, each represented by a Markov Decision Process (MDP), to find a policy that can be updated by one step of stochastic policy gradient for the realized MDP.
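In generic MAML-for-RL notation (chosen here for illustration), with $J_i(\theta)$ the expected return of policy $\pi_\theta$ on task $i$ and $\hat{\nabla} J_i(\theta)$ a stochastic policy-gradient estimate, the adaptation step and the meta-objective read

$$ \theta_i' = \theta + \alpha\, \hat{\nabla} J_i(\theta), \qquad \max_{\theta} \; \mathbb{E}_{i}\Big[ J_i\big(\theta + \alpha\, \hat{\nabla} J_i(\theta)\big) \Big]. $$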
no code implementations • 19 Oct 2019 • Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu
When gradients do not contain noise, we also prove that distributed accelerated methods can \emph{achieve acceleration}, requiring $\mathcal{O}(\kappa \log(1/\varepsilon))$ gradient evaluations and $\mathcal{O}(\kappa \log(1/\varepsilon))$ communications to converge to the same fixed point as the non-accelerated variant, where $\kappa$ is the condition number and $\varepsilon$ is the target accuracy.
no code implementations • 27 Aug 2019 • Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar
We study the convergence of a class of gradient-based Model-Agnostic Meta-Learning (MAML) methods and characterize their overall complexity as well as their best achievable accuracy in terms of gradient norm for nonconvex loss functions.
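For context, in the standard one-step MAML formulation (notation chosen here, with per-task loss $f$ and inner stepsize $\alpha$), the meta-objective and its gradient are

$$ F(w) = f\big(w - \alpha \nabla f(w)\big), \qquad \nabla F(w) = \big(I - \alpha \nabla^2 f(w)\big)\, \nabla f\big(w - \alpha \nabla f(w)\big), $$

so for nonconvex losses it is natural to state guarantees in terms of the norm of this meta-gradient, $\|\nabla F(w)\|$.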
no code implementations • NeurIPS 2019 • Necdet Serhat Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar
We study the problem of minimizing a strongly convex, smooth function when we have noisy estimates of its gradient.
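As a generic illustration of the setting (not the specific algorithms analyzed), a first-order method here only has access to inexact gradients,

$$ w_{t+1} = w_t - \eta \big(\nabla f(w_t) + \xi_t\big), \qquad \mathbb{E}[\xi_t] = 0, $$

and the question is how fast $w_t$ approaches the minimizer and how much the noise $\xi_t$ inflates the error level the method settles at.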
no code implementations • 27 May 2018 • Necdet Serhat Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar
We study the trade-offs between convergence rate and robustness to gradient errors in designing a first-order algorithm.
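A textbook illustration of this trade-off for plain gradient descent with noisy gradients (not the accelerated methods designed in the paper): for a $\mu$-strongly convex, $L$-smooth $f$, stepsize $\eta \le 1/L$, and gradient noise of variance $\sigma^2$, one has, up to constants,

$$ \mathbb{E}\,\|w_{t+1} - w^\star\|^2 \;\le\; (1 - \eta \mu)\, \mathbb{E}\,\|w_t - w^\star\|^2 + \eta^2 \sigma^2, $$

so a larger stepsize gives a faster contraction factor $1 - \eta\mu$ but a larger noise floor of order $\eta \sigma^2 / \mu$.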