Search Results for author: Alireza Fallah

Found 14 papers, 3 papers with code

On Three-Layer Data Markets

no code implementations 15 Feb 2024 Alireza Fallah, Michael I. Jordan, Ali Makhdoumi, Azarakhsh Malekian

We study a three-layer data market comprising users (data owners), platforms, and a data buyer.

The Limits of Price Discrimination Under Privacy Constraints

no code implementations 13 Feb 2024 Alireza Fallah, Michael I. Jordan, Ali Makhdoumi, Azarakhsh Malekian

We consider a privacy mechanism that provides a degree of protection by probabilistically masking each market segment, and we establish that the resultant set of all consumer-producer utilities forms a convex polygon, characterized explicitly as a linear mapping of a certain high-dimensional convex polytope into $\mathbb{R}^2$.

Optimal and Differentially Private Data Acquisition: Central and Local Mechanisms

no code implementations 10 Jan 2022 Alireza Fallah, Ali Makhdoumi, Azarakhsh Malekian, Asuman Ozdaglar

We consider a platform's problem of collecting data from privacy-sensitive users to estimate an underlying parameter of interest.
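As a rough illustration of the central-versus-local distinction in the title, here is a minimal sketch of mean estimation under the standard Gaussian mechanism; the function names and noise calibration are generic assumptions, not the mechanisms designed in the paper.

```python
import numpy as np

def central_dp_mean(values, epsilon, delta, clip=1.0, rng=np.random.default_rng(0)):
    """Central model: the platform sees raw values, then adds noise once to the aggregate."""
    clipped = np.clip(values, -clip, clip)
    sensitivity = 2 * clip / len(values)            # sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return clipped.mean() + rng.normal(0.0, sigma)

def local_dp_mean(values, epsilon, delta, clip=1.0, rng=np.random.default_rng(0)):
    """Local model: each user perturbs their own value before sharing it."""
    clipped = np.clip(values, -clip, clip)
    sigma = 2 * clip * np.sqrt(2 * np.log(1.25 / delta)) / epsilon   # per-user noise scale
    return (clipped + rng.normal(0.0, sigma, size=len(values))).mean()

values = np.random.default_rng(1).normal(0.3, 1.0, size=10_000)
print(central_dp_mean(values, 1.0, 1e-5), local_dp_mean(values, 1.0, 1e-5))
```

The local estimate is noisier for the same privacy budget, which is the basic tension between the two models that the paper's mechanisms must trade off.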

Private Adaptive Gradient Methods for Convex Optimization

no code implementations 25 Jun 2021 Hilal Asi, John Duchi, Alireza Fallah, Omid Javidbakht, Kunal Talwar

We study adaptive methods for differentially private convex optimization, proposing and analyzing differentially private variants of a Stochastic Gradient Descent (SGD) algorithm with adaptive stepsizes, as well as the AdaGrad algorithm.
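A loose sketch in the spirit of DP-SGD with an AdaGrad-style stepsize: clip per-example gradients, add Gaussian noise to their average, and scale the step by the accumulated squared noisy gradients. The names, noise calibration, and accumulator below are illustrative; the paper's algorithms and privacy accounting differ in detail.

```python
import numpy as np

def dp_adaptive_step(w, per_example_grads, accum, lr=0.1, clip=1.0,
                     noise_mult=1.0, eps=1e-8, rng=np.random.default_rng(0)):
    """One clipped, noised, adaptively scaled update (illustrative only)."""
    # Clip each per-example gradient so the batch mean has bounded sensitivity.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip / (norms + eps))
    # Privatize the averaged gradient with Gaussian noise.
    g = clipped.mean(axis=0)
    g = g + rng.normal(0.0, noise_mult * clip / len(per_example_grads), size=g.shape)
    # AdaGrad-style stepsize built from the noisy gradient history.
    accum = accum + g ** 2
    return w - lr * g / (np.sqrt(accum) + eps), accum
```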

A Wasserstein Minimax Framework for Mixed Linear Regression

1 code implementation 14 Jun 2021 Theo Diamandis, Yonina C. Eldar, Alireza Fallah, Farzan Farnia, Asuman Ozdaglar

We propose an optimal transport-based framework for MLR problems, Wasserstein Mixed Linear Regression (WMLR), which minimizes the Wasserstein distance between the learned and target mixture regression models.

Federated Learning, regression
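For context on why a Wasserstein objective naturally yields a minimax problem, recall the Kantorovich-Rubinstein dual of the 1-Wasserstein distance (a standard identity, stated in generic notation rather than the paper's exact WMLR objective):

$$ W_1(P, Q) \;=\; \sup_{\|D\|_{L} \le 1} \; \mathbb{E}_{z \sim P}[D(z)] - \mathbb{E}_{z \sim Q}[D(z)]. $$

Minimizing this distance between the learned and target mixture regression models' distributions over $(x, y)$ pairs is then a min-max problem over the mixture parameters and a 1-Lipschitz critic $D$.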

Generalization of Model-Agnostic Meta-Learning Algorithms: Recurring and Unseen Tasks

no code implementations NeurIPS 2021 Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar

In this paper, we study the generalization properties of Model-Agnostic Meta-Learning (MAML) algorithms for supervised learning problems.

Generalization Bounds, Meta-Learning

Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach

2 code implementations NeurIPS 2020 Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar

In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.

Meta-Learning, Personalized Federated Learning
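A compact sketch of the idea, assuming a first-order (Hessian-free) approximation of the meta-gradient; the function names and stepsizes are placeholders rather than the paper's exact updates.

```python
import numpy as np

def personalize(w_shared, grad_fn, local_data, alpha=0.01, steps=1):
    """Adapt the shared initialization to one user's data with a few gradient steps."""
    w = np.array(w_shared, dtype=float)
    for _ in range(steps):
        w = w - alpha * grad_fn(w, local_data)
    return w

def meta_round(w_shared, users_data, grad_fn, alpha=0.01, beta=0.1):
    """One round: each user adapts locally, reports the gradient at the adapted
    point, and the server averages these to move the shared initialization
    (a first-order approximation that drops the Hessian correction)."""
    meta_grads = [grad_fn(personalize(w_shared, grad_fn, d, alpha), d)
                  for d in users_data]
    return w_shared - beta * np.mean(meta_grads, axis=0)
```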

Personalized Federated Learning: A Meta-Learning Approach

no code implementations 19 Feb 2020 Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar

In this paper, we study a personalized variant of federated learning in which our goal is to find an initial shared model that current or new users can easily adapt to their local dataset by performing one or a few steps of gradient descent with respect to their own data.

Meta-Learning, Personalized Federated Learning

An Optimal Multistage Stochastic Gradient Method for Minimax Problems

no code implementations 13 Feb 2020 Alireza Fallah, Asuman Ozdaglar, Sarath Pattathil

Next, we propose a multistage variant of stochastic GDA (M-GDA) that runs in multiple stages with a particular learning rate decay schedule and converges to the exact solution of the minimax problem.
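A bare-bones sketch of stochastic gradient descent-ascent run in stages with a stage-wise constant, decaying stepsize; the stage lengths and decay factor below are placeholders, not the schedule analyzed in the paper.

```python
def multistage_sgda(stoch_grad_x, stoch_grad_y, x, y, eta0=0.1,
                    stages=4, steps_per_stage=1000, decay=0.5):
    """Stochastic gradient descent-ascent with a stage-wise constant,
    geometrically decaying learning rate (schedule values are illustrative)."""
    eta = eta0
    for _ in range(stages):
        for _ in range(steps_per_stage):
            gx, gy = stoch_grad_x(x, y), stoch_grad_y(x, y)  # noisy gradient oracles
            x, y = x - eta * gx, y + eta * gy                # descend in x, ascend in y
        eta *= decay                                         # shrink the stepsize each stage
    return x, y
```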

On the Convergence Theory of Debiased Model-Agnostic Meta-Reinforcement Learning

1 code implementation NeurIPS 2021 Alireza Fallah, Kristian Georgiev, Aryan Mokhtari, Asuman Ozdaglar

We consider Model-Agnostic Meta-Learning (MAML) methods for Reinforcement Learning (RL) problems, where the goal is to find a policy, using data from several tasks represented by Markov Decision Processes (MDPs), that can be updated by one step of stochastic policy gradient for the realized MDP.

Meta-Learning, Meta Reinforcement Learning +3
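In standard MAML-RL notation (not necessarily the paper's exact formulation), the objective described above is to pick an initialization $\theta$ that performs well after one stochastic policy-gradient step on the realized MDP:

$$ \max_{\theta} \; \mathbb{E}_{\mathcal{M}}\left[ J_{\mathcal{M}}\big(\theta + \alpha\, \widehat{\nabla} J_{\mathcal{M}}(\theta)\big) \right], $$

where $J_{\mathcal{M}}$ denotes the expected return on MDP $\mathcal{M}$ and $\widehat{\nabla} J_{\mathcal{M}}(\theta)$ is a stochastic policy-gradient estimate of its gradient.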

Robust Distributed Accelerated Stochastic Gradient Methods for Multi-Agent Networks

no code implementations 19 Oct 2019 Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar, Umut Simsekli, Lingjiong Zhu

When gradients do not contain noise, we also prove that distributed accelerated methods can \emph{achieve acceleration}, requiring $\mathcal{O}(\kappa \log(1/\varepsilon))$ gradient evaluations and $\mathcal{O}(\kappa \log(1/\varepsilon))$ communications to converge to the same fixed point as the non-accelerated variant, where $\kappa$ is the condition number and $\varepsilon$ is the target accuracy.

Stochastic Optimization
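A loose sketch of one common way acceleration is combined with consensus averaging over a multi-agent network: each agent takes a momentum step on its local (possibly noisy) gradient and then mixes its iterate with neighbors through a doubly stochastic matrix. The momentum, stepsize, and mixing choices here are illustrative assumptions, not the scheme analyzed in the paper.

```python
import numpy as np

def distributed_accelerated(local_grads, W, x0, eta=0.01, gamma=0.9, iters=500):
    """local_grads[i] returns agent i's (possibly noisy) gradient; W is a
    doubly stochastic mixing matrix encoding the communication network."""
    n = len(local_grads)
    x = np.tile(np.asarray(x0, dtype=float), (n, 1))            # one row per agent
    v = np.zeros_like(x)
    for _ in range(iters):
        y = x + gamma * v                                       # momentum look-ahead
        g = np.stack([local_grads[i](y[i]) for i in range(n)])  # local gradient calls
        v = gamma * v - eta * g
        x = W @ (x + v)                                         # consensus averaging step
    return x.mean(axis=0)                                       # network-average iterate
```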

On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms

no code implementations 27 Aug 2019 Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar

We study the convergence of a class of gradient-based Model-Agnostic Meta-Learning (MAML) methods and characterize their overall complexity as well as their best achievable accuracy in terms of gradient norm for nonconvex loss functions.

Meta-Learning
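For reference, the one-step gradient-based MAML objective and its exact gradient in standard single-task notation (the paper's multi-task, stochastic analysis is richer than this snapshot):

$$ F(\theta) := f\big(\theta - \alpha \nabla f(\theta)\big), \qquad \nabla F(\theta) = \big(I - \alpha \nabla^{2} f(\theta)\big)\, \nabla f\big(\theta - \alpha \nabla f(\theta)\big). $$

For nonconvex $f$, accuracy is then naturally measured by how small $\|\nabla F(\theta)\|$ is at the returned iterate.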

A Universally Optimal Multistage Accelerated Stochastic Gradient Method

no code implementations NeurIPS 2019 Necdet Serhat Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar

We study the problem of minimizing a strongly convex, smooth function when we have noisy estimates of its gradient.
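A bare-bones sketch of the multistage pattern (accelerated steps within each stage, smaller stepsizes in later stages, restarting between stages); the stage lengths, stepsizes, and momentum below are placeholders, not the parameters that yield the optimality guarantees in the paper.

```python
import numpy as np

def multistage_accelerated_sgd(noisy_grad, x0, L=10.0, stage_steps=(100, 200, 400),
                               momentum=0.9):
    """Accelerated SGD restarted over stages with a shrinking stepsize (illustrative)."""
    x = np.asarray(x0, dtype=float)
    for s, steps in enumerate(stage_steps):
        eta = 1.0 / (L * 2 ** s)          # smaller steps in later stages
        y, x_prev = x.copy(), x.copy()
        for _ in range(steps):
            x_next = y - eta * noisy_grad(y)              # step from the look-ahead point
            y = x_next + momentum * (x_next - x_prev)     # Nesterov-style extrapolation
            x_prev = x_next
        x = x_prev                                        # restart the next stage from here
    return x
```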

Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions

no code implementations 27 May 2018 Necdet Serhat Aybat, Alireza Fallah, Mert Gurbuzbalaban, Asuman Ozdaglar

We study the trade-offs between convergence rate and robustness to gradient errors in designing a first-order algorithm.
