Search Results for author: Abolfazl Hashemi

Found 28 papers, 7 papers with code

AdaGossip: Adaptive Consensus Step-size for Decentralized Deep Learning with Communication Compression

no code implementations · 9 Apr 2024 · Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy

Decentralized learning is crucial in supporting on-device learning over large distributed datasets, eliminating the need for a central server.

Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis

no code implementations · 9 Apr 2024 · Guangchen Lan, Dong-Jun Han, Abolfazl Hashemi, Vaneet Aggarwal, Christopher G. Brinton

Moreover, compared to synchronous FedPG, AFedPG improves the time complexity from $\mathcal{O}(\frac{t_{\max}}{N})$ to $\mathcal{O}(\frac{1}{\sum_{i=1}^{N} \frac{1}{t_{i}}})$, where $t_{i}$ denotes the per-iteration time consumption at agent $i$ and $t_{\max}$ is the largest among them.
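
To see what the harmonic-mean form buys, take an illustrative two-agent example (the numbers are ours, not from the paper): with $t_1 = 1$ and $t_2 = 10$,

$$\frac{t_{\max}}{N} = \frac{10}{2} = 5, \qquad \frac{1}{\sum_{i=1}^{N} 1/t_i} = \frac{1}{1/1 + 1/10} = \frac{10}{11} \approx 0.91,$$

so the asynchronous scheme's per-iteration cost is not throttled by the slowest agent.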

Localized Distributional Robustness in Submodular Multi-Task Subset Selection

no code implementations · 4 Apr 2024 · Ege C. Kaya, Abolfazl Hashemi

This approach bridges the existing gap in the optimization of performance-robustness trade-offs in multi-task subset selection.

FedNMUT -- Federated Noisy Model Update Tracking Convergence Analysis

no code implementations · 20 Mar 2024 · Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, Stanislaw H. Żak

We propose FedNMUT, a novel Decentralized Noisy Model Update Tracking Federated Learning algorithm tailored to function efficiently in the presence of noisy communication channels that reflect imperfect information exchange.

Federated Learning

Unveiling Privacy, Memorization, and Input Curvature Links

no code implementations · 28 Feb 2024 · Deepak Ravikumar, Efstathia Soufleri, Abolfazl Hashemi, Kaushik Roy

Second, we present a novel insight showing that input loss curvature is upper-bounded by the differential privacy parameter.

Memorization

Improved Convergence Analysis and SNR Control Strategies for Federated Learning in the Presence of Noise

no code implementations · 14 Jul 2023 · Antesh Upadhyay, Abolfazl Hashemi

We propose an improved convergence analysis technique that characterizes the distributed learning paradigm of federated learning (FL) with imperfect/noisy uplink and downlink communications.

Federated Learning

Communication-Efficient Zeroth-Order Distributed Online Optimization: Algorithm, Theory, and Applications

1 code implementation · 9 Jun 2023 · Ege C. Kaya, M. Berk Sahin, Abolfazl Hashemi

This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.

Federated Learning
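
As a rough illustration of the zeroth-order setting (our sketch, assuming a standard two-point estimator rather than the paper's exact scheme), an agent can estimate a gradient from function evaluations alone:

```python
import numpy as np

def two_point_zo_gradient(f, x, delta=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x: probe f
    along a random unit direction u and scale the finite difference
    so the estimate matches the true gradient in expectation."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# Toy usage: a quadratic "tracking" loss whose true gradient is x - target.
target = np.array([1.0, -2.0])
loss = lambda x: 0.5 * np.sum((x - target) ** 2)
avg = np.mean([two_point_zo_gradient(loss, np.zeros(2)) for _ in range(5000)], axis=0)
print(avg)  # averages to roughly [-1.0, 2.0]
```

Averaged over many random directions, the estimate recovers the true gradient, which is what makes gradient-free tracking feasible when only loss queries are available.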

Global Update Tracking: A Decentralized Learning Algorithm for Heterogeneous Data

1 code implementation · NeurIPS 2023 · Sai Aparna Aketi, Abolfazl Hashemi, Kaushik Roy

Decentralized learning enables the training of deep learning models over large distributed datasets generated at different locations, without the need for a central server.

On the Convergence of Decentralized Federated Learning Under Imperfect Information Sharing

1 code implementation · 19 Mar 2023 · Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, Stanislaw H. Żak

The first algorithm, Federated Noisy Decentralized Learning (FedNDL1), comes from the literature; it adds noise to the model parameters to simulate the presence of noisy communication channels.

Distributed Optimization · Federated Learning

No-Regret Learning in Dynamic Stackelberg Games

no code implementations · 10 Feb 2022 · Niklas Lauffer, Mahsa Ghasemi, Abolfazl Hashemi, Yagiz Savas, Ufuk Topcu

The regret of the proposed learning algorithm is independent of the size of the state space and polynomial in the rest of the parameters of the game.

Scheduling

On the Benefits of Inducing Local Lipschitzness for Robust Generative Adversarial Imitation Learning

no code implementations · 30 Jun 2021 · Farzan Memarian, Abolfazl Hashemi, Scott Niekum, Ufuk Topcu

We explore methodologies to improve the robustness of generative adversarial imitation learning (GAIL) algorithms to observation noise.

Imitation Learning

Robust Training in High Dimensions via Block Coordinate Geometric Median Descent

2 code implementations · 16 Jun 2021 · Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu

Geometric median (GM) is a classical statistical method for robustly estimating the uncorrupted data; under gross corruption, it achieves the optimal breakdown point of 0.5.

Ranked #19 on Image Classification on MNIST (Accuracy metric)

Image Classification · Vocal Bursts Intensity Prediction
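
For context, the geometric median is classically computed with the Weiszfeld fixed-point iteration; the sketch below (ours, with toy data) shows why it resists gross corruption while the plain mean does not. The paper applies GM in a block-coordinate fashion, per its title, rather than exactly as here.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """Weiszfeld iteration: a classical fixed-point scheme for the
    geometric median, which stays accurate even when up to half of
    the points are corrupted (breakdown point 0.5)."""
    y = points.mean(axis=0)  # initialize at the (non-robust) mean
    for _ in range(iters):
        dists = np.linalg.norm(points - y, axis=1)
        w = 1.0 / np.maximum(dists, eps)  # inverse-distance weights
        y_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

# Gross corruption: one of four "client updates" is wildly off.
updates = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [100.0, -100.0]])
print(updates.mean(axis=0))       # mean is dragged away by the outlier
print(geometric_median(updates))  # stays near [1, 1]
```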

On the Convergence of Differentially Private Federated Learning on Non-Lipschitz Objectives, and with Normalized Client Updates

no code implementations · 13 Jun 2021 · Rudrajit Das, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon

The primary reason is that the clipping operation (i.e., projection onto an $\ell_2$ ball of a fixed radius, called the clipping threshold), used to bound the sensitivity of the average update to each client's update, introduces a bias that depends on the clipping threshold and the number of local steps in FL, and this bias is not easy to analyze.

Benchmarking · Federated Learning +1
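
A minimal sketch of the two operations the summary contrasts, assuming standard definitions (this is our illustration, not the paper's full differentially private algorithm, which would also add calibrated noise):

```python
import numpy as np

def clip_update(u, c):
    """Clipping: project the client update u onto the l2 ball of
    radius c (the clipping threshold). This bounds the sensitivity
    of the averaged update but introduces a threshold-dependent bias."""
    norm = np.linalg.norm(u)
    return u if norm <= c else (c / norm) * u

def normalize_update(u, eps=1e-12):
    """Normalized update: keep only the direction, so every client
    contributes a fixed-norm vector regardless of its update's size."""
    return u / (np.linalg.norm(u) + eps)
```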

Generalization Bounds for Sparse Random Feature Expansions

2 code implementations · 4 Mar 2021 · Abolfazl Hashemi, Hayden Schaeffer, Robert Shi, Ufuk Topcu, Giang Tran, Rachel Ward

In particular, we provide generalization bounds for functions in a certain class (that is dense in a reproducing kernel Hilbert space) depending on the number of samples and the distribution of features.

BIG-bench Machine Learning · Compressive Sensing +1

Physical-Layer Security via Distributed Beamforming in the Presence of Adversaries with Unknown Locations

no code implementations · 28 Feb 2021 · Yagiz Savas, Abolfazl Hashemi, Abraham P. Vinod, Brian M. Sadler, Ufuk Topcu

In such a setting, we develop a periodic transmission strategy, i.e., a sequence of joint beamforming gain and artificial noise pairs, that prevents the adversaries from reducing their uncertainty about the information sequence by eavesdropping on the transmission.

Communication-Efficient Variance-Reduced Decentralized Stochastic Optimization over Time-Varying Directed Graphs

no code implementations · 23 Jan 2021 · Yiyue Chen, Abolfazl Hashemi, Haris Vikalo

To our knowledge, this is the first decentralized optimization framework for time-varying directed networks that achieves such a convergence rate and applies to settings requiring sparsified communication.

Stochastic Optimization

Faster Non-Convex Federated Learning via Global and Local Momentum

no code implementations · 7 Dec 2020 · Rudrajit Das, Anish Acharya, Abolfazl Hashemi, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu

We propose FedGLOMO, a novel federated learning (FL) algorithm with an iteration complexity of $\mathcal{O}(\epsilon^{-1.5})$ to converge to an $\epsilon$-stationary point (i.e., $\mathbb{E}[\|\nabla f(\bm{x})\|^2] \leq \epsilon$) for smooth non-convex functions -- under arbitrary client heterogeneity and compressed communication -- compared to the $\mathcal{O}(\epsilon^{-2})$ complexity of most prior works.

Federated Learning

On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization

1 code implementation · 20 Nov 2020 · Abolfazl Hashemi, Anish Acharya, Rudrajit Das, Haris Vikalo, Sujay Sanghavi, Inderjit Dhillon

In this paper, we show that, in such compressed decentralized optimization settings, there are benefits to having multiple gossip steps between subsequent gradient iterations, even when the cost of doing so is appropriately accounted for, e.g., by reducing the precision of the compressed information.
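
A minimal sketch of the multiple-gossip idea, under our own simplifications (a fixed doubly-stochastic mixing matrix W and a crude rounding compressor standing in for reduced precision):

```python
import numpy as np

def low_precision(X, scale=0.1):
    """Crude stand-in for lossy compression: round each entry to a
    grid of width `scale`, as if fewer bits were sent per coordinate."""
    return scale * np.round(X / scale)

def multi_gossip(X, W, q=4):
    """Run q gossip (mixing) steps between consecutive gradient
    iterations: each step averages compressed agent iterates (rows
    of X) over the network encoded by the mixing matrix W."""
    for _ in range(q):
        X = W @ low_precision(X)
    return X
```

The trade-off the paper quantifies: q > 1 drives the agents closer to consensus per gradient step, paying for the extra rounds with cheaper, lower-precision messages.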

Decentralized Optimization On Time-Varying Directed Graphs Under Communication Constraints

no code implementations · 27 May 2020 · Yiyue Chen, Abolfazl Hashemi, Haris Vikalo

We propose a communication-efficient algorithm for decentralized convex optimization that relies on sparsification of the local updates exchanged between neighboring agents in the network.
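
As an illustration of sparsified exchange (our sketch; the paper's exact operator may differ), a top-k sparsifier keeps only the largest-magnitude coordinates of a local update so that just k index-value pairs travel over the network:

```python
import numpy as np

def sparsify_top_k(u, k):
    """Zero out all but the k largest-magnitude entries of the
    local update u; only the surviving (index, value) pairs need
    to be transmitted to neighboring agents."""
    out = np.zeros_like(u)
    idx = np.argpartition(np.abs(u), -k)[-k:]
    out[idx] = u[idx]
    return out
```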

Identifying Sparse Low-Dimensional Structures in Markov Chains: A Nonnegative Matrix Factorization Approach

no code implementations · 27 Sep 2019 · Mahsa Ghasemi, Abolfazl Hashemi, Haris Vikalo, Ufuk Topcu

We formulate the task of representation learning as that of mapping the state space of the model to a low-dimensional state space, called the kernel space.

Representation Learning

Performance-Complexity Tradeoffs in Greedy Weak Submodular Maximization with Random Sampling

no code implementations · 22 Jul 2019 · Abolfazl Hashemi, Haris Vikalo, Gustavo de Veciana

The latter implies that uniform sampling strategies with a fixed sampling size achieve a non-trivial approximation factor; however, we show that with overwhelming probability, these methods fail to find the optimal subset.

Dimensionality Reduction · feature selection +1

Evolutionary Self-Expressive Models for Subspace Clustering

no code implementations · 29 Oct 2018 · Abolfazl Hashemi, Haris Vikalo

The problem of organizing data that evolves over time into clusters is encountered in a number of practical settings.

Clustering

Towards Accelerated Greedy Sampling and Reconstruction of Bandlimited Graph Signals

no code implementations · 19 Jul 2018 · Abolfazl Hashemi, Rasoul Shafipour, Haris Vikalo, Gonzalo Mateos

Then, we consider the Bayesian scenario where we formulate the sampling task as the problem of maximizing a monotone weak submodular function, and propose a randomized-greedy algorithm to find a sub-optimal subset of informative nodes.

Accelerated Sparse Subspace Clustering

no code implementations · 31 Oct 2017 · Abolfazl Hashemi, Haris Vikalo

State-of-the-art algorithms for sparse subspace clustering perform spectral clustering on a similarity matrix typically obtained by representing each data point as a sparse combination of other points using either basis pursuit (BP) or orthogonal matching pursuit (OMP).

Clustering
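
For reference, a bare-bones version of the self-expression step (our sketch of standard OMP, not the paper's accelerated variant): each point is approximated by a k-sparse combination of the remaining points, and the coefficients populate the similarity matrix handed to spectral clustering.

```python
import numpy as np

def omp_self_expression(X, j, k):
    """Represent column j of the data matrix X as a k-sparse
    combination of the other columns via orthogonal matching
    pursuit; the coefficients form one column of the similarity
    matrix used for spectral clustering."""
    A = np.delete(X, j, axis=1)  # dictionary: all remaining points
    y = X[:, j]
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.T @ r))))  # most correlated atom
        c, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        r = y - A[:, idx] @ c    # residual orthogonal to chosen atoms
    coef = np.zeros(A.shape[1])
    coef[idx] = c
    return coef
```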

Sampling and Reconstruction of Graph Signals via Weak Submodularity and Semidefinite Relaxation

no code implementations · 31 Oct 2017 · Abolfazl Hashemi, Rasoul Shafipour, Haris Vikalo, Gonzalo Mateos

We study the problem of sampling a bandlimited graph signal in the presence of noise, where the objective is to select a node subset of prescribed cardinality that minimizes the signal reconstruction mean squared error (MSE).

Sparse recovery via Orthogonal Least-Squares under presence of Noise

no code implementations · 8 Aug 2016 · Abolfazl Hashemi, Haris Vikalo

We consider the Orthogonal Least-Squares (OLS) algorithm for the recovery of an $m$-dimensional $k$-sparse signal from a small number of noisy linear measurements.
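
To make OLS concrete (our naive sketch; the paper analyzes recovery guarantees rather than prescribing this implementation), each greedy step adds the column whose inclusion most reduces the least-squares residual -- the per-step cost that accelerated variants aim to cut:

```python
import numpy as np

def ols_select(A, y, k):
    """Naive Orthogonal Least-Squares: at each step, add the column
    whose inclusion yields the smallest least-squares residual (OMP,
    by contrast, picks the column most correlated with the residual)."""
    support = []
    for _ in range(k):
        best_j, best_res = None, np.inf
        for j in range(A.shape[1]):
            if j in support:
                continue
            B = A[:, support + [j]]          # candidate support
            c, *_ = np.linalg.lstsq(B, y, rcond=None)
            res = np.linalg.norm(y - B @ c)  # residual if j is added
            if res < best_res:
                best_j, best_res = j, res
        support.append(best_j)
    return support
```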

Sampling Requirements and Accelerated Schemes for Sparse Linear Regression with Orthogonal Least-Squares

1 code implementation · 8 Aug 2016 · Abolfazl Hashemi, Haris Vikalo

We analyze the performance of AOLS and establish lower bounds on the probability of exact recovery for both noiseless and noisy random linear measurements.

Clustering · regression

Sparse Linear Regression via Generalized Orthogonal Least-Squares

no code implementations · 22 Feb 2016 · Abolfazl Hashemi, Haris Vikalo

Sparse linear regression, which entails finding a sparse solution to an underdetermined system of linear equations, can formally be expressed as an $\ell_0$-constrained least-squares problem.

regression
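
In symbols, the formulation the summary refers to (with $\mathbf{A}$ the underdetermined measurement matrix, $\mathbf{y}$ the observations, and $k$ the sparsity level) is

$$\min_{\mathbf{x}} \; \|\mathbf{y} - \mathbf{A}\mathbf{x}\|_2^2 \quad \text{subject to} \quad \|\mathbf{x}\|_0 \le k.$$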
