no code implementations • 27 Oct 2023 • Mohammadreza Doostmohammadian, Alireza Aghasi, Maria Vrakopoulou, Hamid R. Rabiee, Usman A. Khan, Themistoklis Charalambous
This paper proposes two nonlinear dynamics to solve the constrained distributed optimization problem of resource allocation over a multi-agent network.
no code implementations • 15 Feb 2023 • Mohammadreza Doostmohammadian, Mohammad Pirani, Usman A. Khan
The tracking part is based on the linear time-difference-of-arrival (TDOA) measurement model proposed in our previous works.
no code implementations • 30 Aug 2022 • Mohammadreza Doostmohammadian, Usman A. Khan, Alireza Aghasi, Themistoklis Charalambous
This paper considers distributed resource allocation and sum-preserving constrained optimization over lossy networks, where the links are unreliable and subject to packet drops.
no code implementations • 11 Feb 2022 • Muhammad I. Qureshi, Usman A. Khan
In this paper, we propose GT-GDA, a distributed optimization method to solve saddle point problems of the form: $\min_{\mathbf{x}} \max_{\mathbf{y}} \{F(\mathbf{x},\mathbf{y}) :=G(\mathbf{x}) + \langle \mathbf{y}, \overline{P} \mathbf{x} \rangle - H(\mathbf{y})\}$, where the functions $G(\cdot)$, $H(\cdot)$, and the coupling matrix $\overline{P}$ are distributed over a strongly connected network of nodes.
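A single-node gradient descent-ascent (GDA) sketch may help illustrate this saddle-point structure; GT-GDA distributes such updates over a network with gradient tracking, which this sketch omits. The quadratic choices of $G$ and $H$, the matrix `P`, and all parameters are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Minimal single-node gradient descent-ascent (GDA) sketch for
# min_x max_y G(x) + <y, Px> - H(y); GT-GDA distributes such updates over
# a network with gradient tracking, which this sketch omits.
# Illustrative assumptions: G(x) = 0.5||x||^2, H(y) = 0.5||y||^2.
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))              # coupling matrix (arbitrary)
x = rng.standard_normal(3)
y = rng.standard_normal(3)
step = 0.05
for _ in range(2000):
    gx = x + P.T @ y                         # grad_x of G(x) + <y, Px>
    gy = P @ x - y                           # grad_y of <y, Px> - H(y)
    x -= step * gx                           # descent in x
    y += step * gy                           # ascent in y
# For these strongly convex/concave choices the unique saddle point is the
# origin, so both iterates shrink toward zero.
print(np.linalg.norm(x), np.linalg.norm(y))
```

Simultaneous (rather than alternating) updates are used: both gradients are evaluated at the current pair before either variable moves.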
no code implementations • 7 Feb 2022 • Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan
This paper proposes AB-SAGA, a first-order distributed stochastic optimization method to minimize a finite-sum of smooth and strongly convex functions distributed over an arbitrary directed graph.
no code implementations • 20 Sep 2021 • Mohammadreza Doostmohammadian, Houman Zarrabi, Hamid R. Rabiee, Usman A. Khan, Themistoklis Charalambous
First, for performance analysis in the attack-free case, we show that the proposed distributed estimation is unbiased with bounded mean-square deviation in steady-state.
no code implementations • 22 May 2021 • Mohammadreza Doostmohammadian, Themistoklis Charalambous, Miadreza Shafie-khah, Nader Meskin, Usman A. Khan
This paper considers distributed estimation of linear systems when the state observations are corrupted with Gaussian noise of unbounded support and under possible random adversarial attacks.
no code implementations • 22 May 2021 • Mohammadreza Doostmohammadian, Themistoklis Charalambous, Miadreza Shafie-khah, Hamid R. Rabiee, Usman A. Khan
Observability and estimation are closely tied to the system structure, which can be visualized as a system graph: a graph that captures the inter-dependencies among the state variables.
no code implementations • 1 Apr 2021 • Mohammadreza Doostmohammadian, Usman A. Khan, Mohammad Pirani, Themistoklis Charalambous
Classical distributed estimation scenarios typically assume timely and reliable exchanges of information over the sensor network.
no code implementations • 1 Apr 2021 • Mohammadreza Doostmohammadian, Alireza Aghasi, Themistoklis Charalambous, Usman A. Khan
In this paper, we consider the binary classification problem via distributed Support-Vector-Machines (SVM), where the idea is to train a network of agents, each holding a limited share of the data, to cooperatively learn the SVM classifier for the global database.
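As a point of reference, here is a centralized soft-margin SVM trained by subgradient descent on the hinge loss, the objective the networked agents learn cooperatively in the paper (the consensus/coordination step is omitted here). The toy Gaussian data and all parameters are illustrative assumptions.

```python
import numpy as np

# Centralized soft-margin SVM via subgradient descent on the hinge loss --
# the objective the agents learn cooperatively in the distributed setting,
# each holding only a share of the data (consensus omitted here).
# Toy data and all parameters below are illustrative.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)),     # class +1 cluster
               rng.normal(-2.0, 0.5, (20, 2))])   # class -1 cluster
y = np.hstack([np.ones(20), -np.ones(20)])
w, b = np.zeros(2), 0.0
lam, step = 0.01, 0.1                             # regularizer, step size
for _ in range(500):
    margins = y * (X @ w + b)
    active = margins < 1                          # margin-violating points
    gw = lam * w - (y[active, None] * X[active]).sum(axis=0) / len(X)
    gb = -y[active].sum() / len(X)
    w -= step * gw
    b -= step * gb
acc = np.mean(np.sign(X @ w + b) == y)
print(acc)                                        # training accuracy
```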
no code implementations • 12 Feb 2021 • Ran Xin, Usman A. Khan, Soummya Kar
This paper considers decentralized stochastic optimization over a network of $n$ nodes, where each node possesses a smooth non-convex local cost function and the goal of the networked nodes is to find an $\epsilon$-accurate first-order stationary point of the sum of the local costs.
no code implementations • 15 Dec 2020 • Mohammadreza Doostmohammadian, Alireza Aghasi, Mohammad Pirani, Ehsan Nekouei, Usman A. Khan, Themistoklis Charalambous
The idea is to optimally allocate the resources among the group of agents by minimizing the overall cost function subject to a fixed sum of resources.
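One classical sum-preserving scheme of this kind can be sketched as follows: each agent exchanges gradients with its neighbors and updates `x_i += step * sum_j a_ij * (f_j'(x_j) - f_i'(x_i))`. With symmetric weights the total resource is invariant and the marginal costs equalize, which is the optimality condition. The quadratic costs and ring graph below are illustrative assumptions, not the paper's proposed dynamics.

```python
import numpy as np

# Sum-preserving (Laplacian) gradient dynamics sketch for resource
# allocation: agents exchange gradients with neighbors. With symmetric
# weights the total sum_i x_i is invariant, and the marginal costs f_i'
# converge to a common value (the optimality condition).
# Quadratic costs f_i(x) = 0.5*c[i]*x^2 and a ring graph are illustrative.
n = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # cost curvatures
x = np.array([10.0, 0.0, 0.0, 0.0, 0.0])     # initial allocation, sum = 10
A = np.zeros((n, n))
for i in range(n):                            # symmetric ring graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
step = 0.05
for _ in range(5000):
    g = c * x                                 # local gradients f_i'(x_i)
    x = x + step * (A @ g - A.sum(axis=1) * g)
print(x.sum())                                # still 10: sum is preserved
print(c * x)                                  # marginal costs equalize
```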
no code implementations • 7 Nov 2020 • Ran Xin, Usman A. Khan, Soummya Kar
For general smooth non-convex problems, we show the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and further describe regimes of practical significance where it outperforms the existing approaches and achieves a network topology-independent iteration complexity.
no code implementations • 12 Sep 2020 • Ran Xin, Shi Pu, Angelia Nedić, Usman A. Khan
Decentralized optimization to minimize a finite sum of functions over a network of nodes has been a significant focus within control and signal processing research due to its natural relevance to optimal control and signal estimation problems.
no code implementations • 17 Aug 2020 • Ran Xin, Usman A. Khan, Soummya Kar
We show that GT-SARAH, with appropriate algorithmic parameters, finds an $\epsilon$-accurate first-order stationary point with $O\big(\max\big\{N^{\frac{1}{2}}, n(1-\lambda)^{-2}, n^{\frac{2}{3}}m^{\frac{1}{3}}(1-\lambda)^{-1}\big\}L\epsilon^{-2}\big)$ gradient complexity, where ${(1-\lambda)\in(0, 1]}$ is the spectral gap of the network weight matrix and $L$ is the smoothness parameter of the cost functions.
1 code implementation • 13 Aug 2020 • Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan
In this paper, we propose Push-SAGA, a decentralized stochastic first-order method for finite-sum minimization over a directed network of nodes.
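The variance-reduced ingredient can be sketched in isolation: a single-node SAGA loop on a finite sum of quadratics. Push-SAGA couples this estimator with push-sum consensus over a directed graph, which this sketch omits; the problem data, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

# Single-node SAGA sketch for a finite sum of quadratics
# f_i(x) = 0.5*(a_i @ x - b_i)^2; Push-SAGA couples this variance-reduced
# gradient estimator with push-sum consensus over a directed graph, which
# this sketch omits. Data, step size, and iteration count are illustrative.
rng = np.random.default_rng(2)
m, d = 50, 3
A = rng.standard_normal((m, d))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                                # zero-residual target
x = np.zeros(d)
table = A * (A @ x - b)[:, None]              # stored gradient of each f_i
avg = table.mean(axis=0)                      # running average of the table
step = 0.01
for _ in range(6000):
    i = rng.integers(m)
    g_new = A[i] * (A[i] @ x - b[i])
    v = g_new - table[i] + avg                # SAGA estimator: unbiased, and
    avg += (g_new - table[i]) / m             # its variance vanishes at x*
    table[i] = g_new
    x -= step * v
print(x)                                      # approaches x_true
```

Because the stored gradient table is refreshed one sample per iteration, the estimator's variance decays to zero at the solution, which is what permits a constant step size and linear convergence.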
no code implementations • 10 Aug 2020 • Ran Xin, Usman A. Khan, Soummya Kar
In this paper, we study decentralized online stochastic non-convex optimization over a network of nodes.
2 code implementations • 15 May 2020 • Muhammad I. Qureshi, Ran Xin, Soummya Kar, Usman A. Khan
In this report, we study decentralized stochastic optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over a directed network of nodes.
no code implementations • 13 Feb 2020 • Ran Xin, Soummya Kar, Usman A. Khan
Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data is distributed over a network of nodes and raw data sharing is not permitted due to privacy and/or resource constraints.
no code implementations • 8 Oct 2019 • Ran Xin, Usman A. Khan, Soummya Kar
Decentralized stochastic optimization has recently benefited from gradient tracking methods \cite{DSGT_Pu, DSGT_Xin} providing efficient solutions for large-scale empirical risk minimization problems.
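The gradient tracking idea can be sketched in a few lines: alongside its estimate, each node maintains an auxiliary variable that tracks the network-average gradient. The scalar quadratic costs and the mixing matrix below are illustrative assumptions.

```python
import numpy as np

# Minimal gradient tracking sketch: node i holds f_i(x) = 0.5*(x - a[i])^2
# (scalar, illustrative) and mixes with a doubly stochastic matrix W.
# The auxiliary variable y tracks the network-average gradient, letting a
# constant step size reach the exact minimizer mean(a), unlike plain
# decentralized (stochastic) gradient descent.
a = np.array([1.0, 4.0, 7.0])                 # local minimizers; optimum = 4
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])            # doubly stochastic mixing
x = np.zeros(3)
g = x - a                                     # local gradients at x
y = g.copy()                                  # tracker starts at the gradients
step = 0.1
for _ in range(300):
    x = W @ x - step * y
    g_new = x - a
    y = W @ y + g_new - g                     # gradient-tracking update
    g = g_new
print(x)                                      # every node approaches 4.0
```

The initialization `y = g` makes the average of the trackers equal the average gradient at every iteration, which is the invariant that drives exact convergence.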
no code implementations • 25 Sep 2019 • Ran Xin, Usman A. Khan, Soummya Kar
In this paper, we study decentralized empirical risk minimization problems, where the goal is to minimize a finite-sum of smooth and strongly-convex functions available over a network of nodes.
no code implementations • 23 Jul 2019 • Ran Xin, Soummya Kar, Usman A. Khan
Decentralized solutions to finite-sum minimization are of significant importance in many signal processing, control, and machine learning applications.
no code implementations • 18 Mar 2019 • Ran Xin, Anit Kumar Sahu, Usman A. Khan, Soummya Kar
In this paper, we study distributed stochastic optimization to minimize a sum of smooth and strongly-convex local cost functions over a network of agents, communicating over a strongly-connected graph.
no code implementations • 21 Jan 2019 • Ran Xin, Dusan Jakovetic, Usman A. Khan
In this letter, we introduce a distributed Nesterov method, termed $\mathcal{ABN}$, that does not require doubly-stochastic weight matrices.
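The weight requirement can be relaxed along AB-type lines: a row-stochastic matrix mixes the estimates while a column-stochastic matrix mixes the gradient trackers, so neither needs to be doubly stochastic. The sketch below shows such an update without the Nesterov momentum term the letter adds; the quadratic costs and the two weight matrices are illustrative assumptions.

```python
import numpy as np

# Sketch of an AB-type update over a directed graph: a row-stochastic A
# mixes the estimates and a column-stochastic B mixes the gradient
# trackers, so no doubly stochastic weights are needed. The Nesterov
# momentum added by ABN is omitted; quadratics f_i(x) = 0.5*(x - a[i])^2
# and the weights below are illustrative assumptions.
a = np.array([1.0, 4.0, 7.0])                 # local minimizers; optimum = 4
A = np.array([[0.3, 0.0, 0.7],
              [0.6, 0.4, 0.0],
              [0.0, 0.2, 0.8]])               # row-stochastic (rows sum to 1)
B = np.array([[0.5, 0.0, 0.3],
              [0.5, 0.7, 0.0],
              [0.0, 0.3, 0.7]])               # column-stochastic (cols sum to 1)
x = np.zeros(3)
g = x - a                                     # local gradients at x
y = g.copy()                                  # tracker starts at the gradients
step = 0.02
for _ in range(5000):
    x = A @ x - step * y
    g_new = x - a
    y = B @ y + g_new - g                     # column-stochastic tracking
    g = g_new
print(x)                                      # every node approaches 4.0
```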