Search Results for author: Guannan Qu

Found 23 papers, 5 papers with code

Efficient Reinforcement Learning for Global Decision Making in the Presence of Local Agents at Scale

no code implementations · 1 Mar 2024 · Emile Anand, Guannan Qu

This work proposes the SUB-SAMPLE-Q algorithm, in which the global agent subsamples $k\leq n$ local agents to compute an optimal policy in time that is only exponential in $k$, providing an exponential speedup over standard methods that are exponential in $n$.

Decision Making
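The core idea in the abstract can be illustrated with a minimal sketch (not the paper's actual algorithm): the global agent draws $k \leq n$ local agents at random and enumerates only their joint local states, so the table it works with has $|S|^k$ entries instead of $|S|^n$. The function name and the binary local state space below are illustrative assumptions.

```python
import itertools
import random


def subsample_sketch(n, k, local_states=(0, 1), seed=0):
    """Illustrative sketch of the subsampling idea: pick k of the n local
    agents and enumerate only their joint states, giving a table of size
    |S|^k rather than |S|^n."""
    rng = random.Random(seed)
    sampled = rng.sample(range(n), k)  # k of the n local agents
    joint_states = list(itertools.product(local_states, repeat=k))
    return sampled, len(joint_states)


# With n = 20 binary-state agents, the full joint space has 2**20 states,
# but subsampling k = 3 agents leaves only 2**3 = 8 joint states to enumerate.
sampled, table_size = subsample_sketch(n=20, k=3)
```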

Efficient Reinforcement Learning for Routing Jobs in Heterogeneous Queueing Systems

no code implementations · 2 Feb 2024 · Neharika Jali, Guannan Qu, Weina Wang, Gauri Joshi

Unlike in homogeneous systems, a threshold policy, which routes jobs to the slow server(s) when the queue length exceeds a certain threshold, is known to be optimal for the one-fast-one-slow two-server system.

reinforcement-learning · Reinforcement Learning (RL)
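The threshold policy described above is simple enough to sketch directly. The rule below is a hedged illustration of the general idea for a one-fast-one-slow system, with assumed function and argument names; the paper's contribution is learning such policies with RL, not this hand-written rule.

```python
def threshold_router(queue_length, threshold, fast_busy):
    """Sketch of a threshold policy for one fast and one slow server:
    use the fast server whenever it is free, and fall back to the slow
    server only once the queue backs up past `threshold`."""
    if not fast_busy:
        return "fast"
    return "slow" if queue_length > threshold else "wait"


# Short queues wait for the fast server; long queues spill to the slow one.
decisions = [threshold_router(q, threshold=5, fast_busy=True) for q in (2, 5, 6)]
```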

CoVO-MPC: Theoretical Analysis of Sampling-based MPC and Optimal Covariance Design

1 code implementation · 14 Jan 2024 · Zeji Yi, Chaoyi Pan, Guanqi He, Guannan Qu, Guanya Shi

Sampling-based Model Predictive Control (MPC) has been a practical and effective approach in many domains, notably model-based reinforcement learning, thanks to its flexibility and parallelizability.

Model-based Reinforcement Learning · Model Predictive Control
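For readers unfamiliar with sampling-based MPC, a generic MPPI-style step looks like the sketch below: sample action sequences from a Gaussian, weight them by exponentiated negative cost, and average. This is a standard baseline, not CoVO-MPC itself; the paper's contribution is designing the covariance optimally, whereas here it is a fixed input.

```python
import numpy as np


def sampling_mpc_step(cost_fn, mean, cov, n_samples=256, temperature=1.0, rng=None):
    """One MPPI-style update: draw candidate action sequences from
    N(mean, cov), weight each by exp(-cost / temperature), and return
    the weighted average as the new nominal sequence."""
    rng = rng or np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    costs = np.array([cost_fn(a) for a in samples])
    weights = np.exp(-(costs - costs.min()) / temperature)  # shift for stability
    weights /= weights.sum()
    return weights @ samples


# Toy quadratic cost with minimum at [1, -1]; the update pulls the
# nominal action sequence toward the low-cost region.
target = np.array([1.0, -1.0])
new_mean = sampling_mpc_step(lambda a: np.sum((a - target) ** 2),
                             mean=np.zeros(2), cov=np.eye(2))
```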

A Scalable Network-Aware Multi-Agent Reinforcement Learning Framework for Decentralized Inverter-based Voltage Control

no code implementations · 7 Dec 2023 · Han Xu, Jialin Zheng, Guannan Qu

This paper addresses the challenges of decentralized voltage control in power grids arising from the growing number of distributed generations (DGs).

Multi-agent Reinforcement Learning

Compositional Neural Certificates for Networked Dynamical Systems

1 code implementation · 25 Mar 2023 · Songyuan Zhang, Yumeng Xiu, Guannan Qu, Chuchu Fan

Specifically, we treat a large networked dynamical system as an interconnection of smaller subsystems and develop methods that can find each subsystem a decentralized controller and an ISS Lyapunov function; the latter can be collectively composed to prove the global stability of the system.

Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning

no code implementations · 30 Nov 2022 · Yizhou Zhang, Guannan Qu, Pan Xu, Yiheng Lin, Zaiwei Chen, Adam Wierman

In particular, we show that, despite restricting each agent's attention to only its $\kappa$-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in $\kappa$.

Multi-agent Reinforcement Learning · reinforcement-learning · +1
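The $\kappa$-hop restriction mentioned in the abstract can be sketched concretely: each agent's localized policy conditions only on the states of agents within $\kappa$ hops of it on the interaction graph. The BFS helper below (an illustrative assumption, not the paper's code) computes such a neighborhood.

```python
from collections import deque


def kappa_hop_neighborhood(adj, agent, kappa):
    """Collect agents within `kappa` hops of `agent` via BFS; a localized
    policy for `agent` would condition only on these agents' states."""
    seen, frontier = {agent}, deque([(agent, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == kappa:
            continue  # do not expand past kappa hops
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen


# Line graph 0-1-2-3-4: agent 2 with kappa=1 sees only {1, 2, 3}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Increasing $\kappa$ enlarges each agent's view of the network, which is exactly the knob whose optimality gap the paper shows decays polynomially.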

Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control

1 code implementation · 16 Sep 2022 · Jie Feng, Yuanyuan Shi, Guannan Qu, Steven H. Low, Anima Anandkumar, Adam Wierman

In this paper, we propose a stability-constrained reinforcement learning (RL) method for real-time voltage control that guarantees system stability both during policy learning and during deployment of the learned policy.

reinforcement-learning · Reinforcement Learning (RL)

KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems

no code implementations · 3 Jun 2022 · Sahin Lale, Yuanyuan Shi, Guannan Qu, Kamyar Azizzadenesheli, Adam Wierman, Anima Anandkumar

However, current reinforcement learning (RL) methods lack stabilization guarantees, which limits their applicability for the control of safety-critical systems.

reinforcement-learning · Reinforcement Learning (RL)

Near-Optimal Distributed Linear-Quadratic Regulator for Networked Systems

1 code implementation · 12 Apr 2022 · Sungho Shin, Yiheng Lin, Guannan Qu, Adam Wierman, Mihai Anitescu

This paper studies the trade-off between the degree of decentralization and the performance of a distributed controller in a linear-quadratic control setting.

Stability Constrained Reinforcement Learning for Real-Time Voltage Control

no code implementations · 30 Sep 2021 · Yuanyuan Shi, Guannan Qu, Steven Low, Anima Anandkumar, Adam Wierman

Deep reinforcement learning (RL) has been recognized as a promising tool to address the challenges in real-time control of power systems.

reinforcement-learning · Reinforcement Learning (RL)

Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions

no code implementations · NeurIPS 2021 · Tongxin Li, Ruixiao Yang, Guannan Qu, Guanya Shi, Chenkai Yu, Adam Wierman, Steven H. Low

Motivated by online learning methods, we design a self-tuning policy that adaptively learns the trust parameter $\lambda$ with a competitive ratio that depends on $\varepsilon$ and the variation of system perturbations and predictions.

Stable Online Control of Linear Time-Varying Systems

no code implementations · 29 Apr 2021 · Guannan Qu, Yuanyuan Shi, Sahin Lale, Anima Anandkumar, Adam Wierman

In this work, we propose an efficient online control algorithm, COvariance Constrained Online Linear Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a large class of LTV systems while also minimizing the control cost.

Reinforcement Learning for Selective Key Applications in Power Systems: Recent Advances and Future Challenges

no code implementations · 27 Jan 2021 · Xin Chen, Guannan Qu, Yujie Tang, Steven Low, Na Li

With large-scale integration of renewable generation and distributed energy resources, modern power systems are confronted with new operational challenges, such as growing complexity, increasing uncertainty, and aggravating volatility.

Decision Making · energy management · +2

Learning Optimal Power Flow: Worst-Case Guarantees for Neural Networks

no code implementations · 19 Jun 2020 · Andreas Venzke, Guannan Qu, Steven Low, Spyros Chatzivasileiadis

This paper introduces for the first time a framework to obtain provable worst-case guarantees for neural network performance, using learning for optimal power flow (OPF) problems as a guiding example.

Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward

no code implementations · NeurIPS 2020 · Guannan Qu, Yiheng Lin, Adam Wierman, Na Li

It has long been recognized that multi-agent reinforcement learning (MARL) faces significant scalability issues because the state and action spaces are exponentially large in the number of agents.

Multi-agent Reinforcement Learning · reinforcement-learning · +1

Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems

no code implementations · L4DC 2020 · Guannan Qu, Adam Wierman, Na Li

We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner where the objective is to find localized policies such that the (discounted) global reward is maximized.

reinforcement-learning · Reinforcement Learning (RL)

Finite-Time Analysis of Asynchronous Stochastic Approximation and $Q$-Learning

no code implementations · 1 Feb 2020 · Guannan Qu, Adam Wierman

We consider a general asynchronous Stochastic Approximation (SA) scheme featuring a weighted infinity-norm contractive operator, and prove a bound on its finite-time convergence rate on a single trajectory.

Q-Learning
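Asynchronous $Q$-learning, the special case named in the title, updates only the single visited $(s, a)$ entry along one trajectory, which is exactly the single-trajectory setting whose finite-time rate the paper bounds via an infinity-norm contraction. Below is a minimal tabular sketch on an assumed toy MDP (the deterministic two-state example is mine, not the paper's).

```python
import random


def async_q_learning(step, reward, n_states, n_actions,
                     gamma=0.9, alpha=0.1, n_steps=5000, seed=0):
    """Asynchronous Q-learning on a single trajectory: each iteration
    updates only the visited (s, a) entry of the Q-table."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(n_steps):
        a = rng.randrange(n_actions)  # uniform exploratory behavior policy
        s2 = step(s, a)
        Q[s][a] += alpha * (reward(s) + gamma * max(Q[s2]) - Q[s][a])
        s = s2
    return Q


# Assumed toy 2-state MDP: action a deterministically moves to state a,
# and being in state 1 yields reward 1. The fixed point has
# Q*(1, 1) = 1 / (1 - gamma) = 10, with action 1 preferred in both states.
Q = async_q_learning(step=lambda s, a: a,
                     reward=lambda s: 1.0 if s == 1 else 0.0,
                     n_states=2, n_actions=2)
```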

Scalable Reinforcement Learning for Multi-Agent Networked Systems

no code implementations · 5 Dec 2019 · Guannan Qu, Adam Wierman, Na Li

We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner where the objective is to find localized policies such that the (discounted) global reward is maximized.

reinforcement-learning · Reinforcement Learning (RL)

Exploiting Fast Decaying and Locality in Multi-Agent MDP with Tree Dependence Structure

no code implementations · 15 Sep 2019 · Guannan Qu, Na Li

Further, under some special conditions, we prove that the gap between the approximated reward function and the true reward function decays exponentially fast in the length of the truncated Markov process.
