Deep Reinforcement Learning based Model-free On-line Dynamic Multi-Microgrid Formation to Enhance Resilience

6 Mar 2022 · Jin Zhao, Member, Fangxing Li, Fellow, Srijib Mukherjee, Senior Member, Christopher Sticht

Multi-microgrid formation (MMGF) is a promising solution to enhance power system resilience. This paper proposes a new deep reinforcement learning (RL) based, model-free, on-line dynamic MMGF scheme. The dynamic MMGF problem is formulated as a Markov decision process, and a complete deep RL framework is specially designed for topology-transformable microgrids. To reduce the large action space caused by flexible switch operations, a topology transformation method is proposed and an action-decoupling Q-value is applied. A CNN-based multi-buffer double deep Q-network (CM-DDQN) is then developed to further improve the learning ability of the original DQN method. The proposed deep RL method provides real-time computation to support the on-line dynamic MMGF scheme, which addresses a long-term resilience enhancement problem by adapting the microgrid formation on-line to changing system conditions. The effectiveness of the proposed method is validated on a 7-bus system and the IEEE 123-bus system. The results show strong learning ability, timely response to varying system conditions, and convincing resilience enhancement.
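The abstract names three ingredients of the learning agent: a CNN encoder over the grid state, a double-DQN update, and multiple replay buffers. The sketch below is not the authors' implementation; it is a minimal illustration of those ingredients in PyTorch, with state/action shapes, the number of buffers, and all hyperparameters chosen purely as assumptions for the example.

```python
# Minimal sketch (illustrative only, not the paper's code) of a CNN Q-network,
# a double-DQN target, and multiple replay buffers as named in the abstract.
import random
from collections import deque

import torch
import torch.nn as nn


class CNNQNet(nn.Module):
    """CNN encoder over a grid-topology 'image', followed by a Q-value head."""

    def __init__(self, n_channels: int, n_actions: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_actions)

    def forward(self, x):
        z = self.conv(x).flatten(1)   # (batch, 32) feature vector
        return self.head(z)           # Q-value per candidate switching action


def double_dqn_target(online, target, r, s_next, done, gamma=0.99):
    """Double DQN: the online net selects the next action, the target net evaluates it."""
    with torch.no_grad():
        a_star = online(s_next).argmax(dim=1, keepdim=True)
        q_next = target(s_next).gather(1, a_star).squeeze(1)
        return r + gamma * (1.0 - done) * q_next


# Multiple replay buffers (e.g., one per operating scenario); sampling mixes them.
buffers = [deque(maxlen=10_000) for _ in range(3)]


def sample_batch(batch_size=32):
    pool = [t for buf in buffers for t in buf]
    return random.sample(pool, min(batch_size, len(pool)))
```

In a training loop one would push transitions into the buffer matching the current scenario, sample a mixed batch, and minimize the squared error between the online network's Q-values and the double-DQN targets, periodically copying the online weights into the target network.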
