Search Results for author: Yuanqi Gao

Found 5 papers, 1 paper with code

An Optimization Method-Assisted Ensemble Deep Reinforcement Learning Algorithm to Solve Unit Commitment Problems

no code implementations · 9 Jun 2022 · Jingtao Qin, Yuanqi Gao, Mikhail Bragin, Nanpeng Yu

Unit commitment (UC) is a fundamental problem in the day-ahead electricity market, and it is critical to solve UC problems efficiently.

Tasks: Q-Learning · Reinforcement Learning · +1
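To make the unit commitment (UC) problem concrete, here is a toy single-period brute-force solver: pick an on/off status for each generator so committed capacity covers demand at minimum fixed-plus-variable cost. All numbers and the merit-order dispatch rule are illustrative assumptions; the paper itself uses an optimization-method-assisted ensemble deep RL algorithm, not enumeration, and real UC is a multi-period mixed-integer program.

```python
from itertools import product

def solve_toy_uc(demand, p_max, fixed_cost, var_cost):
    """Brute-force a toy single-period unit commitment:
    choose on/off statuses so total capacity covers demand
    at minimum fixed + variable cost (illustrative only)."""
    best_status, best_cost = None, float("inf")
    for status in product([0, 1], repeat=len(p_max)):
        capacity = sum(s * p for s, p in zip(status, p_max))
        if capacity < demand:
            continue  # infeasible: committed units cannot meet demand
        # dispatch cheapest committed units first (simple merit order)
        remaining, cost = demand, 0.0
        for i in sorted(range(len(p_max)), key=lambda i: var_cost[i]):
            if not status[i]:
                continue
            g = min(p_max[i], remaining)      # energy from unit i
            cost += fixed_cost[i] + var_cost[i] * g
            remaining -= g
        if cost < best_cost:
            best_status, best_cost = status, cost
    return best_status, best_cost

# three hypothetical generators: commit the two cheap ones
status, cost = solve_toy_uc(100, [80, 60, 40], [100, 80, 50], [2, 3, 5])
# -> status (1, 1, 0), cost 400.0
```

Brute force is exponential in the number of units, which is exactly why efficient methods (and learned policies like the paper's) matter at scale.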

A Reinforcement Learning-based Volt-VAR Control Dataset and Testing Environment

1 code implementation · 20 Apr 2022 · Yuanqi Gao, Nanpeng Yu

To facilitate the development of reinforcement learning (RL) based power distribution system Volt-VAR control (VVC), this paper introduces a suite of open-source datasets for research on RL-based VVC algorithms that are sample efficient, safe, and robust.

Tasks: Reinforcement Learning (RL)
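The kind of interaction loop an RL-based VVC study runs can be sketched with a stand-in environment. Everything below (the `ToyVVCEnv` class, the voltage model, the reward) is a made-up assumption for illustration; the released dataset and testing environment have their own API, which may differ.

```python
import random

class ToyVVCEnv:
    """Hypothetical stand-in for a Volt-VAR control environment.
    State: per-node voltages (p.u.); action: capacitor on/off
    vector; reward penalizes deviation from 1.0 p.u."""
    def __init__(self, n_nodes=3, seed=0):
        self.n_nodes = n_nodes
        self.rng = random.Random(seed)

    def reset(self):
        self.v = [1.0 + self.rng.uniform(-0.05, 0.05)
                  for _ in range(self.n_nodes)]
        return list(self.v)

    def step(self, action):
        # each switched-on capacitor nudges its node voltage up slightly
        self.v = [v + 0.01 * a + self.rng.uniform(-0.005, 0.005)
                  for v, a in zip(self.v, action)]
        reward = -sum(abs(v - 1.0) for v in self.v)
        return list(self.v), reward, False, {}

env = ToyVVCEnv()
state = env.reset()
total = 0.0
for _ in range(10):
    action = [env.rng.randint(0, 1) for _ in range(env.n_nodes)]  # random policy
    state, reward, done, _ = env.step(action)
    total += reward
```

Swapping the random policy for a learned one, and the toy dynamics for a real distribution feeder, is where sample efficiency, safety, and robustness become the research questions the dataset targets.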

Learning to Operate an Electric Vehicle Charging Station Considering Vehicle-grid Integration

no code implementations · 1 Nov 2021 · Zuzhao Ye, Yuanqi Gao, Nanpeng Yu

In this paper, we propose a novel centralized allocation and decentralized execution (CADE) reinforcement learning (RL) framework to maximize the charging station's profit.

Tasks: Model Predictive Control · Reinforcement Learning · +1
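The split between the two halves of a CADE-style framework can be sketched in a few lines: a central step that allocates EVs to chargers, and a local step that each charger executes on its own. Both rules below (earliest-departure-first allocation, a price-threshold charging decision) are plain heuristics chosen for illustration, not the paper's learned RL policies.

```python
def allocate(evs, chargers):
    """Centralized allocation (sketch): greedily assign EVs to free
    chargers by urgency, i.e. earliest departure time first."""
    free = list(chargers)
    assignment = {}
    for ev in sorted(evs, key=lambda e: e["departure"]):
        if free:
            assignment[ev["id"]] = free.pop(0)
    return assignment

def execute(price, threshold=0.10):
    """Decentralized execution (sketch): each occupied charger
    decides locally, e.g. charge only when the price is low."""
    return "charge" if price < threshold else "idle"

evs = [{"id": "ev1", "departure": 18},
       {"id": "ev2", "departure": 9},
       {"id": "ev3", "departure": 12}]
assignment = allocate(evs, chargers=["c1", "c2"])
# ev2 (departs at 9) and ev3 (departs at 12) get the two chargers
```

The appeal of the split is that the combinatorial assignment is handled once, centrally, while the per-charger decisions stay cheap and local.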

Consensus Multi-Agent Reinforcement Learning for Volt-VAR Control in Power Distribution Networks

no code implementations · 6 Jul 2020 · Yuanqi Gao, Wei Wang, Nanpeng Yu

Volt-VAR control (VVC) is a critical application in active distribution network management systems, used to reduce network losses and improve voltage profiles.

Tasks: Management · Multi-agent Reinforcement Learning · +2
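The "consensus" ingredient in consensus multi-agent RL is distributed averaging: each agent repeatedly moves its parameters toward the mean of its neighbors' parameters, so all agents converge to the network-wide average without a central coordinator. The sketch below shows that generic mechanism on scalar parameters over a 4-agent ring; it is not the paper's exact update rule.

```python
def consensus_step(params, neighbors, step_size=0.5):
    """One round of consensus averaging: each agent i moves its
    parameter toward the mean of its neighbors' parameters."""
    new = []
    for i, theta in enumerate(params):
        avg = sum(params[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(theta + step_size * (avg - theta))
    return new

# ring of 4 agents with scalar parameters; network average is 4.0
params = [0.0, 4.0, 8.0, 4.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(50):
    params = consensus_step(params, neighbors)
# all agents converge toward the average, 4.0
```

On a connected graph with this symmetric mixing rule the iteration preserves the mean and contracts disagreement geometrically, which is what lets neighboring agents agree using only local communication.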

Information Losses in Neural Classifiers from Sampling

no code implementations · 15 Feb 2019 · Brandon Foggo, Nanpeng Yu, Jie Shi, Yuanqi Gao

The paper bounds the expected total variation as a function of the size of randomly sampled datasets in a fairly general setting, and without bringing in any additional dependence on model complexity.

Tasks: Active Learning
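The quantity being bounded can be illustrated numerically: the total variation distance between a discrete distribution and the empirical distribution of n i.i.d. samples from it shrinks as n grows. The distribution and sample sizes below are arbitrary assumptions; this only illustrates the sampling gap, it does not reproduce the paper's bound.

```python
import random
from collections import Counter

def empirical_tv(true_probs, n, seed=0):
    """Total variation distance between a discrete distribution
    and the empirical distribution of n i.i.d. samples from it."""
    rng = random.Random(seed)
    outcomes = list(true_probs)
    weights = [true_probs[o] for o in outcomes]
    counts = Counter(rng.choices(outcomes, weights=weights, k=n))
    return 0.5 * sum(abs(true_probs[o] - counts[o] / n)
                     for o in outcomes)

p = {"a": 0.5, "b": 0.3, "c": 0.2}
small_sample_tv = empirical_tv(p, 50)       # typically around 0.05-0.1
large_sample_tv = empirical_tv(p, 50_000)   # typically around 0.003
```

The expected gap decays on the order of 1/sqrt(n), independent of any model, which matches the abstract's point that the bound avoids extra dependence on model complexity.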
