Faded-Experience Trust Region Policy Optimization for Model-Free Power Allocation in Interference Channel

4 Aug 2020  ·  Mohammad G. Khoshkholgh, Halim Yanikomeroglu

Policy gradient reinforcement learning techniques enable an agent to learn an optimal action policy directly through interactions with the environment. Despite their advantages, they sometimes suffer from slow convergence. Inspired by human decision making, we enhance convergence speed by augmenting the agent to memorize and reuse recently learned policies. We apply our method to trust-region policy optimization (TRPO), originally developed for locomotion tasks, and propose faded-experience (FE) TRPO. To substantiate its effectiveness, we use it to learn continuous power control in an interference channel when only noisy location information of the devices is available. Results indicate that FE-TRPO can nearly double the learning speed compared to TRPO. Importantly, our method neither increases the learning complexity nor incurs a performance loss.
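The abstract does not spell out the faded-experience mechanism, so the following is only a minimal sketch of one plausible reading: "faded experience" taken as a geometrically decaying weighted average over the last few policy parameter vectors, used as the behavior policy between updates. The names `faded_average`, `fade`, and `MEMORY`, and the toy update loop, are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def faded_average(param_history, fade=0.7):
    """Geometrically faded average of recent policy parameters.
    param_history[0] is the newest parameter vector; older entries
    receive exponentially smaller weights. `fade` is a hypothetical
    decay factor, not a value from the paper."""
    weights = np.array([fade ** k for k in range(len(param_history))])
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, param_history))

# Toy loop standing in for training: each "TRPO step" is simulated by a
# small random perturbation of the current parameters. The behavior
# policy for the next rollout is the faded average over the memory.
MEMORY = 3                      # number of recent policies kept (assumed)
theta = rng.normal(size=8)      # parameters of a small policy network
history = [theta]

for step in range(5):
    theta = theta + 0.1 * rng.normal(size=8)  # placeholder for a TRPO update
    history.insert(0, theta)                  # newest policy first
    history = history[:MEMORY]                # forget older policies
    theta_behavior = faded_average(history)
    print(step, np.linalg.norm(theta - theta_behavior))
```

Under this reading, the memory adds only a handful of stored parameter vectors and one weighted sum per iteration, which is consistent with the claim that the method does not increase learning complexity.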
