Learning Power Control from a Fixed Batch of Data

5 Aug 2020  ·  Mohammad G. Khoshkholgh, Halim Yanikomeroglu

We address how to exploit power control data, gathered from a monitored environment, to perform power control in an unexplored environment. We adopt offline deep reinforcement learning, whereby the agent learns a policy that produces the transmission powers solely from the data. Experiments demonstrate that, despite discrepancies between the monitored and unexplored environments, the agent learns power control very quickly, even when the objective functions in the two environments differ. It suffices for about one third of the collected data to be high-quality; the rest can come from any sub-optimal algorithm.
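The core idea above, learning a power control policy purely from a fixed batch of logged transitions, can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's method: it uses tabular fitted Q-iteration over hypothetical discretized channel-gain states and discrete power levels, with a uniform-random behavior policy standing in for the sub-optimal data-collection algorithm. All sizes, the reward shape, and the i.i.d. fading model are illustrative choices, not taken from the paper.

```python
import numpy as np

# Hypothetical offline (batch) RL sketch for power control.
# States: discretized channel-gain levels; actions: discrete transmit powers.
# The agent never interacts with the environment: it learns only from a
# fixed batch of logged (state, action, reward, next_state) transitions.

rng = np.random.default_rng(0)
n_states, n_actions = 8, 4            # illustrative sizes, not from the paper
powers = np.linspace(0.25, 1.0, n_actions)

def reward(s, a):
    # Toy spectral-efficiency-style reward: log(1 + gain * power).
    gain = (s + 1) / n_states
    return np.log1p(gain * powers[a])

# Log a fixed batch under a sub-optimal (uniform-random) behavior policy.
batch = []
for _ in range(20000):
    s = rng.integers(n_states)
    a = rng.integers(n_actions)
    s_next = rng.integers(n_states)   # i.i.d. fading: next state independent
    batch.append((s, a, reward(s, a), s_next))

# Fitted Q-iteration on the fixed batch only (no environment interaction).
gamma = 0.9
Q = np.zeros((n_states, n_actions))
for _ in range(100):
    targets = np.zeros_like(Q)
    counts = np.zeros_like(Q)
    for s, a, r, s_next in batch:
        targets[s, a] += r + gamma * Q[s_next].max()
        counts[s, a] += 1
    Q = targets / np.maximum(counts, 1)

policy = Q.argmax(axis=1)             # learned power index per channel state
```

Because the reward here is monotone in transmit power, the learned Q-values favor higher powers; in a realistic setting the reward would trade off rate against interference or energy, and the offline dataset would mix high-quality and sub-optimal trajectories as the abstract describes.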
