Efficient Drone Mobility Support Using Reinforcement Learning

21 Nov 2019 · Yun Chen, Xingqin Lin, Talha Khan, Mohammad Mozaffari

Flying drones can be used in a wide range of applications and services, from surveillance to package delivery. To ensure robust control and safety of drone operations, cellular networks need to provide reliable wireless connectivity to drone user equipments (UEs). To date, existing mobile networks have been primarily designed and optimized for serving ground UEs, which makes mobility support in the sky challenging. In this paper, a novel handover (HO) mechanism is developed for a cellular-connected drone system to ensure robust wireless connectivity and mobility support for drone-UEs. By leveraging tools from reinforcement learning, HO decisions are dynamically optimized using a Q-learning algorithm to provide efficient mobility support in the sky. The results show that the proposed approach can significantly reduce the number of HOs (e.g., by 80%), while maintaining connectivity, compared to the baseline HO scheme in which the drone always connects to the strongest cell.
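To make the idea concrete, below is a minimal, self-contained sketch of how a tabular Q-learning HO policy of this kind might be structured. It is not the paper's implementation: the state (waypoint index along the route plus current serving cell), the action set (candidate cells for the next waypoint), the synthetic `rsrp` table, and the `HO_PENALTY` weight are all illustrative assumptions; the paper's actual state, action, and reward design may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

N_WAYPOINTS = 20          # discretized positions along the drone's route (assumption)
N_CELLS = 5               # candidate serving cells (assumption)
HO_PENALTY = 1.0          # cost charged whenever the serving cell changes (assumption)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Hypothetical signal-strength table: normalized RSRP of each cell at each
# waypoint, standing in for real channel measurements.
rsrp = rng.uniform(0.0, 1.0, size=(N_WAYPOINTS, N_CELLS))

# Q-table indexed by (waypoint, current serving cell, action = cell for next waypoint).
Q = np.zeros((N_WAYPOINTS, N_CELLS, N_CELLS))

def reward(waypoint, prev_cell, new_cell):
    """Trade off connectivity (signal strength) against handover cost."""
    return rsrp[waypoint, new_cell] - HO_PENALTY * (new_cell != prev_cell)

for episode in range(2000):
    serving = int(np.argmax(rsrp[0]))          # start on the strongest cell
    for wp in range(N_WAYPOINTS - 1):
        # epsilon-greedy choice of the cell to serve the next waypoint
        if rng.random() < EPS:
            action = int(rng.integers(N_CELLS))
        else:
            action = int(np.argmax(Q[wp, serving]))
        r = reward(wp + 1, serving, action)
        # standard Q-learning update toward the bootstrapped target
        best_next = np.max(Q[wp + 1, action])
        Q[wp, serving, action] += ALPHA * (r + GAMMA * best_next - Q[wp, serving, action])
        serving = action

# Greedy rollout: count handovers made by the learned policy versus the
# strongest-cell baseline along the same route.
def count_handovers(policy):
    serving = int(np.argmax(rsrp[0]))
    hos = 0
    for wp in range(N_WAYPOINTS - 1):
        nxt = policy(wp, serving)
        hos += int(nxt != serving)
        serving = nxt
    return hos

learned = count_handovers(lambda wp, c: int(np.argmax(Q[wp, c])))
baseline = count_handovers(lambda wp, c: int(np.argmax(rsrp[wp + 1])))
print(f"handovers: learned policy = {learned}, strongest-cell baseline = {baseline}")
```

In this sketch, the `HO_PENALTY` weight controls the same trade-off the abstract describes: a larger penalty suppresses unnecessary handovers at the cost of occasionally camping on a weaker cell, while a penalty of zero recovers the strongest-cell baseline.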
