Intelligent Resource Allocation in Dense LoRa Networks using Deep Reinforcement Learning

22 Dec 2020 · Inaam Ilahi, Muhammad Usama, Muhammad Omer Farooq, Muhammad Umar Janjua, Junaid Qadir

The anticipated increase in the number of IoT devices in the coming years motivates the development of efficient algorithms that can help in their effective management while keeping power consumption low. In this paper, we propose an intelligent multi-channel resource allocation algorithm for dense LoRa networks, termed LoRaDRL, and provide a detailed performance evaluation. Our results demonstrate that the proposed algorithm not only significantly improves LoRaWAN's packet delivery ratio (PDR) but is also able to support mobile end-devices (EDs) while ensuring lower power consumption, hence increasing both the lifetime and capacity of the network. Most previous works focus on proposing different MAC protocols for improving the network capacity, e.g., LoRaWAN, delay before transmit, etc. We show that through the use of LoRaDRL, we can achieve the same efficiency with ALOHA compared to LoRaSim and LoRa-MAB, while moving the complexity from EDs to the gateway, thus making the EDs simpler and cheaper. Furthermore, we test the performance of LoRaDRL under large-scale frequency jamming attacks and show its adaptiveness to changes in the environment. We show that LoRaDRL's output improves the performance of state-of-the-art techniques, resulting in some cases in an improvement of more than 500% in terms of PDR compared to learning-based techniques.
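To make the idea of gateway-side learned channel allocation concrete, the sketch below shows a heavily simplified, hypothetical agent of this kind. The state, action, and reward definitions (per-channel congestion level, channel assignment for the next end-device, delivery/collision reward), the channel count, and all function names are illustrative assumptions rather than details taken from the paper, and tabular Q-learning stands in here for the paper's deep reinforcement learning network purely for brevity.

```python
import numpy as np

# Hypothetical gateway-side allocator (not the paper's implementation):
# state  = discretised congestion level of each uplink channel
# action = channel assigned to the next transmitting end-device
# reward = +1 if the packet is delivered, -1 on collision
N_CHANNELS = 8          # assumed number of LoRa uplink channels
CONGESTION_LEVELS = 4   # assumed discretisation of per-channel load
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table indexed by (encoded congestion state, chosen channel).
q_table = np.zeros((CONGESTION_LEVELS ** N_CHANNELS, N_CHANNELS))

def encode_state(channel_loads):
    """Map per-channel load levels (each in [0, CONGESTION_LEVELS)) to a table index."""
    idx = 0
    for load in channel_loads:
        idx = idx * CONGESTION_LEVELS + int(load)
    return idx

def choose_channel(state, rng):
    """Epsilon-greedy channel assignment for the next end-device."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_CHANNELS))
    return int(np.argmax(q_table[state]))

def update(state, channel, reward, next_state):
    """One-step Q-learning update from the observed delivery outcome."""
    best_next = np.max(q_table[next_state])
    q_table[state, channel] += ALPHA * (reward + GAMMA * best_next - q_table[state, channel])

# Minimal usage: assign a channel, observe the outcome, update the table.
rng = np.random.default_rng(0)
loads = [0] * N_CHANNELS              # all channels initially idle
s = encode_state(loads)
a = choose_channel(s, rng)
update(s, a, reward=+1, next_state=s)  # pretend the packet was delivered
```

Because all of this logic runs at the gateway, the end-devices themselves can remain simple ALOHA-style transmitters, which is the complexity shift the abstract describes.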
