Resource allocation algorithm for MEC based on Deep Reinforcement Learning

IEEE 2022  ·  Yijie Wang, Xin Chen, Ying Chen, Shougang Du

In recent years, driven by the commercialization of the 6th Generation Communication Technology (6G), a growing number of 6G devices connected to mobile networks produce computation-intensive tasks such as ultra-high-resolution video streaming, interactive virtual reality (VR) gaming, and augmented reality (AR). However, the computing capacity and battery capacity of these devices are limited. In mobile edge computing (MEC), computation offloading moves tasks from IoT devices to the edge network; this not only alleviates the energy-efficiency limits of mobile user devices but also allows tasks to be processed with low latency. IoT devices can either offload computing tasks or execute them locally. To find the optimal split between locally executed and offloaded tasks, a resource allocation policy gradient (RAPG) algorithm based on DDPG is considered. Finally, we analyze the performance of RAPG by comparing it with other resource allocation algorithms. Numerical simulation results show that RAPG achieves the best allocation ratio between the base station (BS) and local execution, and reduces the overall system delay of the task set with minimum energy consumption.
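The paper does not publish code, so the following is only a minimal sketch of how a DDPG-style actor-critic could be set up to output a continuous offloading ratio in [0, 1], in the spirit of the RAPG idea described above. The state features, network sizes, reward handling, and update routine are illustrative assumptions, not the authors' RAPG implementation.

```python
# Illustrative DDPG-style actor-critic for a continuous offloading ratio.
# State layout and hyperparameters are assumed for demonstration only.
import torch
import torch.nn as nn

STATE_DIM = 4   # e.g. task size, local CPU freq, channel gain, battery level (assumed)
ACTION_DIM = 1  # offloading ratio in [0, 1]


class Actor(nn.Module):
    """Maps an MEC system state to an offloading ratio (deterministic policy)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # keep the ratio in [0, 1]
        )

    def forward(self, state):
        return self.net(state)


class Critic(nn.Module):
    """Estimates Q(s, a) for a state-action pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))


def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.99, tau=0.005):
    """One DDPG update on a sampled batch (replay buffer omitted for brevity)."""
    state, action, reward, next_state = batch

    # Critic: regress Q(s, a) toward the bootstrapped target.
    with torch.no_grad():
        target_q = reward + gamma * target_critic(next_state, target_actor(next_state))
    critic_loss = nn.functional.mse_loss(critic(state, action), target_q)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient, maximize Q(s, pi(s)).
    actor_loss = -critic(state, actor(state)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Soft-update the target networks.
    for tgt, src in ((target_actor, actor), (target_critic, critic)):
        for p_t, p in zip(tgt.parameters(), src.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)
```

In such a setup, the reward would typically be the negative weighted sum of task delay and energy consumption, so that maximizing return corresponds to the delay/energy trade-off the abstract describes; the exact reward design used in the paper is not reproduced here.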
