Microservice Deployment in Edge Computing Based on Deep Q Learning

The microservice deployment strategy is promising for reducing the overall service response time on microservice-oriented edge computing platforms. However, existing works ignore the effect of different interaction frequencies among microservices and the degradation of service execution performance caused by increased node loads. In this article, we first model the invocation relationships among microservices as an undirected, weighted interaction graph to characterize the communication overhead. Then, we formulate a multi-objective microservice deployment problem (MMDP) in edge computing, which aims to minimize communication overhead while balancing load across edge nodes. Without requiring domain expertise, we propose Reward Sharing Deep Q Learning (RSDQL), a learning-based algorithm, to solve MMDP and obtain the optimal deployment strategy. In addition, to improve service scalability, we propose a heuristic-based Elastic Scaling algorithm (ES) to handle dynamic request pressure. Finally, we conduct a series of experiments in Kubernetes to evaluate the performance of our approach. Experimental results indicate that, compared with the interaction-aware strategy and the Kubernetes default strategy, RSDQL achieves shorter response times and more balanced resource loads, and scales services elastically according to request pressure.
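To make the two MMDP objectives concrete, the sketch below evaluates a candidate deployment against an interaction graph: communication overhead is the total weight of invocation edges whose endpoints are placed on different nodes, and load balance is measured as the variance of per-node resource demand. All service names, edge weights, and demand values are hypothetical illustrations, not data from the paper.

```python
import statistics

# Hypothetical interaction graph: undirected edges weighted by the
# invocation frequency between microservice pairs (illustrative values).
interaction = {
    ("gateway", "auth"): 8.0,
    ("gateway", "catalog"): 5.0,
    ("catalog", "db"): 3.0,
}

# Illustrative per-service resource demand (arbitrary units).
demand = {"gateway": 2.0, "auth": 1.0, "catalog": 2.0, "db": 3.0}

def communication_overhead(placement):
    """Total weight of interaction edges that cross node boundaries."""
    return sum(w for (a, b), w in interaction.items()
               if placement[a] != placement[b])

def load_imbalance(placement, nodes):
    """Population variance of aggregate demand per node (lower = more balanced)."""
    load = {n: 0.0 for n in nodes}
    for svc, node in placement.items():
        load[node] += demand[svc]
    return statistics.pvariance(load.values())

nodes = ["node0", "node1"]
# p1 co-locates the chattiest pair (gateway/auth), cutting cross-node traffic.
p1 = {"gateway": "node0", "auth": "node0", "catalog": "node1", "db": "node1"}
# p2 balances load perfectly but forces the heavy edges across nodes.
p2 = {"gateway": "node0", "auth": "node1", "catalog": "node0", "db": "node1"}

print(communication_overhead(p1), load_imbalance(p1, nodes))  # 5.0 1.0
print(communication_overhead(p2), load_imbalance(p2, nodes))  # 11.0 0.0
```

The two placements illustrate the tension RSDQL must learn to navigate: p2 is perfectly load-balanced but pays more than double the communication overhead of p1, so neither single-objective greedy choice is optimal on its own.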
