A Deep Multi-Agent Reinforcement Learning Approach to Autonomous Separation Assurance

17 Mar 2020  ·  Marc Brittain, Xuxi Yang, Peng Wei

A novel deep multi-agent reinforcement learning framework is proposed to identify and resolve conflicts among a variable number of aircraft in a high-density, stochastic, and dynamic sector. Currently, sector capacity is constrained by the human air traffic controller's cognitive limitations. We investigate the feasibility of a new concept (autonomous separation assurance) and a new approach to push sector capacity above the human cognitive limit. We propose using distributed vehicle autonomy to ensure separation, instead of a centralized sector air traffic controller. Our proposed framework utilizes Proximal Policy Optimization (PPO), which we modify to incorporate an attention network. This allows each agent to attend to a variable number of aircraft in the sector in a scalable and efficient manner, achieving high traffic throughput under uncertainty. Agents are trained using a centralized-learning, decentralized-execution scheme in which a single neural network is learned and shared by all agents. The proposed framework is validated on three challenging case studies in the BlueSky air traffic control environment. Numerical results show that the proposed framework significantly reduces offline training time, increases performance, and results in a more efficient policy.
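The sketch below illustrates, in broad strokes, the kind of architecture the abstract describes: a shared actor-critic network in which each agent attends over a variable number of intruder-aircraft states to produce a fixed-size context, concatenated with its own state before the PPO policy and value heads. It is not the authors' implementation; the layer sizes, state dimensions, padding/masking scheme, and the three-way action space are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the paper's code) of an
# attention-based encoder feeding shared PPO actor-critic heads.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionActorCritic(nn.Module):
    def __init__(self, own_dim=7, intruder_dim=8, hidden=64, n_actions=3):
        super().__init__()
        self.own_enc = nn.Linear(own_dim, hidden)        # ownship state encoder
        self.intr_enc = nn.Linear(intruder_dim, hidden)  # per-intruder encoder
        self.score = nn.Linear(hidden, 1)                # attention scoring
        self.policy = nn.Linear(2 * hidden, n_actions)   # actor head
        self.value = nn.Linear(2 * hidden, 1)            # critic head

    def forward(self, own_state, intruder_states, mask):
        # own_state:       (batch, own_dim)
        # intruder_states: (batch, max_intruders, intruder_dim), zero-padded
        # mask:            (batch, max_intruders), 1 = real intruder, 0 = padding
        h_own = torch.tanh(self.own_enc(own_state))
        h_intr = torch.tanh(self.intr_enc(intruder_states))
        scores = self.score(h_intr).squeeze(-1)                # (batch, max_intruders)
        scores = scores.masked_fill(mask == 0, float("-inf"))  # ignore padded slots
        weights = F.softmax(scores, dim=-1).unsqueeze(-1)
        context = (weights * h_intr).sum(dim=1)                # fixed-size summary
        joint = torch.cat([h_own, context], dim=-1)
        return F.softmax(self.policy(joint), dim=-1), self.value(joint)


# Usage: one network shared by all agents, each querying it with its own
# observation (centralized learning, decentralized execution).
net = AttentionActorCritic()
own = torch.randn(4, 7)            # 4 agents' ownship states
intruders = torch.randn(4, 10, 8)  # up to 10 nearby aircraft per agent
mask = torch.ones(4, 10)
probs, values = net(own, intruders, mask)
```

Because the attention pooling collapses any number of intruders into a fixed-size context, the same shared network handles a variable number of aircraft without retraining, which is the scalability property the abstract emphasizes.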
