Cooperative Perception with Deep Reinforcement Learning for Connected Vehicles

23 Apr 2020 · Shunsuke Aoki, Takamasa Higuchi, Onur Altintas

Sensor-based perception on vehicles is becoming prevalent and important for enhancing road safety. Autonomous driving systems use cameras, LiDAR, and radar to detect surrounding objects, while human-driven vehicles use them to assist the driver. However, environmental perception by individual vehicles has limitations in coverage and/or detection accuracy. For example, a vehicle cannot detect objects occluded by other moving or static obstacles. In this paper, we present a cooperative perception scheme with deep reinforcement learning to enhance the detection accuracy for surrounding objects. By using deep reinforcement learning to select the data to transmit, our scheme mitigates the network load in vehicular communication networks and enhances communication reliability. To design, test, and verify the cooperative perception scheme, we develop a Cooperative & Intelligent Vehicle Simulation (CIVS) Platform, which integrates three software components: a traffic simulator, a vehicle simulator, and an object classifier. Our evaluation shows that the scheme decreases packet loss and thereby increases detection accuracy by up to 12% compared to the baseline protocol.
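
The abstract describes an agent that learns which perception data to transmit so that cooperative awareness improves without overloading the V2X channel. The following is a minimal, hypothetical sketch of that idea using tabular, bandit-style Q-learning: the state features (object novelty, channel load), the reward shaping, and all names here are illustrative assumptions, not the paper's actual deep-RL design or the CIVS platform.

```python
# Toy per-object "transmit or skip" policy learned by Q-learning.
# Assumption-laden sketch: feature discretization and reward are made up
# for illustration; the paper uses deep RL and a full vehicle simulator.

import numpy as np

rng = np.random.default_rng(0)

# Discretized state: (object-novelty bucket, channel-load bucket)
N_NOVELTY, N_LOAD, N_ACTIONS = 3, 3, 2   # actions: 0 = skip, 1 = transmit
Q = np.zeros((N_NOVELTY, N_LOAD, N_ACTIONS))

ALPHA, EPSILON = 0.1, 0.1

def reward(novelty: int, load: int, action: int) -> float:
    """Toy reward: sharing novel objects helps neighbors' perception,
    but transmitting under heavy channel load risks packet loss."""
    if action == 1:
        return novelty - load          # benefit minus congestion cost
    return -0.5 * novelty              # penalty for withholding useful data

def step(novelty: int, load: int) -> int:
    """One epsilon-greedy decision plus a bandit-style Q update."""
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[novelty, load]))
    r = reward(novelty, load, action)
    Q[novelty, load, action] += ALPHA * (r - Q[novelty, load, action])
    return action

# Train on randomly sampled perception/channel conditions.
for _ in range(5000):
    step(int(rng.integers(N_NOVELTY)), int(rng.integers(N_LOAD)))

# The learned policy tends to transmit novel objects when the channel is
# lightly loaded and to skip redundant data under congestion.
print(np.argmax(Q, axis=-1))
```

In the paper's setting, this selection policy would sit between the on-board object detector and the V2X stack, reducing redundant transmissions and thus packet loss; the sketch above only illustrates the decision structure, not the actual state, action, or reward definitions.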
