Deep Learning-based Cooperative LiDAR Sensing for Improved Vehicle Positioning

26 Feb 2024 · Luca Barbieri, Bernardo Camajori Tedeschini, Mattia Brambilla, Monica Nicoli

Accurate positioning is a fundamental requirement for the deployment of Connected Automated Vehicles (CAVs). To meet this need, an emerging trend is the use of cooperative methods in which vehicles fuse information from navigation and imaging sensors via Vehicle-to-Everything (V2X) communications for joint positioning and environmental perception. In line with this trend, this paper proposes a novel data-driven cooperative sensing framework, termed Cooperative LiDAR Sensing with Message Passing Neural Network (CLS-MPNN), where spatially distributed vehicles collaborate in perceiving the environment via LiDAR sensors. Each vehicle processes its LiDAR point cloud with a Deep Neural Network (DNN), namely a 3D object detector, to identify and localize static objects in the driving environment. The detections are then aggregated by a centralized infrastructure that performs Data Association (DA) with a Message Passing Neural Network (MPNN) and runs the Implicit Cooperative Positioning (ICP) algorithm. The proposed approach is evaluated on two realistic driving scenarios generated by a high-fidelity automated driving simulator. The results show that CLS-MPNN outperforms a conventional non-cooperative localization algorithm based on Global Navigation Satellite System (GNSS) and a state-of-the-art cooperative Simultaneous Localization and Mapping (SLAM) method, while approaching the performance of an oracle system with ideal sensing and perfect association.
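To make the MPNN-based data-association step described in the abstract more concrete, below is a minimal, self-contained PyTorch sketch of an edge-scoring message-passing network that operates on pairs of detections reported by different vehicles. This is an illustrative sketch only: the node features (detection centroids), edge features (relative displacements), layer sizes, number of message-passing rounds, and the 5 m gating radius are assumptions made for demonstration and do not reflect the architecture or parameters used in the paper.

```python
import itertools

import torch
import torch.nn as nn


class EdgeScoringMPNN(nn.Module):
    """Toy message-passing network: for every candidate pair of detections coming
    from two different vehicles, output the probability that the pair refers to
    the same static object. Hypothetical architecture, not the paper's design."""

    def __init__(self, node_dim=3, hidden=32, rounds=3):
        super().__init__()
        self.rounds = rounds
        self.node_enc = nn.Linear(node_dim, hidden)
        self.edge_enc = nn.Linear(node_dim, hidden)
        self.msg = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU())
        self.upd = nn.GRUCell(hidden, hidden)
        self.readout = nn.Sequential(
            nn.Linear(3 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, nodes, edges):
        # nodes: (N, 3) detection centroids; edges: (E, 2) long tensor of node index pairs
        src, dst = edges[:, 0], edges[:, 1]
        h = torch.relu(self.node_enc(nodes))                     # initial node embeddings
        e = torch.relu(self.edge_enc(nodes[dst] - nodes[src]))   # relative-displacement edge features
        for _ in range(self.rounds):
            m = self.msg(torch.cat([h[src], h[dst], e], dim=-1))  # one message per edge
            agg = torch.zeros_like(h).index_add(0, src, m).index_add(0, dst, m)
            h = self.upd(agg, h)                                  # GRU-style node update
        logits = self.readout(torch.cat([h[src], h[dst], e], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)                  # match probability per edge


# Example: object centroids reported by two vehicles; candidate edges connect
# detections of different vehicles within an assumed 5 m gating radius.
det_a = torch.tensor([[10.0, 2.0, 0.5], [25.0, -4.0, 0.3]])      # vehicle A
det_b = torch.tensor([[10.4, 1.7, 0.6], [40.0, 8.0, 0.2]])       # vehicle B
nodes = torch.cat([det_a, det_b])
owner = [0, 0, 1, 1]                                             # which vehicle produced each detection
pairs = [(i, j) for i, j in itertools.combinations(range(len(nodes)), 2)
         if owner[i] != owner[j] and torch.norm(nodes[i] - nodes[j]) < 5.0]
edges = torch.tensor(pairs, dtype=torch.long)

model = EdgeScoringMPNN()                                        # untrained, for illustration only
print(model(nodes, edges))                                       # one association probability per candidate pair
```

In a setup like the one the abstract outlines, such pairwise scores would then be turned into a consistent association (for instance by thresholding or solving an assignment problem) before the fused detections are used as common features for the cooperative positioning step; this is only one possible realization of that description.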
