EagerMOT: 3D Multi-Object Tracking via Sensor Fusion

29 Apr 2021 · Aleksandr Kim, Aljoša Ošep, Laura Leal-Taixé

Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time. Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal. On the other hand, cameras provide a dense and rich visual signal that helps to localize even distant objects, but only in the image domain. In this paper, we propose EagerMOT, a simple tracking formulation that eagerly integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics. Using images, we can identify distant incoming objects, while depth estimates allow for precise trajectory localization as soon as objects are within the depth-sensing range. With EagerMOT, we achieve state-of-the-art results across several MOT tasks on the KITTI and NuScenes datasets. Our code is available at https://github.com/aleksandrkim61/EagerMOT.
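
To make the "eager" integration idea concrete, below is a minimal, hypothetical sketch of per-frame detection fusion in Python: projected 3D (LiDAR) detections are greedily paired with overlapping 2D (camera) detections, and unmatched detections from either modality are kept so that camera-only observations of distant objects can still contribute to tracks. The helper names, thresholds, and greedy matching here are illustrative assumptions, not the authors' implementation (see the linked repository for that).

    # Hypothetical sketch of the eager 2D/3D detection fusion described above.
    # Not the authors' implementation; names and thresholds are assumptions.
    from dataclasses import dataclass
    from typing import Optional, List

    @dataclass
    class FusedInstance:
        bbox_2d: Optional[tuple] = None   # (x1, y1, x2, y2) in image coordinates
        bbox_3d: Optional[tuple] = None   # (x, y, z, l, w, h, yaw) in world coordinates

    def iou_2d(box_a: tuple, box_b: tuple) -> float:
        """Intersection-over-union of two axis-aligned 2D boxes."""
        x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def fuse_detections(dets_3d: List[tuple], dets_2d: List[tuple],
                        project_to_image, iou_threshold: float = 0.3) -> List[FusedInstance]:
        """Greedily pair each projected 3D detection with its best-overlapping
        2D detection; unmatched detections from either modality are kept."""
        fused: List[FusedInstance] = []
        unmatched_2d = list(dets_2d)
        for det_3d in dets_3d:
            projected = project_to_image(det_3d)  # assumed camera-projection helper
            best, best_iou = None, iou_threshold
            for det_2d in unmatched_2d:
                overlap = iou_2d(projected, det_2d)
                if overlap > best_iou:
                    best, best_iou = det_2d, overlap
            if best is not None:
                unmatched_2d.remove(best)
            fused.append(FusedInstance(bbox_2d=best, bbox_3d=det_3d))
        # Camera-only detections (e.g., distant objects beyond LiDAR range) are kept.
        fused.extend(FusedInstance(bbox_2d=det_2d) for det_2d in unmatched_2d)
        return fused

The resulting fused instances can then be associated with existing tracks, with 3D information used for precise localization whenever it is available and 2D information used otherwise.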


Results from the Paper


Task                                    Dataset              Model     Metric  Value   Global Rank
3D Multi-Object Tracking                KITTI                EagerMOT  MOTA    96.61%  #1
3D Multi-Object Tracking                KITTI                EagerMOT  MOTP    80%     #5
3D Multi-Object Tracking                KITTI                EagerMOT  sAMOTA  94.94   #1
Multi-Object Tracking and Segmentation  KITTI MOTS           EagerMOT  HOTA    74.66   #1
Multi-Object Tracking and Segmentation  KITTI MOTS           EagerMOT  DetA    76.11   #1
Multi-Object Tracking and Segmentation  KITTI MOTS           EagerMOT  AssA    73.75   #1
Multi-Object Tracking                   KITTI Tracking test  EagerMOT  HOTA    74.39   #2
Multiple Object Tracking                KITTI Tracking test  EagerMOT  MOTA    87.82   #6
Multiple Object Tracking                KITTI Tracking test  EagerMOT  HOTA    74.39   #3
3D Multi-Object Tracking                nuScenes             PolarMOT  AMOTA   0.66    #37

Methods


No methods listed for this paper.