Lifting Multi-View Detection and Tracking to the Bird's Eye View

19 Mar 2024  ·  Torben Teepe, Philipp Wolters, Johannes Gilg, Fabian Herzog, Gerhard Rigoll

Taking advantage of multi-view aggregation is a promising way to tackle challenges such as occlusion and missed detections in multi-object tracking and detection. Recent advances in multi-view detection and 3D object recognition have significantly improved performance by projecting all views onto the ground plane and performing detection in the Bird's Eye View. In this paper, we compare modern lifting methods, both parameter-free and parameterized, for multi-view aggregation. Additionally, we present an architecture that aggregates features from multiple time steps to learn robust detection and that combines appearance- and motion-based cues for tracking. Most current tracking approaches focus on either pedestrians or vehicles. In our work, we combine both branches and add new challenges to multi-view detection with cross-scene setups. Our method generalizes to three public datasets across two domains, (1) pedestrian: Wildtrack and MultiviewX, and (2) roadside perception: Synthehicle, achieving state-of-the-art performance in detection and tracking. Code: https://github.com/tteepe/TrackTacular
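To make the parameter-free lifting branch concrete, the sketch below shows one common way such "bilinear sampling" aggregation can be realized: each ground-plane (BEV) cell is projected into every camera with a 3x4 projection matrix and the corresponding image feature is bilinearly sampled, then averaged across views. This is a minimal illustration under assumed inputs, not the TrackTacular implementation; the function name `lift_to_bev` and all argument names are hypothetical.

```python
# Minimal sketch (assumption, not the authors' code) of parameter-free
# bilinear-sampling lifting of per-camera features onto the ground plane.
import torch
import torch.nn.functional as F


def lift_to_bev(feats, projections, bev_size=(200, 200), bev_extent=20.0):
    """feats: (num_cams, C, H, W) per-camera feature maps.
    projections: (num_cams, 3, 4) camera projection matrices K[R|t].
    Returns (C, bev_h, bev_w) ground-plane features averaged over cameras."""
    num_cams, C, H, W = feats.shape
    bev_h, bev_w = bev_size

    # World coordinates (z = 0, homogeneous) of every BEV grid cell.
    xs = torch.linspace(-bev_extent, bev_extent, bev_w)
    ys = torch.linspace(-bev_extent, bev_extent, bev_h)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    world = torch.stack(
        [gx, gy, torch.zeros_like(gx), torch.ones_like(gx)], dim=-1
    )  # (bev_h, bev_w, 4)

    bev = feats.new_zeros(C, bev_h, bev_w)
    valid = feats.new_zeros(1, bev_h, bev_w)
    for cam in range(num_cams):
        # Project ground-plane points into the image plane of this camera.
        pix = world.reshape(-1, 4) @ projections[cam].T  # (N, 3)
        z = pix[:, 2:3].clamp(min=1e-6)
        uv = pix[:, :2] / z  # pixel coordinates

        # Normalize to [-1, 1] for grid_sample; out-of-image points sample zeros.
        grid = torch.stack(
            [uv[:, 0] / (W - 1) * 2 - 1, uv[:, 1] / (H - 1) * 2 - 1], dim=-1
        ).reshape(1, bev_h, bev_w, 2)
        in_front = (pix[:, 2] > 0).float().reshape(1, bev_h, bev_w)

        sampled = F.grid_sample(feats[cam : cam + 1], grid, align_corners=True)
        bev += sampled[0] * in_front  # ignore points behind the camera
        valid += in_front

    return bev / valid.clamp(min=1)
```

A parameterized alternative such as depth splatting would instead predict a per-pixel depth distribution and scatter image features into the BEV grid; the paper compares these lifting strategies under the same detection and tracking heads.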


Results from the Paper


Task                  | Dataset    | Model                            | Metric | Value | Global Rank
Multiview Detection   | MultiviewX | TrackTacular (Bilinear Sampling) | MODA   | 96.5  | #1
Multiview Detection   | MultiviewX | TrackTacular (Bilinear Sampling) | MODP   | 75.0  | #7
Multiview Detection   | MultiviewX | TrackTacular (Bilinear Sampling) | Recall | 97.1  | #2
Multi-Object Tracking | MultiviewX | TrackTacular (Bilinear Sampling) | IDF1   | 85.6  | #1
Multi-Object Tracking | MultiviewX | TrackTacular (Bilinear Sampling) | MOTA   | 92.4  | #1
Multiview Detection   | Wildtrack  | TrackTacular (Depth Splatting)   | MODA   | 93.2  | #3
Multiview Detection   | Wildtrack  | TrackTacular (Depth Splatting)   | MODP   | 77.5  | #4
Multiview Detection   | Wildtrack  | TrackTacular (Depth Splatting)   | Recall | 95.8  | #3
Multi-Object Tracking | Wildtrack  | TrackTacular (Bilinear Sampling) | IDF1   | 95.3  | #1
Multi-Object Tracking | Wildtrack  | TrackTacular (Bilinear Sampling) | MOTA   | 91.8  | #1
