3D Multi-Object Tracking
31 papers with code • 6 benchmarks • 7 datasets
Image credit: Weng et al.
Latest papers with no code
ShaSTA-Fuse: Camera-LiDAR Sensor Fusion to Model Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking
Our main contributions include a novel fusion approach for combining camera and LiDAR sensory signals to learn affinities, and a first-of-its-kind multimodal sequential track confidence refinement technique that fuses 2D and 3D detections.
3D Multiple Object Tracking on Autonomous Driving: A Literature Review
This paper undertakes a comprehensive examination, assessment, and synthesis of the research landscape in this domain, remaining attuned to the latest developments in 3D MOT while suggesting prospective avenues for future investigation.
Which Framework is Suitable for Online 3D Multi-Object Tracking for Autonomous Driving with Automotive 4D Imaging Radar?
These provide the first benchmark and important insights for the future development of 4D imaging radar-based online 3D MOT algorithms.
Object Re-Identification from Point Clouds
To our knowledge, we are the first to study object re-identification from real point cloud observations.
ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box
We propose a hierarchical data association strategy that mines true objects from low-score detection boxes, which alleviates missed objects and fragmented trajectories.
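A minimal sketch of the two-stage idea behind this kind of hierarchical association (this is an illustration, not the authors' code; the function and field names, the greedy nearest-center matching, and the thresholds are all assumptions — the paper's actual implementation uses its own association costs):

```python
# Hierarchical (ByteTrack-style) association sketch:
# stage 1 matches tracks to high-score detections,
# stage 2 tries to recover still-unmatched tracks with low-score detections.

def center_distance(track, det):
    # Euclidean distance between 3D box centers (x, y, z).
    return sum((t - d) ** 2 for t, d in zip(track["center"], det["center"])) ** 0.5

def greedy_match(tracks, dets, max_dist):
    # Greedily pair each track with the nearest unmatched detection.
    matches, used = [], set()
    for ti, track in enumerate(tracks):
        best, best_d = None, max_dist
        for di, det in enumerate(dets):
            if di in used:
                continue
            d = center_distance(track, det)
            if d < best_d:
                best, best_d = di, d
        if best is not None:
            matches.append((ti, best))
            used.add(best)
    return matches

def hierarchical_associate(tracks, dets, score_thresh=0.5, max_dist=2.0):
    high = [d for d in dets if d["score"] >= score_thresh]
    low = [d for d in dets if d["score"] < score_thresh]
    # Stage 1: confident detections first.
    first = greedy_match(tracks, high, max_dist)
    matched = {ti for ti, _ in first}
    remaining = [t for i, t in enumerate(tracks) if i not in matched]
    # Stage 2: low-score boxes recover tracks missed in stage 1,
    # reducing fragmented trajectories. Indices in `second` refer to
    # positions in `remaining` and `low`, not the original lists.
    second = greedy_match(remaining, low, max_dist)
    return first, second
```

A track occluded for a frame often still produces a low-score box; matching it in the second stage keeps the trajectory alive instead of terminating and re-spawning it.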
End-to-end 3D Tracking with Decoupled Queries
In this work, we present an end-to-end framework for camera-based 3D multi-object tracking, called DQTrack.
ShaSTA: Modeling Shape and Spatio-Temporal Affinities for 3D Multi-Object Tracking
To address these issues in a unified framework, we propose to learn shape and spatio-temporal affinities between tracks and detections in consecutive frames.
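Once pairwise affinities between tracks and detections are learned, matching reduces to an assignment problem over the affinity matrix. A hypothetical sketch (the threshold, the greedy resolution order, and all names are illustrative assumptions, not ShaSTA's actual procedure, which learns the affinities end-to-end):

```python
# Resolve track-detection matches from a learned affinity matrix.
# affinity[i][j] is a score in [0, 1] that track i and detection j
# are the same object across consecutive frames.

def match_by_affinity(affinity, thresh=0.5):
    matches, used = [], set()
    # Process tracks in order of their best affinity, strongest first,
    # so confident pairs claim their detections before ambiguous ones.
    order = sorted(range(len(affinity)),
                   key=lambda i: -max(affinity[i], default=0.0))
    for i in order:
        best_j, best_a = None, thresh
        for j, a in enumerate(affinity[i]):
            if j not in used and a > best_a:
                best_j, best_a = j, a
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches
```

In practice an optimal assignment solver (e.g. the Hungarian algorithm) is commonly used in place of this greedy pass; the greedy version just keeps the sketch dependency-free.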
Development and evaluation of automated localisation and reconstruction of all fruits on tomato plants in a greenhouse based on multi-view perception and 3D multi-object tracking
The accuracy of the representation was evaluated in a real-world greenhouse, where tomatoes on the plants were successfully reconstructed and localised despite high levels of occlusion; the total tomato count was estimated with a maximum error of 5.08%, and the tomatoes were tracked with an accuracy of up to 71.47%.
DirectTracker: 3D Multi-Object Tracking Using Direct Image Alignment and Photometric Bundle Adjustment
Direct methods have shown excellent performance in visual odometry and SLAM.
CAMO-MOT: Combined Appearance-Motion Optimization for 3D Multi-Object Tracking with Camera-LiDAR Fusion
As such, we propose a novel camera-LiDAR fusion 3D MOT framework based on combined appearance-motion optimization (CAMO-MOT), which uses both camera and LiDAR data and significantly reduces tracking failures caused by occlusion and false detection.