1 code implementation • 18 Mar 2024 • Jonas Schramm, Niclas Vödisch, Kürsat Petek, B Ravi Kiran, Senthil Yogamani, Wolfram Burgard, Abhinav Valada
Semantic scene segmentation from a bird's-eye-view (BEV) perspective plays a crucial role in facilitating planning and decision-making for mobile robots.
no code implementations • 18 Oct 2023 • Rohit Mohan, Kiran Kumaraswamy, Juana Valeria Hurtado, Kürsat Petek, Abhinav Valada
Deep learning has enabled remarkable strides in scene understanding, with panoptic segmentation emerging as a key holistic scene interpretation task.
1 code implementation • 19 Sep 2023 • Markus Käppeler, Kürsat Petek, Niclas Vödisch, Wolfram Burgard, Abhinav Valada
Concurrently, recent breakthroughs in visual representation learning have sparked a paradigm shift, leading to large foundation models that can be trained on completely unlabeled images.
1 code implementation • 17 Mar 2023 • Niclas Vödisch, Kürsat Petek, Wolfram Burgard, Abhinav Valada
Operating a robot in the open world requires a high level of robustness with respect to previously unseen environments.
no code implementations • CVPR 2023 • Nikhil Gosala, Kürsat Petek, Paulo L. J. Drews-Jr, Wolfram Burgard, Abhinav Valada
Implicit supervision trains the model by enforcing spatial consistency of the scene over time based on FV semantic sequences, while explicit supervision exploits BEV pseudolabels generated from FV semantic annotations and self-supervised depth estimates.
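The explicit-supervision idea above, generating BEV pseudo-labels from front-view (FV) semantic annotations plus self-supervised depth, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, pinhole back-projection, grid size, and cell resolution are all illustrative assumptions, and the sketch ignores camera extrinsics, occlusion handling, and the temporal-consistency (implicit) loss entirely.

```python
import numpy as np

def fv_semantics_to_bev(semantics, depth, K, grid_size=(100, 100), cell_m=0.5):
    """Splat FV semantic labels into a BEV grid using per-pixel depth.

    Illustrative sketch only (not the paper's method). semantics: (H, W)
    integer class ids; depth: (H, W) metric depth along the optical axis;
    K: 3x3 camera intrinsics. Returns a BEV label map with -1 marking
    cells that no pixel projects into.
    """
    H, W = semantics.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Back-project pixels to camera coordinates (x right, z forward).
    x = (u - K[0, 2]) * depth / K[0, 0]
    z = depth
    # Discretise the ground plane: columns centred on the camera axis,
    # rows counting down from the far edge of the grid toward the camera.
    col = (x / cell_m + grid_size[1] / 2).astype(int)
    row = (grid_size[0] - 1 - z / cell_m).astype(int)
    bev = np.full(grid_size, -1, dtype=int)
    valid = (0 <= col) & (col < grid_size[1]) & (0 <= row) & (row < grid_size[0])
    bev[row[valid], col[valid]] = semantics[valid]
    return bev
```

In a training pipeline, a BEV map produced this way would serve as a (noisy) pseudo-label target for a BEV segmentation head, while the depth itself comes from a self-supervised estimator rather than ground truth.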
no code implementations • 20 Oct 2021 • Kürsat Petek, Kshitij Sirohi, Daniel Büscher, Wolfram Burgard
Robust localization in dense urban scenarios using a low-cost sensor setup and sparse HD maps is highly relevant to current advances in autonomous driving, but remains a challenging research problem.