no code implementations • 14 Oct 2022 • Fabian Märkert, Martin Sunkel, Anselm Haselhoff, Stefan Rudolph
Complete depth information and efficient estimators have become vital ingredients in scene understanding for automated driving tasks.
1 code implementation • 4 Jul 2022 • Fabian Küppers, Jonas Schneider, Anselm Haselhoff
Our experiments show that common detection models overestimate the spatial uncertainty in comparison to the observed error.
no code implementations • 25 Feb 2022 • Fabian Küppers, Anselm Haselhoff, Jan Kronenberger, Jonas Schneider
Calibrated confidence estimates obtained from neural networks are crucial, particularly for safety-critical applications such as autonomous driving or medical image diagnosis.
1 code implementation • 21 Sep 2021 • Fabian Küppers, Jan Kronenberger, Jonas Schneider, Anselm Haselhoff
We introduce Bayesian confidence calibration - a framework to obtain calibrated confidence estimates in conjunction with an uncertainty of the calibration method.
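The core idea above — a calibrated confidence together with an uncertainty over the calibration itself — can be sketched by drawing calibration parameters from an (approximate) posterior rather than using a single point estimate. The sketch below assumes a simple logistic (Platt-style) calibration map; the sampling scheme and helper names are illustrative, not the authors' implementation.

```python
import numpy as np

def sample_calibrated_confidences(logits, weight_samples, bias_samples):
    """For each (w, b) drawn from an approximate posterior over the
    calibration parameters, apply logistic calibration sigmoid(w*z + b).
    Returns shape (n_samples, n_points): a distribution of calibrated
    confidences instead of a single point estimate."""
    logits = np.asarray(logits, dtype=float)
    samples = []
    for w, b in zip(weight_samples, bias_samples):
        samples.append(1.0 / (1.0 + np.exp(-(w * logits + b))))
    return np.stack(samples)

# The mean over samples gives the calibrated confidence; the spread
# (e.g. std or quantiles) quantifies the calibration uncertainty.
dist = sample_calibrated_confidences(
    logits=[0.0, 2.0],
    weight_samples=[0.9, 1.0, 1.1],   # hypothetical posterior draws
    bias_samples=[0.0, 0.0, 0.0],
)
calibrated = dist.mean(axis=0)
uncertainty = dist.std(axis=0)
```

The mean/spread split is what distinguishes this from ordinary post-hoc calibration: downstream components can react not only to a low confidence but also to an unreliable calibration.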
no code implementations • 29 Apr 2021 • Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, Patrick Feifel, Tim Fingscheidt, Sujan Sai Gannamaneni, Seyed Eghbal Ghobadi, Ahmed Hammam, Anselm Haselhoff, Felix Hauser, Christian Heinzemann, Marco Hoffmann, Nikhil Kapoor, Falk Kappel, Marvin Klingner, Jan Kronenberger, Fabian Küppers, Jonas Löhdefink, Michael Mlynarski, Michael Mock, Firas Mualla, Svetlana Pavlitskaya, Maximilian Poretschkin, Alexander Pohl, Varun Ravi-Kumar, Julia Rosenzweig, Matthias Rottmann, Stefan Rüping, Timo Sämann, Jan David Schneider, Elena Schulz, Gesina Schwalbe, Joachim Sicking, Toshika Srivastava, Serin Varghese, Michael Weber, Sebastian Wirkert, Tim Wirtz, Matthias Woehrle
Our paper addresses both machine learning experts and safety engineers: the former may profit from the broad range of machine learning topics covered and the discussions of the limitations of recent methods.
no code implementations • 8 Jan 2021 • Franziska Schwaiger, Maximilian Henne, Fabian Küppers, Felippe Schmoeller Roza, Karsten Roscher, Anselm Haselhoff
Based on previous work, we study the miscalibration of object detection models with respect to image location and box scale.
no code implementations • 11 Dec 2020 • Jan Kronenberger, Anselm Haselhoff
Deploying machine learning models in safety-related domains (e.g. autonomous driving, medical diagnosis) demands approaches that are explainable, robust against adversarial attacks and aware of the model uncertainty.
1 code implementation • 28 Apr 2020 • Fabian Küppers, Jan Kronenberger, Amirhossein Shantia, Anselm Haselhoff
Therefore, we present a novel framework to measure and calibrate biased (or miscalibrated) confidence estimates of object detection methods.
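Measuring miscalibration typically starts from the Expected Calibration Error (ECE): confidences are binned, and per-bin accuracy is compared with per-bin mean confidence. The minimal sketch below shows this standard metric for the classification case; it is background for the excerpt above, not the paper's detection-specific framework.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the absolute gap
    between per-bin accuracy and per-bin mean confidence, weighted by the
    fraction of samples in each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

For example, a detector that is always right but reports 0.9 confidence has an ECE of 0.1; detection-specific variants additionally condition this measurement on box properties such as position and scale.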