Search Results for author: Anselm Haselhoff

Found 8 papers, 3 papers with code

Segmentation-guided Domain Adaptation for Efficient Depth Completion

no code implementations • 14 Oct 2022 • Fabian Märkert, Martin Sunkel, Anselm Haselhoff, Stefan Rudolph

Complete depth information and efficient estimators have become vital ingredients in scene understanding for automated driving tasks.

Depth Completion • Domain Adaptation • +2

Parametric and Multivariate Uncertainty Calibration for Regression and Object Detection

1 code implementation • 4 Jul 2022 • Fabian Küppers, Jonas Schneider, Anselm Haselhoff

Our experiments show that common detection models overestimate the spatial uncertainty in comparison to the observed error.

Object Detection • +3
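
A quick way to probe the kind of miscalibration described in the snippet above is to compare a model's predicted standard deviations against its observed errors, e.g. via the empirical coverage of Gaussian prediction intervals. The sketch below is a generic diagnostic under that assumption, not the paper's method; all names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def interval_coverage(y_true, mu, sigma, quantiles=(0.5, 0.9, 0.95)):
    """Fraction of ground-truth values falling inside the predicted central
    Gaussian interval at each nominal level.  If the model overestimates its
    spatial uncertainty (sigma too large), empirical coverage exceeds the
    nominal level."""
    coverage = {}
    for q in quantiles:
        half_width = norm.ppf(0.5 + q / 2.0) * sigma
        coverage[q] = (np.abs(y_true - mu) <= half_width).mean()
    return coverage

# Toy example: the model claims std = 2.0 while the true error std is 1.0,
# so all intervals over-cover -- the signature of overestimated uncertainty.
rng = np.random.default_rng(0)
mu = rng.normal(size=10_000)
y_true = mu + rng.normal(scale=1.0, size=10_000)
sigma = np.full(10_000, 2.0)
print(interval_coverage(y_true, mu, sigma))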

Confidence Calibration for Object Detection and Segmentation

no code implementations • 25 Feb 2022 • Fabian Küppers, Anselm Haselhoff, Jan Kronenberger, Jonas Schneider

Calibrated confidence estimates obtained from neural networks are crucial, particularly for safety-critical applications such as autonomous driving or medical image diagnosis.

Autonomous Driving • Instance Segmentation • +5
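
For context, "calibrated" here means that predicted confidence matches empirical accuracy. A standard diagnostic is the binned expected calibration error (ECE); the minimal version below is the textbook formulation, not code from the paper.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean gap between average confidence and
    observed accuracy within each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap
    return ece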

Bayesian Confidence Calibration for Epistemic Uncertainty Modelling

1 code implementation • 21 Sep 2021 • Fabian Küppers, Jan Kronenberger, Jonas Schneider, Anselm Haselhoff

We introduce Bayesian confidence calibration - a framework to obtain calibrated confidence estimates in conjunction with an uncertainty of the calibration method.

Object Detection • +1
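
The idea described above — a calibrated confidence together with an uncertainty over the calibration mapping itself — can be approximated with a small ensemble of Platt-style calibrators fit on bootstrap resamples. This is only a rough stand-in for illustration (the paper infers distributions over calibration parameters; see the linked code for the authors' implementation); all names are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_calibration(confidences, matched, n_members=50, seed=0):
    """Fit an ensemble of logistic (Platt-style) calibrators on bootstrap
    resamples and return a function mapping raw confidences to the
    (mean, std) of the calibrated confidences across the ensemble."""
    rng = np.random.default_rng(seed)
    X = np.asarray(confidences, dtype=float).reshape(-1, 1)
    y = np.asarray(matched, dtype=int)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(y), size=len(y))
        if len(np.unique(y[idx])) < 2:   # skip degenerate resamples
            continue
        members.append(LogisticRegression().fit(X[idx], y[idx]))

    def calibrate(raw):
        raw = np.asarray(raw, dtype=float).reshape(-1, 1)
        probs = np.stack([m.predict_proba(raw)[:, 1] for m in members])
        return probs.mean(axis=0), probs.std(axis=0)

    return calibrate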

Dependency Decomposition and a Reject Option for Explainable Models

no code implementations • 11 Dec 2020 • Jan Kronenberger, Anselm Haselhoff

Deploying machine learning models in safety-related domains (e.g. autonomous driving, medical diagnosis) demands approaches that are explainable, robust against adversarial attacks and aware of the model uncertainty.

Autonomous Driving • Explainable Models • +2
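
A reject option, one of the ingredients named above, simply lets the model abstain rather than predict under low confidence. The sketch below shows the generic thresholded form; the paper combines rejection with a decomposition of explanation dependencies, which is not reproduced here.

```python
import numpy as np

REJECT = -1  # hypothetical sentinel for "abstain"

def predict_with_reject(probs, threshold=0.8):
    """Return the argmax class per row, or REJECT when the top class
    probability falls below the threshold."""
    probs = np.asarray(probs, dtype=float)
    top = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(top >= threshold, preds, REJECT)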

Multivariate Confidence Calibration for Object Detection

1 code implementation • 28 Apr 2020 • Fabian Küppers, Jan Kronenberger, Amirhossein Shantia, Anselm Haselhoff

Therefore, we present a novel framework to measure and calibrate biased (or miscalibrated) confidence estimates of object detection methods.

Classifier calibration • Object • +2
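
"Multivariate" calibration here means the calibrated score may depend not only on the raw confidence but also on box geometry (position and scale). A minimal illustration of that idea is a logistic calibrator over [confidence, cx, cy, w, h]; this is a generic sketch, not the authors' implementation (see the linked code for that), and the feature layout is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_box_aware_calibrator(confidence, boxes, matched):
    """Fit a logistic calibrator on detection confidence *and* box
    geometry (cx, cy, w, h), so the calibrated score can vary with
    where and how large a detection is.

    confidence: (N,) raw scores; boxes: (N, 4) as cx, cy, w, h;
    matched: (N,) 1 if the detection matches ground truth, else 0.
    """
    X = np.column_stack([np.asarray(confidence, dtype=float),
                         np.asarray(boxes, dtype=float)])  # (N, 5)
    return LogisticRegression(max_iter=1000).fit(X, np.asarray(matched, dtype=int))

# Usage: calibrated = calibrator.predict_proba(X_new)[:, 1]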
