Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors

9 Jul 2021 · Tobias Riedlinger, Matthias Rottmann, Marius Schubert, Hanno Gottschalk

The vast majority of uncertainty quantification methods for deep object detectors, such as variational inference, are based on the network output. Here, we study gradient-based epistemic uncertainty metrics for deep object detectors in order to obtain reliable confidence estimates. We show that these metrics contain predictive information and that they capture information orthogonal to that of common output-based uncertainty estimation methods such as Monte Carlo dropout and deep ensembles. To this end, we use meta classification and meta regression to produce confidence estimates from gradient metrics and from other baseline uncertainty metrics; this approach is in principle applicable to any object detection architecture. Specifically, we employ false positive detection and prediction of localization quality to investigate the uncertainty content of our metrics, and we compute the calibration errors of the resulting meta classifiers. Moreover, we use the metrics as a post-processing filter mechanism in the object detection pipeline and compare object detection performance. Our results show that gradient-based uncertainty is by itself on par with output-based methods across different detectors and datasets. More significantly, combined meta classifiers based on gradient and output-based metrics outperform the standalone models. Based on this result, we conclude that gradient uncertainty adds orthogonal information to output-based methods. This suggests that variational inference may be supplemented by gradient-based uncertainty to obtain improved confidence measures, contributing to downstream applications of deep object detectors and improving their probabilistic reliability.
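To make the two ingredients of the abstract concrete, the following is a minimal sketch, not the paper's implementation: a per-detection gradient score computed as the norm of a candidate-loss gradient (using the detector's own prediction as pseudo ground truth, so no annotation is needed at inference time), and a logistic-regression meta classifier fit on combined gradient- and output-based metrics. The names `head`, `features`, `pred_label`, `X_grad`, `X_out`, and `y` are hypothetical placeholders assumed for illustration.

```python
# Minimal sketch (PyTorch + scikit-learn). All names below are hypothetical
# placeholders for illustration, not the paper's API.
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.linear_model import LogisticRegression


def gradient_uncertainty_score(head: torch.nn.Module,
                               features: torch.Tensor,
                               pred_label: torch.Tensor) -> float:
    """2-norm of the candidate-loss gradient w.r.t. the head's weights.

    The detector's own predicted class serves as pseudo ground truth,
    so the score can be computed for any detection at inference time.
    """
    logits = head(features)                     # (1, num_classes)
    loss = F.cross_entropy(logits, pred_label)  # pseudo label = own prediction
    grads = torch.autograd.grad(loss, list(head.parameters()))
    return torch.cat([g.flatten() for g in grads]).norm(p=2).item()


def fit_meta_classifier(X_grad: np.ndarray, X_out: np.ndarray, y: np.ndarray):
    """Meta classification: predict true vs. false positive per detection
    from concatenated gradient-based and output-based metrics."""
    X = np.concatenate([X_grad, X_out], axis=1)  # combined metric vector
    return LogisticRegression(max_iter=1000).fit(X, y)


# Example usage on a toy classification head:
head = torch.nn.Linear(256, 80)
features = torch.randn(1, 256)
pred_label = head(features).argmax(dim=1)
score = gradient_uncertainty_score(head, features, pred_label)
```

A meta regressor for localization quality follows the same pattern, fitting a regression model to per-detection IoU instead of the binary true-positive label; the paper's combined meta classifiers correspond to feeding both metric families into one model, as in `fit_meta_classifier` above.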
