Improving Video Instance Segmentation by Light-weight Temporal Uncertainty Estimates

Instance segmentation with neural networks is an essential task in environment perception. In many works, it has been observed that neural networks can predict false positive instances with high confidence values and true positives with low ones. Thus, it is important to accurately model the uncertainties of neural networks in order to prevent safety issues and foster interpretability. In applications such as automated driving, the reliability of neural networks is of the highest interest. In this paper, we present a time-dynamic approach to model uncertainties of instance segmentation networks and apply this to the detection of false positives as well as the estimation of prediction quality. The availability of image sequences in online applications allows for tracking instances over multiple frames. Based on an instance's history of shape and uncertainty information, we construct temporal instance-wise aggregated metrics. The latter are used as input to post-processing models that estimate the prediction quality in terms of instance-wise intersection over union. The proposed method only requires a readily trained neural network (which may operate on single frames) and video sequence input. In our experiments, we further demonstrate the use of the proposed method by replacing the traditional score value from object detection and thereby improving the overall performance of the instance segmentation network.
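The following is a minimal sketch, not the authors' implementation, of the idea described above: per-instance shape and uncertainty metrics are aggregated over a tracked instance's history and fed to a light-weight post-processing regressor that predicts the instance-wise IoU (which can then replace the detection score). The metric choices, helper names such as `instance_metrics` and `temporal_features`, and the use of a linear regression model are illustrative assumptions.

```python
# Sketch of instance-wise metric aggregation and IoU regression.
# Assumes per-frame instance masks and per-pixel softmax probabilities
# from an already trained instance segmentation network.

import numpy as np
from sklearn.linear_model import LinearRegression


def instance_metrics(mask: np.ndarray, probs: np.ndarray) -> dict:
    """Aggregate shape and uncertainty information for one predicted instance.

    mask:  boolean array (H, W), predicted instance mask
    probs: float array (H, W), softmax probability of the predicted class
    """
    p = probs[mask]
    return {
        "size": float(mask.sum()),        # instance size in pixels
        "mean_prob": float(p.mean()),     # mean class probability inside the mask
        "mean_margin": float((1.0 - p).mean()),  # simple per-pixel uncertainty proxy
    }


def temporal_features(history: list) -> np.ndarray:
    """Turn the metric history of one tracked instance (list of dicts, one per
    frame) into a single feature vector: temporal mean and overall change."""
    keys = sorted(history[0].keys())
    series = np.array([[h[k] for k in keys] for h in history])  # (T, n_metrics)
    return np.concatenate([series.mean(axis=0), series[-1] - series[0]])


# Post-processing model: regress instance-wise IoU from temporal features.
# `tracked_histories` and `iou_targets` are hypothetical training data obtained
# by matching predicted instances to ground truth on a validation set.
# X = np.stack([temporal_features(h) for h in tracked_histories])
# model = LinearRegression().fit(X, iou_targets)
# quality = model.predict(X_new)   # estimated IoU; can replace the score value
```

Any regression model could stand in for `LinearRegression` here; the point is that the post-processing step is cheap and operates only on the aggregated metrics, not on the images themselves.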
