Increasing Trustworthiness of Deep Neural Networks via Accuracy Monitoring

3 Jul 2020 · Zhihui Shao, Jianyi Yang, Shaolei Ren

Inference accuracy of deep neural networks (DNNs) is a crucial performance metric, but it can vary greatly in practice depending on the actual test dataset and is typically unknown due to the lack of ground-truth labels. This has raised significant concerns about the trustworthiness of DNNs, especially in safety-critical applications. In this paper, we address the trustworthiness of DNNs by using post-hoc processing to monitor the true inference accuracy on a user's dataset. Concretely, we propose a neural network-based accuracy monitor model, which takes only the deployed DNN's softmax probability output as its input and directly predicts whether the DNN's prediction result is correct, thus yielding an estimate of the true inference accuracy. The accuracy monitor model can be pre-trained on a dataset relevant to the target application of interest, and only needs to actively label a small portion (1% in our experiments) of the user's dataset for model transfer. For robust estimation, we further employ an ensemble of monitor models based on the Monte-Carlo dropout method. We evaluate our approach on different deployed DNN models for image classification and traffic sign detection over multiple datasets (including adversarial samples). The results show that our accuracy monitor model provides a close-to-true accuracy estimate and outperforms existing baseline methods.
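
To make the abstract's idea concrete, the sketch below shows one plausible way such an accuracy monitor could be implemented; it is not the authors' code. The class and function names (AccuracyMonitor, train_monitor, estimate_accuracy), the two-layer architecture, the hidden size, the dropout rate, and the number of Monte-Carlo samples are all assumptions. The monitor maps a softmax vector to the probability that the deployed DNN's prediction is correct, and the ensemble is approximated here by keeping dropout active at inference and averaging repeated stochastic forward passes.

```python
# Minimal sketch of the accuracy-monitoring idea described in the abstract.
# NOT the authors' implementation: names, architecture, and hyperparameters
# below are illustrative assumptions.

import torch
import torch.nn as nn


class AccuracyMonitor(nn.Module):
    """Binary classifier: softmax vector -> P(deployed DNN's prediction is correct)."""

    def __init__(self, num_classes: int, hidden: int = 64, p_drop: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),  # kept active at inference for MC-dropout ensembling
            nn.Linear(hidden, 1),
        )

    def forward(self, softmax_probs: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(softmax_probs)).squeeze(-1)


def train_monitor(monitor, softmax_probs, correct_labels, epochs=200, lr=1e-3):
    """Fit the monitor with binary cross-entropy on labeled data, where
    correct_labels[i] = 1.0 iff the deployed DNN classified sample i correctly."""
    opt = torch.optim.Adam(monitor.parameters(), lr=lr)
    loss_fn = nn.BCELoss()
    monitor.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(monitor(softmax_probs), correct_labels)
        loss.backward()
        opt.step()
    return monitor


def estimate_accuracy(monitor, softmax_probs, mc_samples: int = 20) -> float:
    """Estimate the deployed DNN's accuracy on an unlabeled dataset: query the
    monitor mc_samples times with dropout active (Monte-Carlo dropout) and
    average the predicted correctness probabilities over samples and data."""
    monitor.train()  # keep dropout layers active for MC sampling
    with torch.no_grad():
        preds = torch.stack([monitor(softmax_probs) for _ in range(mc_samples)])
    return preds.mean().item()
```

Following the abstract, such a monitor would be pre-trained on softmax outputs from a dataset relevant to the target application and then transferred by actively labeling roughly 1% of the user's data; the ensemble estimate in this sketch comes from averaging stochastic forward passes under dropout, which is one possible reading of the Monte-Carlo dropout ensemble described in the paper.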


Results from the Paper


Task                  Dataset  Model                Metric Name         Metric Value  Global Rank
Image Classification  STL-10   MP*                  Percentage correct  93.19         #28
Image Classification  STL-10   TS                   Percentage correct  88.03         #45
Image Classification  STL-10   Entropy              Percentage correct  71.65         #87
Image Classification  STL-10   Accuracy Monitoring  Percentage correct  68.62         #95
Image Classification  STL-10   MP                   Percentage correct  71.05         #89
