Deep Ensemble as a Gaussian Process Approximate Posterior

30 Apr 2022 · Zhijie Deng, Feng Zhou, Jianfei Chen, Guoqiang Wu, Jun Zhu

Deep Ensemble (DE) is an effective alternative to Bayesian neural networks for uncertainty quantification in deep learning. The uncertainty of DE is usually conveyed by the functional inconsistency among the ensemble members, i.e., the disagreement among their predictions. Yet, this functional inconsistency stems from unmanageable randomness and may easily collapse in specific cases. To render the uncertainty of DE reliable, we propose a refinement of DE in which the functional inconsistency is explicitly characterized and further tuned w.r.t. the training data and certain prior beliefs. Specifically, we describe the functional inconsistency with the empirical covariance of the functions dictated by the ensemble members, which, together with the mean, defines a Gaussian process (GP). Then, with specific prior uncertainty imposed, we maximize a functional evidence lower bound so that the GP specified by DE approximates the Bayesian posterior. In this way, we relate DE to Bayesian inference and thereby enjoy reliable Bayesian uncertainty. Moreover, we provide strategies to make the training efficient. Our approach incurs only marginally higher training cost than standard DE, yet achieves better uncertainty quantification than DE and its variants across diverse scenarios.
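As a rough illustration of the construction described in the abstract (not the authors' implementation), the sketch below treats the empirical mean and covariance of ensemble member outputs at a batch of measurement points as the moments of a Gaussian over function values. The network architecture, jitter term, and toy usage are all assumptions made for the example.

```python
import torch

def ensemble_gp_moments(members, x):
    """Empirical mean and covariance of ensemble outputs at measurement points x.

    members: list of torch.nn.Module, each mapping (N, D) inputs to (N, 1) outputs.
    x: (N, D) batch of measurement points.
    Returns the mean (N,) and covariance (N, N) of the implied Gaussian over f(x).
    """
    # Stack member outputs: one row per ensemble member, shape (M, N).
    outputs = torch.stack([m(x).squeeze(-1) for m in members])
    mean = outputs.mean(dim=0)          # (N,)
    centered = outputs - mean           # (M, N)
    # Empirical covariance across members; a small jitter keeps it positive definite.
    cov = centered.T @ centered / (len(members) - 1)
    cov = cov + 1e-4 * torch.eye(x.shape[0])
    return mean, cov

# Toy usage with small MLP members (hypothetical architecture).
if __name__ == "__main__":
    torch.manual_seed(0)
    members = [
        torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))
        for _ in range(5)
    ]
    x = torch.randn(8, 2)
    mean, cov = ensemble_gp_moments(members, x)
    gp_at_x = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    print(gp_at_x.sample().shape)  # a functional sample at the 8 measurement points
```

In the paper's framework, this GP view of the ensemble is what gets fit to the data via a functional evidence lower bound under an imposed prior; the snippet only shows how the ensemble's moments define the approximate posterior at a finite set of inputs.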
