Fast calculation of Gaussian Process multiple-fold cross-validation residuals and their covariances

8 Jan 2021 · David Ginsbourger, Cedric Schärer

We generalize fast Gaussian process leave-one-out formulae to multiple-fold cross-validation, highlighting in turn the covariance structure of cross-validation residuals in both Simple and Universal Kriging frameworks. We illustrate how the resulting covariances affect model diagnostics. We further establish, in the case of noiseless observations, that correcting for covariances between residuals in cross-validation-based estimation of the scale parameter leads back to MLE. In broader settings, we also highlight how differences between pseudo-likelihood and likelihood methods boil down to whether or not residual covariances are accounted for. The proposed fast calculation of cross-validation residuals is implemented and benchmarked against a naive implementation. Numerical experiments highlight the accuracy and substantial speed-ups that our approach enables. However, as supported by a discussion of the main drivers of computational costs and by a numerical benchmark, speed-ups steeply decline as the number of folds (assumed here to share a common size) decreases. An application to a contaminant-localization test case illustrates that grouping clustered observations into folds may improve model assessment and parameter fitting compared to leave-one-out cross-validation. Overall, our results enable fast multiple-fold cross-validation, have direct consequences for model diagnostics, and pave the way to future work on hyperparameter fitting and on the promising field of goal-oriented fold design.
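
To give a concrete feel for the kind of computation discussed in the abstract, below is a minimal sketch (not the paper's implementation) contrasting naive multiple-fold cross-validation by refitting with a fast computation that reuses a single inverse of the kernel matrix. It assumes the zero-mean, noiseless Simple Kriging setting and the standard block identities e_I = (K^{-1})_{II}^{-1} (K^{-1} y)_I and Cov(e_I, e_J) = sigma^2 (K^{-1})_{II}^{-1} (K^{-1})_{IJ} (K^{-1})_{JJ}^{-1}; the kernel choice, fold construction, and helper names (gp_kernel, naive_cv_residuals, fast_cv_residuals, cv_residual_cov) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def gp_kernel(X, lengthscale=0.2):
    """Exponential (Matern-1/2) covariance matrix; an illustrative choice."""
    d = np.abs(X[:, None, :] - X[None, :, :]).sum(axis=-1)
    return np.exp(-d / lengthscale)

def naive_cv_residuals(K, y, folds):
    """Multiple-fold CV residuals by refitting: for each fold I, predict y_I
    from the remaining observations and subtract the prediction from y_I."""
    n = len(y)
    residuals = []
    for I in folds:
        J = np.setdiff1d(np.arange(n), I)
        pred = K[np.ix_(I, J)] @ np.linalg.solve(K[np.ix_(J, J)], y[J])
        residuals.append(y[I] - pred)
    return residuals

def fast_cv_residuals(K, y, folds):
    """Fast CV residuals reusing a single inverse of K, via the block identity
    e_I = (K^{-1})_{II}^{-1} (K^{-1} y)_I (zero-mean, noiseless Simple Kriging)."""
    Kinv = np.linalg.inv(K)
    Kinv_y = Kinv @ y
    return [np.linalg.solve(Kinv[np.ix_(I, I)], Kinv_y[I]) for I in folds]

def cv_residual_cov(K, I, J):
    """Cross-covariance of fold residuals e_I and e_J, up to the scale parameter,
    assuming Cov(e_I, e_J) = sigma^2 (K^{-1})_{II}^{-1} (K^{-1})_{IJ} (K^{-1})_{JJ}^{-1}."""
    Kinv = np.linalg.inv(K)
    A = np.linalg.inv(Kinv[np.ix_(I, I)])
    B = np.linalg.inv(Kinv[np.ix_(J, J)])
    return A @ Kinv[np.ix_(I, J)] @ B

# Synthetic sanity check (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 1))
K = gp_kernel(X) + 1e-10 * np.eye(60)           # tiny jitter for stability
y = np.linalg.cholesky(K) @ rng.standard_normal(60)
folds = np.array_split(rng.permutation(60), 6)  # 6 folds of 10 points each

r_naive = naive_cv_residuals(K, y, folds)
r_fast = fast_cv_residuals(K, y, folds)
# Differences should be at round-off level.
print(max(np.max(np.abs(a - b)) for a, b in zip(r_naive, r_fast)))
```

This sketch also illustrates where the reported speed-up pattern comes from: the naive route solves one roughly n-by-n system per fold, whereas the fast route factorizes K once and then only handles fold-sized blocks of K^{-1}, an advantage that shrinks as the folds become fewer and larger.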

