Sparse Spectrum Gaussian Process for Bayesian Optimization

21 Jun 2019  ·  Ang Yang, Cheng Li, Santu Rana, Sunil Gupta, Svetha Venkatesh ·

We propose a novel sparse spectrum approximation of a Gaussian process (GP) tailored for Bayesian optimization. Whilst current sparse spectrum methods provide desirable approximations for regression problems, this particular form of sparse approximation is observed to generate an overconfident GP, i.e. one that produces less epistemic uncertainty than the original GP. Since the balance between the predictive mean and the predictive variance is the key determinant of the success of Bayesian optimization, current sparse spectrum methods are less suitable for it. We derive a new regularized marginal likelihood for finding the optimal frequencies that fixes this overconfidence issue, particularly for Bayesian optimization. The regularizer trades off accuracy in model fitting against a targeted increase in the predictive variance of the resultant GP. Specifically, we use the entropy of the global-maximum distribution of the posterior GP as the regularizer to be maximized. Since this distribution cannot be computed analytically, we first propose a Thompson sampling based approach and then a more efficient sequential Monte Carlo based approach to estimate it. We further show that the Expected Improvement acquisition function can serve as a proxy for the maximum distribution, making the whole process more efficient still. Experiments show a considerable improvement in the Bayesian optimization convergence rate over the vanilla sparse spectrum method, and over a full GP when its covariance matrix is ill-conditioned due to the presence of a large number of observations.
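For context, a minimal sketch of the vanilla sparse spectrum GP that the paper builds on: the GP is approximated by Bayesian linear regression on trigonometric features whose frequencies are samples from the kernel's spectral density (here drawn randomly for an RBF kernel, à la random Fourier features). All names and hyperparameters below are illustrative assumptions; the paper's contribution, optimizing the frequencies under an entropy-regularized marginal likelihood, is not shown here.

```python
import numpy as np

def rff_features(X, freqs, lengthscale):
    # Sparse spectrum feature map: [cos(x.w), sin(x.w)] per frequency w.
    proj = (X / lengthscale) @ freqs.T                  # (n, m)
    return np.hstack([np.cos(proj), np.sin(proj)])      # (n, 2m)

def ssgp_predict(X_train, y_train, X_test, m=50, lengthscale=1.0,
                 signal_var=1.0, noise_var=0.1, seed=0):
    """Predictive mean and variance of a sparse spectrum GP with m random
    frequencies (weight-space view: Bayesian linear regression on features)."""
    rng = np.random.default_rng(seed)
    d = X_train.shape[1]
    # Spectral samples of the RBF kernel are standard Gaussians.
    freqs = rng.standard_normal((m, d))
    prior_var = signal_var / m                          # weight prior variance
    Phi = rff_features(X_train, freqs, lengthscale)     # (n, 2m)
    Phi_s = rff_features(X_test, freqs, lengthscale)
    # Posterior over weights: A / (noise_var * prior_var) is the precision.
    A = prior_var * (Phi.T @ Phi) + noise_var * np.eye(2 * m)
    A_inv = np.linalg.inv(A)
    mean = prior_var * Phi_s @ (A_inv @ (Phi.T @ y_train))
    var = (noise_var * prior_var *
           np.einsum('ij,jk,ik->i', Phi_s, A_inv, Phi_s) + noise_var)
    return mean, var
```

The overconfidence the abstract describes shows up in `var`: with a finite, fixed set of frequencies the approximate posterior can report much smaller epistemic uncertainty away from the data than the full GP would, which is what the proposed regularizer counteracts.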
