1 code implementation • 18 Apr 2024 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Saad Hamid, Harald Oberhauser, Michael A. Osborne
Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, flexibility in handling discrete and continuous variables simultaneously, model misspecification, and, lastly, fast, massive parallelisation.
no code implementations • 2 Feb 2024 • Juliusz Ziomek, Masaki Adachi, Michael A. Osborne
Previously proposed algorithms with the no-regret property could handle only the special case of unknown lengthscales and reproducing kernel Hilbert space norm, and applied only to the frequentist setting.
no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
1 code implementation • 26 Oct 2023 • Masaki Adachi, Brady Planden, David A. Howey, Michael A. Osborne, Sebastian Orbell, Natalia Ares, Krikamol Muandet, Siu Lun Chau
Like many optimizers, Bayesian optimization often falls short of gaining user trust due to opacity.
1 code implementation • 9 Jun 2023 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Xingchen Wan, Vu Nguyen, Harald Oberhauser, Michael A. Osborne
Active learning parallelization is widely used, but typically relies on fixing the batch size throughout experimentation.
1 code implementation • NeurIPS 2021 • Tim G. J. Rudner, Cong Lu, Michael A. Osborne, Yarin Gal, Yee Whye Teh
KL-regularized reinforcement learning from expert demonstrations has proved successful in improving the sample efficiency of deep reinforcement learning algorithms, allowing them to be applied to challenging physical real-world tasks.
1 code implementation • 28 Oct 2022 • Masaki Adachi, Yannick Kuhn, Birger Horstmann, Arnulf Latz, Michael A. Osborne, David A. Howey
We show that popular model selection criteria, such as root-mean-square error and Bayesian information criterion, can fail to select a parsimonious model in the case of a multimodal posterior.
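For context, the two criteria named above are standard; here is a minimal sketch of how they are usually computed (the inputs are hypothetical model-fit quantities, not the paper's code):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observations and model predictions."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L_hat). Lower is better."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood
```

Both scores reflect only fit and parameter count, so neither registers posterior multimodality, the failure mode highlighted above.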
2 code implementations • 18 Oct 2022 • Samuel Daulton, Xingchen Wan, David Eriksson, Maximilian Balandat, Michael A. Osborne, Eytan Bakshy
We prove that, under suitable reparameterizations, the BO policy that maximizes the probabilistic objective is the same as that which maximizes the acquisition function (AF), and therefore probabilistic reparameterization (PR) enjoys the same regret bounds as the original BO policy using the underlying AF.
1 code implementation • 4 Oct 2022 • Michael K. Cohen, Samuel Daulton, Michael A. Osborne
We present a new kernel that allows for Gaussian process regression in $O((n+m)\log(n+m))$ time.
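For scale, exact GP regression ordinarily costs $O(n^3)$ through a Cholesky factorisation of the kernel matrix; below is a minimal sketch of that standard baseline with an RBF kernel (this is the method being accelerated, not the paper's new kernel):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential covariance between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_posterior_mean(X, y, X_star, noise=1e-2):
    """Exact GP posterior mean; the Cholesky solve is the O(n^3) bottleneck."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(X_star, X) @ alpha
```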
no code implementations • 1 Sep 2022 • Martin Jørgensen, Michael A. Osborne
We introduce a kernel that allows the number of summarising variables to grow exponentially with the number of input features, while requiring cost only linear in both the number of observations and the number of input features.
2 code implementations • 19 Jul 2022 • Xingchen Wan, Cong Lu, Jack Parker-Holder, Philip J. Ball, Vu Nguyen, Binxin Ru, Michael A. Osborne
Leveraging the new highly parallelizable Brax physics engine, we show that these innovations lead to large performance gains, significantly outperforming the tuned baseline while learning entire configurations on the fly.
2 code implementations • 9 Jun 2022 • Masaki Adachi, Satoshi Hayakawa, Martin Jørgensen, Harald Oberhauser, Michael A. Osborne
Empirically, we find that our approach significantly outperforms the sampling efficiency of both state-of-the-art BQ techniques and Nested Sampling in various real-world datasets, including lithium-ion battery analytics.
2 code implementations • 9 Jun 2022 • Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh
Using this suite of benchmarking tasks, we show that simple modifications to two popular vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform existing offline RL methods and establish competitive baselines for continuous control in the visual domain.
1 code implementation • 15 Feb 2022 • Samuel Daulton, Sait Cakmak, Maximilian Balandat, Michael A. Osborne, Enlu Zhou, Eytan Bakshy
In many manufacturing processes, the design parameters are subject to random input noise, resulting in a product that is often less performant than expected.
1 code implementation • 4 Nov 2021 • Xingchen Wan, Henry Kenlay, Binxin Ru, Arno Blaas, Michael A. Osborne, Xiaowen Dong
While the majority of the literature focuses on such vulnerability in node-level classification tasks, little effort has been dedicated to analysing adversarial attacks on graph-level classification, an important problem with numerous real-life applications such as biochemistry and social network analysis.
no code implementations • 22 Oct 2021 • Vu Nguyen, Marc Peter Deisenroth, Michael A. Osborne
More specifically, we propose the first use of such bounds to improve Gaussian process (GP) posterior sampling and Bayesian optimization (BO).
no code implementations • 8 Oct 2021 • Cong Lu, Philip J. Ball, Jack Parker-Holder, Michael A. Osborne, Stephen J. Roberts
Significant progress has been made recently in offline model-based reinforcement learning, an approach that leverages a learned dynamics model.
no code implementations • 5 Jul 2021 • Edward Wagstaff, Fabian B. Fuchs, Martin Engelcke, Michael A. Osborne, Ingmar Posner
We provide a theoretical analysis of Deep Sets which shows that this universal approximation property is only guaranteed if the model's latent space is sufficiently high-dimensional.
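For context, a Deep Sets model has the form $\rho\bigl(\sum_i \phi(x_i)\bigr)$, which is permutation-invariant by construction; a minimal PyTorch sketch follows, where `latent_dim` stands in for the latent dimensionality that the analysis shows must be sufficiently large (the layer widths are illustrative):

```python
import torch
import torch.nn as nn

class DeepSets(nn.Module):
    """rho(sum_i phi(x_i)): invariant to the ordering of set elements."""
    def __init__(self, in_dim, latent_dim, out_dim):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.ReLU(),
                                 nn.Linear(latent_dim, latent_dim))
        self.rho = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                                 nn.Linear(latent_dim, out_dim))

    def forward(self, x):  # x: (batch, set_size, in_dim)
        return self.rho(self.phi(x).sum(dim=1))
```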
no code implementations • 14 Jun 2021 • Saad Hamid, Sebastian Schulze, Michael A. Osborne, Stephen J. Roberts
Marginalising over families of Gaussian Process kernels produces flexible model classes with well-calibrated uncertainty estimates.
1 code implementation • 14 Feb 2021 • Xingchen Wan, Vu Nguyen, Huong Ha, Binxin Ru, Cong Lu, Michael A. Osborne
High-dimensional black-box optimisation remains an important yet notoriously challenging problem.
1 code implementation • NeurIPS 2020 • Vu Nguyen, Vaden Masrani, Rob Brekelmans, Michael A. Osborne, Frank Wood
Achieving the full promise of the Thermodynamic Variational Objective (TVO), a recently proposed variational lower bound on the log evidence involving a one-dimensional Riemann integral approximation, requires choosing a "schedule" of sorted discretization points.
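Concretely, the TVO discretises the identity $\log p(x) = \int_0^1 \mathbb{E}_{\pi_\beta}[\log w]\,d\beta$ with a left Riemann sum over the chosen schedule; a minimal sketch, assuming the per-$\beta$ expectations have already been estimated elsewhere:

```python
import numpy as np

def tvo_lower_bound(betas, expected_log_w):
    """Left Riemann sum over a schedule 0 = beta_0 < ... < beta_K = 1.

    betas:          sorted discretization points in [0, 1]
    expected_log_w: estimates of E_{pi_beta}[log w] at each point
    The integrand is non-decreasing in beta, so the left sum
    lower-bounds the log evidence.
    """
    widths = np.diff(betas)  # beta_{k+1} - beta_k
    return float(np.sum(widths * np.asarray(expected_log_w)[:-1]))
```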
1 code implementation • 13 Jun 2020 • Vu Nguyen, Tam Le, Makoto Yamada, Michael A. Osborne
Building upon tree-Wasserstein (TW), a negative definite variant of optimal transport (OT), we develop a novel discrepancy for neural architectures, and demonstrate it within a Gaussian process surrogate model for the sequential NAS setting.
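For context, the tree-Wasserstein distance admits the closed form $\mathrm{TW}(\mu,\nu) = \sum_e w_e \,\bigl|\mu(\Gamma_e) - \nu(\Gamma_e)\bigr|$, where $\Gamma_e$ is the subtree below edge $e$; a minimal sketch, assuming the tree is stored as a parent array with children indexed after their parents (a representation chosen here for illustration):

```python
import numpy as np

def tree_wasserstein(parent, edge_w, mu, nu):
    """TW distance: sum over edges of w_e * |subtree mass difference|.

    parent[i] is node i's parent, with parent[0] == -1 for the root and
    parent[i] < i otherwise; edge_w[i] weights the edge (i, parent[i]).
    """
    diff = np.asarray(mu, float) - np.asarray(nu, float)
    total = 0.0
    for i in range(len(parent) - 1, 0, -1):  # children before parents
        total += edge_w[i] * abs(diff[i])    # mass crossing edge (i, parent[i])
        diff[parent[i]] += diff[i]           # push subtree mass upwards
    return total
```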
1 code implementation • NeurIPS 2020 • Vu Nguyen, Sebastian Schulze, Michael A. Osborne
We demonstrate the efficiency of our algorithm by tuning hyperparameters for the training of deep reinforcement learning agents and convolutional neural networks.
no code implementations • 22 Aug 2019 • Favour M. Nyikosa, Michael A. Osborne, Stephen J. Roberts
Financial markets are complex environments that produce enormous amounts of noisy and non-stationary data.
2 code implementations • ICML 2020 • Binxin Ru, Ahsan S. Alvi, Vu Nguyen, Michael A. Osborne, Stephen J. Roberts
Efficient optimisation of black-box problems that comprise both continuous and categorical inputs is important, yet poses significant challenges.
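A common way to handle both input types in one surrogate is to combine a continuous kernel with a categorical-overlap kernel by sum and product; the sketch below is an illustrative mixed kernel of this kind, not necessarily the paper's exact construction (`lam` trades off the additive and multiplicative combinations):

```python
import numpy as np

def mixed_kernel(x1, c1, x2, c2, lengthscale=1.0, lam=0.5):
    """Illustrative kernel over mixed inputs: RBF on the continuous parts,
    category overlap on the categorical parts, combined by sum and product."""
    k_cont = np.exp(-0.5 * np.sum((np.asarray(x1) - np.asarray(x2)) ** 2)
                    / lengthscale**2)
    k_cat = np.mean(np.asarray(c1) == np.asarray(c2))
    return (1 - lam) * 0.5 * (k_cont + k_cat) + lam * k_cont * k_cat
```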
1 code implementation • ICML 2020 • Vu Nguyen, Michael A. Osborne
In this paper, we consider a new setting in BO in which knowledge of the optimum output $f^*$ is available.
no code implementations • 26 Feb 2019 • Henry Chai, Jean-Francois Ton, Roman Garnett, Michael A. Osborne
We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection.
1 code implementation • 22 Feb 2019 • Gabriele Abbati, Philippe Wenk, Michael A. Osborne, Andreas Krause, Bernhard Schölkopf, Stefan Bauer
Stochastic differential equations are an important modeling class in many disciplines.
2 code implementations • 17 Feb 2019 • Philippe Wenk, Gabriele Abbati, Michael A. Osborne, Bernhard Schölkopf, Andreas Krause, Stefan Bauer
Parameter inference in ordinary differential equations is an important problem in many applied sciences and in engineering, especially in a data-scarce setting.
1 code implementation • 29 Jan 2019 • Ahsan S. Alvi, Binxin Ru, Jan Calliess, Stephen J. Roberts, Michael A. Osborne
Batch Bayesian optimisation (BO) has been successfully applied to hyperparameter tuning using parallel computing, but it is wasteful of resources: workers that complete jobs ahead of others are left idle.
no code implementations • 26 Nov 2018 • François-Xavier Briol, Chris J. Oates, Mark Girolami, Michael A. Osborne, Dino Sejdinovic
This article is the rejoinder for the paper "Probabilistic Integration: A Role in Statistical Computation?"
no code implementations • 17 Jul 2018 • Robert R. Richardson, Michael A. Osborne, David A. Howey
Accurately predicting the future health of batteries is necessary to ensure reliable operation, minimise maintenance costs, and calculate the value of energy storage investments.
no code implementations • 27 May 2018 • Supratik Paul, Michael A. Osborne, Shimon Whiteson
Policy gradient methods ignore the potential value of adjusting environment variables: unobservable state features that are randomly determined by the environment in a physical setting, but are controllable in a simulator.
1 code implementation • ICML 2018 • Mark McLeod, Michael A. Osborne, Stephen J. Roberts
We develop the first Bayesian Optimization algorithm, BLOSSOM, which selects between multiple alternative acquisition functions and traditional local optimization at each step.
no code implementations • 28 Mar 2018 • Zhikuan Zhao, Jack K. Fitzsimons, Michael A. Osborne, Stephen J. Roberts, Joseph F. Fitzsimons
Gaussian processes (GPs) are important models in supervised machine learning.
no code implementations • 9 Mar 2018 • Favour M. Nyikosa, Michael A. Osborne, Stephen J. Roberts
We propose practical extensions to Bayesian optimization for solving dynamic problems.
1 code implementation • ICML 2018 • Binxin Ru, Mark McLeod, Diego Granziol, Michael A. Osborne
Information-theoretic Bayesian optimisation techniques have demonstrated state-of-the-art performance in tackling important global optimisation problems.
2 code implementations • NeurIPS 2016 • Tom Rainforth, Tuan Anh Le, Jan-Willem van de Meent, Michael A. Osborne, Frank Wood
We present the first general-purpose framework for marginal maximum a posteriori estimation of probabilistic program variables.
1 code implementation • 13 Jul 2017 • Nikitas Rontsis, Michael A. Osborne, Paul J. Goulart
Our acquisition function is a lower bound on the well-known Expected Improvement function, which requires evaluating a Gaussian expectation of a multivariate piecewise affine function.
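For reference, the single-point Expected Improvement has a closed form under a GP posterior (it is the batch, multipoint case that requires the multivariate Gaussian expectation mentioned above); a minimal sketch of the closed-form version, for minimisation:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimisation: E[max(f_best - f(x), 0)]."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```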
no code implementations • 2 May 2017 • Syed Ali Asad Rizvi, Stephen J. Roberts, Michael A. Osborne, Favour Nyikosa
In this paper we use Gaussian Process (GP) regression to propose a novel approach for predicting volatility of financial returns by forecasting the envelopes of the time series.
no code implementations • 23 Mar 2017 • Justin D. Bewsher, Alessandra Tosi, Michael A. Osborne, Stephen J. Roberts
We fill the gap in the existing literature by deriving the moments of the arc length for a stationary GP with multiple output dimensions.
no code implementations • 16 Mar 2017 • Robert R. Richardson, Michael A. Osborne, David A. Howey
Accurately predicting the future capacity and remaining useful life of batteries is necessary to ensure reliable system operation and to minimise maintenance costs.
1 code implementation • 13 Mar 2017 • Mark McLeod, Michael A. Osborne, Stephen J. Roberts
We propose a novel Bayesian Optimization approach for black-box functions with an environmental variable whose value determines the tradeoff between evaluation cost and the fidelity of the evaluations.
no code implementations • 24 May 2016 • Supratik Paul, Konstantinos Chatzilygeroudis, Kamil Ciosek, Jean-Baptiste Mouret, Michael A. Osborne, Shimon Whiteson
ALOQ is robust to the presence of significant rare events, which may not be observable under random sampling, but play a substantial role in determining the optimal policy.
1 code implementation • 22 Feb 2016 • Kurt Cutajar, Michael A. Osborne, John P. Cunningham, Maurizio Filippone
When kernel machines are scaled with iterative linear solvers, ill-conditioned kernel matrices slow convergence; preconditioning is a common approach to alleviating this issue.
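Concretely, preconditioning accelerates iterative solvers for the kernel system $K\alpha = y$; a minimal sketch using SciPy's conjugate gradients with a Jacobi (diagonal) preconditioner, an illustrative choice rather than the paper's:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def solve_kernel_system(K, y):
    """Solve K alpha = y by preconditioned CG; K must be symmetric positive
    definite (e.g. a kernel matrix with a noise term added to its diagonal)."""
    n = K.shape[0]
    inv_diag = 1.0 / np.diag(K)
    M = LinearOperator((n, n), matvec=lambda v: inv_diag * v)
    alpha, info = cg(K, y, M=M)
    if info != 0:
        raise RuntimeError("CG did not converge")
    return alpha
```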
no code implementations • 3 Dec 2015 • François-Xavier Briol, Chris J. Oates, Mark Girolami, Michael A. Osborne, Dino Sejdinovic
A research frontier has emerged in scientific computation, wherein numerical error is regarded as a source of epistemic uncertainty that can be modelled.
no code implementations • 27 Oct 2015 • Thomas Nickson, Tom Gunter, Chris Lloyd, Michael A. Osborne, Stephen Roberts
We present Blitzkriging, a new approach to fast inference for Gaussian processes, applicable to regression, optimisation and classification.
no code implementations • 8 Sep 2015 • Arnold Salas, Stephen J. Roberts, Michael A. Osborne
Online Passive-Aggressive (PA) learning is a class of online margin-based algorithms suitable for a wide range of real-time prediction tasks, including classification and regression.
no code implementations • NeurIPS 2015 • François-Xavier Briol, Chris J. Oates, Mark Girolami, Michael A. Osborne
There is renewed interest in formulating integration as an inference problem, motivated by obtaining a full distribution over numerical error that can be propagated through subsequent computation.
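For context, Bayesian quadrature places a GP prior on the integrand, so the posterior mean of the integral is a weighted sum $z^\top K^{-1} f$ with kernel-mean weights; a minimal 1D sketch with an RBF kernel against a standard Gaussian measure (the closed-form $z$ below follows from Gaussian convolution):

```python
import numpy as np

def bq_estimate(x, f, lengthscale=1.0, jitter=1e-10):
    """Posterior mean of the integral of f dN(0,1) under an RBF GP prior on f."""
    l2 = lengthscale ** 2
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / l2)
    K += jitter * np.eye(len(x))  # numerical stabilisation
    # Kernel mean embedding of N(0,1): z_i = integral of k(x, x_i) dN(0,1).
    z = np.sqrt(l2 / (l2 + 1.0)) * np.exp(-0.5 * x**2 / (l2 + 1.0))
    return float(z @ np.linalg.solve(K, f))
```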
no code implementations • 3 Jun 2015 • Philipp Hennig, Michael A. Osborne, Mark Girolami
We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations.
no code implementations • NeurIPS 2014 • Tom Gunter, Michael A. Osborne, Roman Garnett, Philipp Hennig, Stephen J. Roberts
We propose a novel sampling framework for inference in probabilistic models: an active learning approach that converges more quickly (in wall-clock time) than Markov chain Monte Carlo (MCMC) benchmarks.
no code implementations • 2 Nov 2014 • Chris Lloyd, Tom Gunter, Michael A. Osborne, Stephen J. Roberts
We present the first fully variational Bayesian inference scheme for continuous Gaussian-process-modulated Poisson processes.
no code implementations • 14 Sep 2014 • Kevin Swersky, David Duvenaud, Jasper Snoek, Frank Hutter, Michael A. Osborne
In practical Bayesian optimization, we must often search over structures with differing numbers of parameters.
no code implementations • 30 Jul 2014 • Thomas Nickson, Michael A. Osborne, Steven Reece, Stephen J. Roberts
However, the state of the art in Bayesian optimisation is incapable of scaling to the large number of evaluations of algorithm performance required to fit realistic models to complex, large datasets.
no code implementations • 25 Jul 2014 • Tom Gunter, Chris Lloyd, Michael A. Osborne, Stephen J. Roberts
This paper presents a Bayesian generative model for dependent Cox point processes, alongside an efficient inference scheme which scales as if the point processes were modelled independently.
no code implementations • 24 Oct 2013 • Roman Garnett, Michael A. Osborne, Philipp Hennig
We propose an active learning method for discovering low-dimensional structure in high-dimensional Gaussian process (GP) tasks.
no code implementations • 21 Oct 2013 • Frank Hutter, Michael A. Osborne
We define a family of kernels for mixed continuous/discrete hierarchical parameter spaces and show that they are positive definite.