Search Results for author: José Miguel Hernández-Lobato

Found 112 papers, 61 papers with code

A Generative Model of Symmetry Transformations

no code implementations 4 Mar 2024 James Urquhart Allingham, Bruno Kacper Mlodozeniec, Shreyas Padhy, Javier Antorán, David Krueger, Richard E. Turner, Eric Nalisnick, José Miguel Hernández-Lobato

Correctly capturing the symmetry transformations of data can lead to efficient models with strong generalization capabilities, though methods incorporating symmetries often require prior knowledge.

Diffusive Gibbs Sampling

1 code implementation 5 Feb 2024 Wenlin Chen, Mingtian Zhang, Brooks Paige, José Miguel Hernández-Lobato, David Barber

The inadequate mixing of conventional Markov Chain Monte Carlo (MCMC) methods for multi-modal distributions presents a significant challenge in practical applications such as Bayesian inference and molecular dynamics.

Bayesian Inference
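
For intuition, the mixing failure is easy to reproduce. Below is a toy sketch (illustrative only, not the paper's method): random-walk Metropolis on a well-separated bimodal target, started in one mode, essentially never visits the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    # Unnormalized bimodal target: equal mixture of N(-4, 1) and N(4, 1).
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

x, chain = -4.0, []
for _ in range(20_000):
    prop = x + rng.normal(scale=0.5)      # small random-walk proposal
    if np.log(rng.uniform()) < log_p(prop) - log_p(x):
        x = prop
    chain.append(x)

# With well-separated modes, this fraction stays near 0 despite 20k steps.
print("fraction of samples near the +4 mode:", np.mean(np.array(chain) > 0))
```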

Adam through a Second-Order Lens

1 code implementation 23 Oct 2023 Ross M. Clarke, Baiyu Su, José Miguel Hernández-Lobato

Research into optimisation for deep learning is characterised by a tension between the computational efficiency of first-order, gradient-based methods (such as SGD and Adam) and the theoretical efficiency of second-order, curvature-based methods (such as quasi-Newton methods and K-FAC).

Computational Efficiency Second-order methods

Series of Hessian-Vector Products for Tractable Saddle-Free Newton Optimisation of Neural Networks

1 code implementation 23 Oct 2023 Elre T. Oldewage, Ross M. Clarke, José Miguel Hernández-Lobato

A truncation of this infinite series provides a new optimisation algorithm which is scalable and comparable to other first- and second-order optimisation methods in both runtime and optimisation performance.
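
A loose illustration of optimisation from Hessian-vector products alone: the sketch below approximates an ordinary Newton step with a truncated Neumann series, which likewise needs only Hessian-vector products. This is not the paper's saddle-free series, and an explicit matrix stands in for autodiff.

```python
import numpy as np

def hvp(H, v):
    # Stand-in for an autodiff Hessian-vector product.
    return H @ v

def truncated_newton_step(H, g, a=0.1, K=50):
    # H^{-1} g ~= a * sum_{k=0..K} (I - a H)^k g, valid when ||I - a H|| < 1.
    term, step = g.copy(), g.copy()
    for _ in range(K):
        term = term - a * hvp(H, term)    # next series term (I - aH)^k g
        step = step + term
    return a * step

H = np.diag([1.0, 2.0, 5.0])
g = np.ones(3)
print("series:", truncated_newton_step(H, g))
print("exact: ", np.linalg.solve(H, g))
```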

Genetic algorithms are strong baselines for molecule generation

1 code implementation 13 Oct 2023 Austin Tripp, José Miguel Hernández-Lobato

Generating molecules, in both a directed and an undirected fashion, is a huge part of the drug discovery pipeline.

Drug Discovery

Retro-fallback: retrosynthetic planning in an uncertain world

no code implementations 13 Oct 2023 Austin Tripp, Krzysztof Maziarz, Sarah Lewis, Marwin Segler, José Miguel Hernández-Lobato

Retrosynthesis is the task of proposing a series of chemical reactions to create a desired molecule from simpler, buyable molecules.

Retrosynthesis

RECOMBINER: Robust and Enhanced Compression with Bayesian Implicit Neural Representations

1 code implementation 29 Sep 2023 Jiajun He, Gergely Flamich, Zongyu Guo, José Miguel Hernández-Lobato

COMpression with Bayesian Implicit NEural Representations (COMBINER) is a recent data compression method that addresses a key inefficiency of previous Implicit Neural Representation (INR)-based approaches: it avoids quantization and enables direct optimization of the rate-distortion performance.

Data Compression Quantization

SE(3) Equivariant Augmented Coupling Flows

1 code implementation NeurIPS 2023 Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems.
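
The fast sampling and density evaluation come from the coupling structure itself: each layer is invertible in a single pass and its log-determinant is a simple sum. A minimal, non-equivariant affine coupling layer is sketched below; the paper's SE(3)-equivariant, augmented construction is considerably more involved.

```python
import numpy as np

def affine_coupling(x, w_s, w_t):
    # First half passes through; second half is scaled/shifted, conditioned on x1.
    x1, x2 = np.split(x, 2)
    s, t = np.tanh(w_s @ x1), w_t @ x1
    y2 = x2 * np.exp(s) + t
    log_det = np.sum(s)                   # exact log|det Jacobian|, no solve needed
    return np.concatenate([x1, y2]), log_det

def affine_coupling_inverse(y, w_s, w_t):
    y1, y2 = np.split(y, 2)
    s, t = np.tanh(w_s @ y1), w_t @ y1
    return np.concatenate([y1, (y2 - t) * np.exp(-s)])

rng = np.random.default_rng(0)
w_s, w_t = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
x = rng.normal(size=4)
y, log_det = affine_coupling(x, w_s, w_t)
print("invertible:", np.allclose(affine_coupling_inverse(y, w_s, w_t), x))
```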

Minimal Random Code Learning with Mean-KL Parameterization

no code implementations 15 Jul 2023 Jihao Andreas Lin, Gergely Flamich, José Miguel Hernández-Lobato

To achieve the desired compression rate, $D_{\mathrm{KL}}[Q_{\mathbf{w}} \Vert P_{\mathbf{w}}]$ must be constrained, which requires a computationally expensive annealing procedure under the conventional mean-variance (Mean-Var) parameterization for $Q_{\mathbf{w}}$.
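
For reference, the constrained quantity is cheap to evaluate once the parameterization is fixed; below is a minimal sketch of $D_{\mathrm{KL}}[Q_{\mathbf{w}} \Vert P_{\mathbf{w}}]$ for diagonal Gaussians under the conventional Mean-Var parameterization (toy values, not the paper's setup).

```python
import numpy as np

def kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p):
    # D_KL[Q_w || P_w] in nats for diagonal Gaussians.
    return np.sum(np.log(sig_p / sig_q)
                  + (sig_q**2 + (mu_q - mu_p)**2) / (2.0 * sig_p**2)
                  - 0.5)

rng = np.random.default_rng(0)
mu_q, sig_q = rng.normal(size=100), np.full(100, 0.1)   # variational posterior Q_w
mu_p, sig_p = np.zeros(100), np.ones(100)               # coding prior P_w
bits = kl_diag_gaussians(mu_q, sig_q, mu_p, sig_p) / np.log(2.0)
print(f"KL budget used: {bits:.1f} bits")
```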

Online Laplace Model Selection Revisited

no code implementations 12 Jul 2023 Jihao Andreas Lin, Javier Antorán, José Miguel Hernández-Lobato

The Laplace approximation provides a closed-form model selection objective for neural networks (NN).

Model Selection

Leveraging Task Structures for Improved Identifiability in Neural Network Representations

no code implementations 26 Jun 2023 Wenlin Chen, Julien Horwood, Juyeon Heo, José Miguel Hernández-Lobato

This work extends the theory of identifiability in supervised learning by considering the consequences of having access to a distribution of tasks.

Representation Learning

Sampling from Gaussian Process Posteriors using Stochastic Gradient Descent

1 code implementation NeurIPS 2023 Jihao Andreas Lin, Javier Antorán, Shreyas Padhy, David Janz, José Miguel Hernández-Lobato, Alexander Terenin

Gaussian processes are a powerful framework for quantifying uncertainty and for sequential decision-making but are limited by the requirement of solving linear systems.

Bayesian Optimization Decision Making +1
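
The linear system behind the posterior mean is $(K + \sigma^2 I)v = y$. A minimal sketch of the core idea, replacing a Cholesky solve with gradient steps on an equivalent quadratic objective (full-batch gradient descent on a toy problem for brevity; the paper uses stochastic gradients):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

K = np.exp(-0.5 * (X - X.T) ** 2)        # RBF kernel Gram matrix
A = K + 0.1 * np.eye(50)                 # K + noise-variance * I

# Minimize 0.5 v'Av - v'y, whose unique minimizer solves A v = y.
v = np.zeros(50)
for _ in range(10_000):
    v -= 0.02 * (A @ v - y)              # gradient of the quadratic
print("residual norm:", np.linalg.norm(A @ v - y))
```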

Compression with Bayesian Implicit Neural Representations

1 code implementation NeurIPS 2023 Zongyu Guo, Gergely Flamich, Jiajun He, Zhibo Chen, José Miguel Hernández-Lobato

Many common types of data can be represented as functions that map coordinates to signal values, such as pixel locations to RGB values in the case of an image.

Quantization

Image Reconstruction via Deep Image Prior Subspaces

1 code implementation 20 Feb 2023 Riccardo Barbano, Javier Antorán, Johannes Leuschner, José Miguel Hernández-Lobato, Bangti Jin, Željko Kereta

Deep learning has been widely used for solving image reconstruction tasks but its deployability has been held back due to the shortage of high-quality training data.

Dimensionality Reduction Image Reconstruction +1

Sampling-based inference for large linear models, with application to linearised Laplace

1 code implementation 10 Oct 2022 Javier Antorán, Shreyas Padhy, Riccardo Barbano, Eric Nalisnick, David Janz, José Miguel Hernández-Lobato

Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification; that is, the linearised Laplace method.

Bayesian Inference Uncertainty Quantification

Flow Annealed Importance Sampling Bootstrap

3 code implementations 3 Aug 2022 Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are tractable density models that can approximate complicated target distributions, e.g., Boltzmann distributions of physical systems.

Bayesian Experimental Design for Computed Tomography with the Linearised Deep Image Prior

1 code implementation 11 Jul 2022 Riccardo Barbano, Johannes Leuschner, Javier Antorán, Bangti Jin, José Miguel Hernández-Lobato

We investigate adaptive design based on a single sparse pilot scan for generating effective scanning strategies for computed tomography reconstruction.

Experimental Design

Adapting the Linearised Laplace Model Evidence for Modern Deep Learning

no code implementations 17 Jun 2022 Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato

The linearised Laplace method for estimating model uncertainty has received renewed attention in the Bayesian deep learning community.

Model Selection

Meta-learning Adaptive Deep Kernel Gaussian Processes for Molecular Property Prediction

1 code implementation 5 May 2022 Wenlin Chen, Austin Tripp, José Miguel Hernández-Lobato

We propose Adaptive Deep Kernel Fitting with Implicit Function Theorem (ADKF-IFT), a novel framework for learning deep kernel Gaussian processes (GPs) by interpolating between meta-learning and conventional deep kernel learning.

Bilevel Optimization Drug Discovery +4

Uncertainty Estimation for Computed Tomography with a Linearised Deep Image Prior

2 code implementations 28 Feb 2022 Javier Antorán, Riccardo Barbano, Johannes Leuschner, José Miguel Hernández-Lobato, Bangti Jin

Existing deep-learning based tomographic image reconstruction methods do not provide accurate estimates of reconstruction uncertainty, hindering their real-world deployment.

Image Reconstruction

Missing Data Imputation and Acquisition with Deep Hierarchical Models and Hamiltonian Monte Carlo

1 code implementation 9 Feb 2022 Ignacio Peis, Chao Ma, José Miguel Hernández-Lobato

Our experiments show that HH-VAEM outperforms existing baselines in the tasks of missing data imputation and supervised learning with missing features.

Active Learning Imputation

A Probabilistic Deep Image Prior over Image Space

no code implementations AABI Symposium 2022 Riccardo Barbano, Javier Antoran, José Miguel Hernández-Lobato, Bangti Jin

The deep image prior regularises under-specified image reconstruction problems by reparametrising the target image as the output of a CNN.

Image Reconstruction

Linearised Laplace Inference in Networks with Normalisation Layers and the Neural g-Prior

no code implementations AABI Symposium 2022 Javier Antoran, James Urquhart Allingham, David Janz, Erik Daxberger, Eric Nalisnick, José Miguel Hernández-Lobato

We show that for neural networks (NN) with normalisation layers, i.e., batch norm, layer norm, or group norm, the Laplace model evidence does not approximate the volume of a posterior mode and is thus unsuitable for model selection.

Image Classification Model Selection +1

Bootstrap Your Flow

1 code implementation AABI Symposium 2022 Laurence Illing Midgley, Vincent Stimper, Gregor N. C. Simm, José Miguel Hernández-Lobato

Normalizing flows are flexible, parameterized distributions that can be used to approximate expectations from intractable distributions via importance sampling.

Normalising Flows
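
A minimal sketch of the underlying estimator: draw from a tractable proposal $q$ (the role a trained flow would play; a wide Gaussian stands in here) and reweight, $\mathbb{E}_p[f(x)] \approx \frac{1}{N}\sum_i f(x_i)\, p(x_i)/q(x_i)$. The toy target is Gaussian so the answer is checkable.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 3.0, size=100_000)                   # samples from proposal q
w = normal_pdf(x, 2.0, 0.5) / normal_pdf(x, 0.0, 3.0)    # importance weights p/q
print("E_p[x^2] ~", np.mean(w * x**2), "(exact: 4.25)")
```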

Resampling Base Distributions of Normalizing Flows

1 code implementation 29 Oct 2021 Vincent Stimper, Bernhard Schölkopf, José Miguel Hernández-Lobato

Normalizing flows are a popular class of models for approximating probability distributions.

Ranked #47 on Image Generation on CIFAR-10 (bits/dimension metric)

Density Estimation Image Generation

Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation

1 code implementation ICLR 2022 Ross M. Clarke, Elre T. Oldewage, José Miguel Hernández-Lobato

Machine learning training methods depend plentifully and intricately on hyperparameters, motivating automated strategies for their optimisation.

Action-Sufficient State Representation Learning for Control with Structural Constraints

no code implementations 12 Oct 2021 Biwei Huang, Chaochao Lu, Liu Leqi, José Miguel Hernández-Lobato, Clark Glymour, Bernhard Schölkopf, Kun Zhang

Perceived signals in real-world scenarios are usually high-dimensional and noisy; finding and using a representation that contains the essential, sufficient information required by downstream decision-making tasks helps improve computational efficiency and generalization ability in those tasks.

Computational Efficiency Decision Making +1

A Fresh Look at De Novo Molecular Design Benchmarks

no code implementations NeurIPS Workshop AI4Science 2021 Austin Tripp, Gregor N. C. Simm, José Miguel Hernández-Lobato

De novo molecular design is a thriving research area in machine learning (ML) that lacks ubiquitous, high-quality, standardized benchmark tasks.

Improving black-box optimization in VAE latent space using decoder uncertainty

1 code implementation NeurIPS 2021 Pascal Notin, José Miguel Hernández-Lobato, Yarin Gal

Optimization in the latent space of variational autoencoders is a promising approach to generate high-dimensional discrete objects that maximize an expensive black-box property (e.g., drug-likeness in molecular generation, function approximation with arithmetic expressions).

Contextual HyperNetworks for Novel Feature Adaptation

no code implementations 12 Apr 2021 Angus Lamb, Evgeny Saveliev, Yingzhen Li, Sebastian Tschiatschek, Camilla Longden, Simon Woodhead, José Miguel Hernández-Lobato, Richard E. Turner, Pashmina Cameron, Cheng Zhang

While deep learning has obtained state-of-the-art results in many applications, the adaptation of neural network architectures to incorporate new output features remains a challenge, as neural networks are commonly trained to produce a fixed output dimension.

Few-Shot Learning Imputation +1

Active Slices for Sliced Stein Discrepancy

1 code implementation 5 Feb 2021 Wenbo Gong, Kaibo Zhang, Yingzhen Li, José Miguel Hernández-Lobato

First, we provide theoretical results stating that the requirement of using optimal slicing directions in the kernelized version of SSD can be relaxed, validating the resulting discrepancy with finite random slicing directions.

Invariant Causal Representation Learning

no code implementations 1 Jan 2021 Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf

As an alternative, we propose Invariant Causal Representation Learning (ICRL), a learning paradigm that enables out-of-distribution generalization in the nonlinear setting (i.e., nonlinear representations and nonlinear classifiers).

Out-of-Distribution Generalization Representation Learning

Gradient-based tuning of Hamiltonian Monte Carlo hyperparameters

no code implementations 1 Jan 2021 Andrew Campbell, Wenlong Chen, Vincent Stimper, José Miguel Hernández-Lobato, Yichuan Zhang

Existing approaches for automating this task either optimise a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimize a tractable lower bound that is too loose to be useful in practice.

Barking up the right tree: an approach to search over molecule synthesis DAGs

1 code implementation NeurIPS 2020 John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin H. S. Segler, José Miguel Hernández-Lobato

When designing new molecules with particular properties, it is not only important what to make but crucially how to make it.

Symmetry-Aware Actor-Critic for 3D Molecular Design

1 code implementation ICLR 2021 Gregor N. C. Simm, Robert Pinsler, Gábor Csányi, José Miguel Hernández-Lobato

Automating molecular design using deep reinforcement learning (RL) has the potential to greatly accelerate the search for novel materials.

reinforcement-learning Reinforcement Learning (RL)

Bayesian Deep Learning via Subnetwork Inference

1 code implementation 28 Oct 2020 Erik Daxberger, Eric Nalisnick, James Urquhart Allingham, Javier Antorán, José Miguel Hernández-Lobato

In particular, we implement subnetwork linearized Laplace as a simple, scalable Bayesian deep learning method: We first obtain a MAP estimate of all weights and then infer a full-covariance Gaussian posterior over a subnetwork using the linearized Laplace approximation.

Bayesian Inference
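
A hedged sketch of that two-step recipe on a toy logistic regression, with the full (two-weight) model playing the role of the subnetwork; the paper's subnetwork selection and scaling machinery are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(float)        # toy binary labels

# Step 1: MAP estimate under a unit Gaussian prior (plain gradient descent).
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * (X.T @ (p - y) + w)                      # NLL gradient + prior term

# Step 2: Gaussian posterior N(w_MAP, H^{-1}) from the loss Hessian at w_MAP.
p = 1.0 / (1.0 + np.exp(-X @ w))
H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(2)       # likelihood + prior
cov = np.linalg.inv(H)
print("w_MAP:", w)
print("posterior covariance:\n", cov)
```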

Expressive yet Tractable Bayesian Deep Learning via Subnetwork Inference

no code implementations AABI Symposium 2021 Erik Daxberger, Eric Nalisnick, James Allingham, Javier Antoran, José Miguel Hernández-Lobato

In particular, we develop a practical and scalable Bayesian deep learning method that first trains a point estimate, and then infers a full covariance Gaussian posterior approximation over a subnetwork.

Bayesian Inference

Instructions and Guide for Diagnostic Questions: The NeurIPS 2020 Education Challenge

no code implementations 23 Jul 2020 Zichao Wang, Angus Lamb, Evgeny Saveliev, Pashmina Cameron, Yordan Zaykov, José Miguel Hernández-Lobato, Richard E. Turner, Richard G. Baraniuk, Craig Barton, Simon Peyton Jones, Simon Woodhead, Cheng Zhang

In this competition, participants will focus on the students' answer records to these multiple-choice diagnostic questions, with the aim of 1) accurately predicting which answers the students provide; 2) accurately predicting which questions have high quality; and 3) determining a personalized sequence of questions for each student that best predicts the student's answers.

Misconceptions Multiple-choice

Sliced Kernelized Stein Discrepancy

1 code implementation ICLR 2021 Wenbo Gong, Yingzhen Li, José Miguel Hernández-Lobato

Kernelized Stein discrepancy (KSD), though being extensively used in goodness-of-fit tests and model learning, suffers from the curse-of-dimensionality.

Predictive Complexity Priors

no code implementations 18 Jun 2020 Eric Nalisnick, Jonathan Gordon, José Miguel Hernández-Lobato

For this reason, we propose predictive complexity priors: a functional prior that is defined by comparing the model's predictions to those of a reference model.

Few-Shot Learning

Sample-Efficient Optimization in the Latent Space of Deep Generative Models via Weighted Retraining

1 code implementation NeurIPS 2020 Austin Tripp, Erik Daxberger, José Miguel Hernández-Lobato

We introduce an improved method for efficient black-box optimization, which performs the optimization in the low-dimensional, continuous latent manifold learned by a deep generative model.

Molecular Graph Generation

Depth Uncertainty in Neural Networks

1 code implementation NeurIPS 2020 Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato

Existing methods for estimating uncertainty in deep learning tend to require multiple forward passes, making them unsuitable for applications where computational resources are limited.

Image Classification regression

Variational Depth Search in ResNets

1 code implementation 6 Feb 2020 Javier Antorán, James Urquhart Allingham, José Miguel Hernández-Lobato

One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost.

Neural Architecture Search

Bayesian Variational Autoencoders for Unsupervised Out-of-Distribution Detection

no code implementations 11 Dec 2019 Erik Daxberger, José Miguel Hernández-Lobato

Despite their successes, deep neural networks may make unreliable predictions when faced with test data drawn from a distribution different to that of the training data, constituting a major problem for AI safety.

Out-of-Distribution Detection

Icebreaker: Element-wise Efficient Information Acquisition with a Bayesian Deep Latent Gaussian Model

1 code implementation NeurIPS 2019 Wenbo Gong, Sebastian Tschiatschek, Sebastian Nowozin, Richard E. Turner, José Miguel Hernández-Lobato, Cheng Zhang

In this paper, we address the ice-start problem, i.e., the challenge of deploying machine learning models when only a little or no training data is initially available, and acquiring each feature element of data is associated with costs.

BIG-bench Machine Learning Imputation +1

A Generative Model for Molecular Distance Geometry

1 code implementation ICML 2020 Gregor N. C. Simm, José Miguel Hernández-Lobato

Great computational effort is invested in generating equilibrium states for molecular systems using, for example, Markov chain Monte Carlo.

Refining the variational posterior through iterative optimization

no code implementations 25 Sep 2019 Marton Havasi, Jasper Snoek, Dustin Tran, Jonathan Gordon, José Miguel Hernández-Lobato

Variational inference (VI) is a popular approach for approximate Bayesian inference that is particularly promising for highly parameterized models such as deep neural networks.

Bayesian Inference Variational Inference

Compression without Quantization

no code implementations 25 Sep 2019 Gergely Flamich, Marton Havasi, José Miguel Hernández-Lobato

Standard compression algorithms work by mapping an image to discrete code using an encoder from which the original image can be reconstructed through a decoder.

Image Compression Quantization

Icebreaker: Element-wise Active Information Acquisition with Bayesian Deep Latent Gaussian Model

1 code implementation 13 Aug 2019 Wenbo Gong, Sebastian Tschiatschek, Richard Turner, Sebastian Nowozin, José Miguel Hernández-Lobato, Cheng Zhang

In this paper we introduce the ice-start problem, i.e., the challenge of deploying machine learning models when only a little or no training data is initially available, and acquiring each feature element of data is associated with costs.

Active Learning BIG-bench Machine Learning +2

Bayesian Batch Active Learning as Sparse Subset Approximation

2 code implementations NeurIPS 2019 Robert Pinsler, Jonathan Gordon, Eric Nalisnick, José Miguel Hernández-Lobato

Leveraging the wealth of unlabeled data produced in recent years provides great potential for improving supervised models.

Active Learning

'In-Between' Uncertainty in Bayesian Neural Networks

no code implementations 27 Jun 2019 Andrew Y. K. Foong, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

We describe a limitation in the expressiveness of the predictive uncertainty estimate given by mean-field variational inference (MFVI), a popular approximate inference method for Bayesian neural networks.

Active Learning Bayesian Optimisation +1

A Model to Search for Synthesizable Molecules

1 code implementation NeurIPS 2019 John Bradshaw, Brooks Paige, Matt J. Kusner, Marwin H. S. Segler, José Miguel Hernández-Lobato

Deep generative models are able to suggest new organic molecules by generating strings, trees, and graphs representing their structure.

Retrosynthesis valid

A COLD Approach to Generating Optimal Samples

no code implementations 23 May 2019 Omar Mahmood, José Miguel Hernández-Lobato

Carrying out global optimisation is difficult as optimisers are likely to follow gradients into regions of the latent space that the model has not been exposed to during training; samples generated from these regions are likely to be too dissimilar to the training data to be useful.

Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care

2 code implementations 7 May 2019 Hiske Overweg, Anna-Lena Popkes, Ari Ercole, Yingzhen Li, José Miguel Hernández-Lobato, Yordan Zaykov, Cheng Zhang

However, flexible tools such as artificial neural networks (ANNs) suffer from a lack of interpretability limiting their acceptability to clinicians.

Decision Making feature selection +1

Generating Molecules via Chemical Reactions

no code implementations ICLR Workshop DeepGenStruct 2019 John Bradshaw, Matt J. Kusner, Brooks Paige, Marwin H. S. Segler, José Miguel Hernández-Lobato

We therefore propose a new molecule generation model, mirroring a more realistic real-world process, where reactants are selected and combined to form more complex molecules.

Retrosynthesis valid

Deconfounding Reinforcement Learning in Observational Settings

1 code implementation 26 Dec 2018 Chaochao Lu, Bernhard Schölkopf, José Miguel Hernández-Lobato

Using this benchmark, we demonstrate that the proposed algorithms are superior to traditional RL methods in confounded environments with observational data.

OpenAI Gym reinforcement-learning +1

Dropout as a Structured Shrinkage Prior

1 code implementation 9 Oct 2018 Eric Nalisnick, José Miguel Hernández-Lobato, Padhraic Smyth

We propose a novel framework for understanding multiplicative noise in neural networks, considering continuous distributions as well as Bernoulli noise (i.e., dropout).

Bayesian Inference
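
A quick numerical illustration of the multiplicative-noise view, using the standard correspondence between Bernoulli dropout and Gaussian multiplicative noise of matched mean and variance (a textbook identity, not the paper's shrinkage-prior construction):

```python
import numpy as np

rng = np.random.default_rng(0)
h, p = np.ones(100_000), 0.2                                # activations, drop prob

bern = h * rng.binomial(1, 1 - p, h.shape) / (1 - p)        # inverted dropout
gauss = h * rng.normal(1.0, np.sqrt(p / (1 - p)), h.shape)  # matched Gaussian

print(f"Bernoulli: mean={bern.mean():.3f} var={bern.var():.3f}")
print(f"Gaussian:  mean={gauss.mean():.3f} var={gauss.var():.3f}")
```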

Deterministic Variational Inference for Robust Bayesian Neural Networks

3 code implementations ICLR 2019 Anqi Wu, Sebastian Nowozin, Edward Meeds, Richard E. Turner, José Miguel Hernández-Lobato, Alexander L. Gaunt

We provide two innovations that aim to turn VB into a robust inference tool for Bayesian neural networks: first, we introduce a novel deterministic method to approximate moments in neural networks, eliminating gradient variance; second, we introduce a hierarchical prior for parameters and a novel Empirical Bayes procedure for automatically selecting prior variances.

Variational Inference
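
As a flavour of deterministic moment approximation, the sketch below uses the well-known closed form for the mean of a ReLU output under a Gaussian input and checks it against Monte Carlo; the paper's method propagates such moments through entire networks, so this snippet is illustrative only.

```python
import math
import numpy as np

def relu_mean(mu, sigma):
    # Closed-form E[max(0, x)] for x ~ N(mu, sigma^2).
    a = mu / sigma
    cdf = 0.5 * (1.0 + math.erf(a / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * a * a) / math.sqrt(2.0 * math.pi)
    return mu * cdf + sigma * pdf

samples = np.random.default_rng(0).normal(0.5, 1.0, size=1_000_000)
print("closed form:", relu_mean(0.5, 1.0))
print("Monte Carlo:", np.maximum(samples, 0.0).mean())
```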

Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters

2 code implementations ICLR 2019 Marton Havasi, Robert Peharz, José Miguel Hernández-Lobato

While deep neural networks are a highly successful model class, their large memory footprint puts considerable strain on energy consumption, communication bandwidth, and storage requirements.

Neural Network Compression Quantization

EDDI: Efficient Dynamic Discovery of High-Value Information with Partial VAE

1 code implementation ICLR 2019 Chao Ma, Sebastian Tschiatschek, Konstantina Palla, José Miguel Hernández-Lobato, Sebastian Nowozin, Cheng Zhang

Many real-life decision-making situations allow further relevant information to be acquired at a specific cost, for example, in assessing the health status of a patient we may decide to take additional measurements such as diagnostic tests or imaging scans before making a final assessment.

Decision Making Experimental Design +1

Ergodic Measure Preserving Flows

no code implementations 27 Sep 2018 Yichuan Zhang, José Miguel Hernández-Lobato, Zoubin Ghahramani

Training probabilistic models with neural network components is intractable in most cases and requires the use of approximations such as Markov chain Monte Carlo (MCMC), which is not scalable and requires significant hyper-parameter tuning, or mean-field variational inference (VI), which is biased.

Variational Inference

Meta-Learning for Stochastic Gradient MCMC

1 code implementation ICLR 2019 Wenbo Gong, Yingzhen Li, José Miguel Hernández-Lobato

Stochastic gradient Markov chain Monte Carlo (SG-MCMC) has become increasingly popular for simulating posterior samples in large-scale Bayesian modeling.

Efficient Exploration Meta-Learning +1

Variational Implicit Processes

1 code implementation 6 Jun 2018 Chao Ma, Yingzhen Li, José Miguel Hernández-Lobato

We introduce implicit processes (IPs), stochastic processes that place implicitly defined multivariate distributions over any finite collection of random variables.

Gaussian Processes Stochastic Optimization

Ergodic Inference: Accelerate Convergence by Optimisation

no code implementations 25 May 2018 Yichuan Zhang, José Miguel Hernández-Lobato

In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation.

Computational Efficiency Variational Inference

Taking gradients through experiments: LSTMs and memory proximal policy optimization for black-box quantum control

no code implementations 12 Feb 2018 Moritz August, José Miguel Hernández-Lobato

In this work we introduce black-box quantum control to the machine learning community as an interesting reinforcement learning problem.

Deep Gaussian Processes with Decoupled Inducing Inputs

no code implementations 9 Jan 2018 Marton Havasi, José Miguel Hernández-Lobato, Juan José Murillo-Fuentes

Deep Gaussian Processes (DGP) are hierarchical generalizations of Gaussian Processes (GP) that have proven to work effectively on multiple supervised regression tasks.

Gaussian Processes

Sensitivity Analysis for Predictive Uncertainty in Bayesian Neural Networks

no code implementations 10 Dec 2017 Stefan Depeweg, José Miguel Hernández-Lobato, Steffen Udluft, Thomas Runkler

We derive a novel sensitivity analysis of input variables for predictive epistemic and aleatoric uncertainty.

Learning a Generative Model for Validity in Complex Discrete Structures

1 code implementation ICLR 2018 David Janz, Jos van der Westhuizen, Brooks Paige, Matt J. Kusner, José Miguel Hernández-Lobato

This validator provides insight as to how individual sequence elements influence the validity of the overall sequence, and can be used to constrain sequence based models to generate valid sequences -- and thus faithfully model discrete objects.

valid

Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning

1 code implementation ICML 2018 Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft

Bayesian neural networks with latent variables are scalable and flexible probabilistic models: They account for uncertainty in the estimation of the network weights and, by making use of latent variables, can capture complex noise patterns in the data.

Active Learning Decision Making +2
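
The decomposition in question follows the law of total variance, $\mathrm{Var}[y] = \mathbb{E}_w[\mathrm{Var}[y \mid w]] + \mathrm{Var}_w[\mathbb{E}[y \mid w]]$ (aleatoric plus epistemic); below is a toy numerical check under an assumed linear model, not the paper's BNN setup.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(1.0, 0.3, size=10_000)     # samples from a weight posterior
mean_y = 2.0 * w                          # E[y | w] under a toy linear model
var_y = np.full_like(w, 0.5 ** 2)         # Var[y | w]: fixed observation noise

aleatoric = var_y.mean()                  # E_w[Var[y | w]]
epistemic = mean_y.var()                  # Var_w[E[y | w]]
print(f"aleatoric={aleatoric:.3f}  epistemic={epistemic:.3f}  "
      f"total={aleatoric + epistemic:.3f}")
```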

Constrained Bayesian Optimization for Automatic Chemical Design

1 code implementation 16 Sep 2017 Ryan-Rhys Griffiths, José Miguel Hernández-Lobato

Automatic Chemical Design is a framework for generating novel molecules with optimized properties.

Bayesian Optimization

Bayesian Semisupervised Learning with Deep Generative Models

no code implementations 29 Jun 2017 Jonathan Gordon, José Miguel Hernández-Lobato

However, these techniques a) cannot account for model uncertainty in the estimation of the model's discriminative component and b) lack flexibility to capture complex stochastic patterns in the label generation process.

Active Learning Missing Labels

Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables

no code implementations 26 Jun 2017 Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft

Bayesian neural networks (BNNs) with latent variables are probabilistic models which can automatically identify complex stochastic patterns in the data.

Active Learning reinforcement-learning +2

GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution

no code implementations 12 Nov 2016 Matt J. Kusner, José Miguel Hernández-Lobato

Generative Adversarial Networks (GANs) have limitations when the goal is to generate sequences of discrete elements.

Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

no code implementations ICML 2017 Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck

This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity.

Reinforcement Learning (RL)

Learning and Policy Search in Stochastic Dynamical Systems with Bayesian Neural Networks

2 code implementations 23 May 2016 Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, Steffen Udluft

We present an algorithm for model-based reinforcement learning that combines Bayesian neural networks (BNNs) with random roll-outs and stochastic optimization for policy learning.

Model-based Reinforcement Learning reinforcement-learning +2

Deep Gaussian Processes for Regression using Approximate Expectation Propagation

no code implementations 12 Feb 2016 Thang D. Bui, Daniel Hernández-Lobato, Yingzhen Li, José Miguel Hernández-Lobato, Richard E. Turner

Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.

Gaussian Processes regression

A General Framework for Constrained Bayesian Optimization using Information-based Search

1 code implementation 30 Nov 2015 José Miguel Hernández-Lobato, Michael A. Gelbart, Ryan P. Adams, Matthew W. Hoffman, Zoubin Ghahramani

Of particular interest to us is to efficiently solve problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently.

Bayesian Optimization

Predictive Entropy Search for Multi-objective Bayesian Optimization

no code implementations 17 Nov 2015 Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, Ryan P. Adams

The results show that PESMO produces better recommendations with a smaller number of evaluations of the objectives, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.

Bayesian Optimization

Training Deep Gaussian Processes using Stochastic Expectation Propagation and Probabilistic Backpropagation

no code implementations 11 Nov 2015 Thang D. Bui, José Miguel Hernández-Lobato, Yingzhen Li, Daniel Hernández-Lobato, Richard E. Turner

Deep Gaussian processes (DGPs) are multi-layer hierarchical generalisations of Gaussian processes (GPs) and are formally equivalent to neural networks with multiple, infinitely wide hidden layers.

Gaussian Processes

Black-box $\alpha$-divergence Minimization

3 code implementations 10 Nov 2015 José Miguel Hernández-Lobato, Yingzhen Li, Mark Rowland, Daniel Hernández-Lobato, Thang Bui, Richard E. Turner

Black-box alpha (BB-$\alpha$) is a new approximate inference method based on the minimization of $\alpha$-divergences.

General Classification regression

Scalable Gaussian Process Classification via Expectation Propagation

no code implementations 16 Jul 2015 Daniel Hernández-Lobato, José Miguel Hernández-Lobato

Variational methods have been recently considered for scaling the training process of Gaussian process classifiers to large datasets.

Classification General Classification

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

3 code implementations 18 Feb 2015 José Miguel Hernández-Lobato, Ryan P. Adams

In principle, the Bayesian approach to learning neural networks does not have these problems.

Learning Feature Selection Dependencies in Multi-task Learning

no code implementations NeurIPS 2013 Daniel Hernández-Lobato, José Miguel Hernández-Lobato

Because the process of estimating feature selection dependencies may suffer from over-fitting in the model proposed, additional data from a multi-task learning scenario are considered for induction.

feature selection Multi-Task Learning

Dynamic Covariance Models for Multivariate Financial Time Series

no code implementations 18 May 2013 Yue Wu, José Miguel Hernández-Lobato, Zoubin Ghahramani

The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data.

Time Series Time Series Analysis
