Search Results for author: Omar Rivasplata

Found 13 papers, 1 paper with code

A Note on the Convergence of Denoising Diffusion Probabilistic Models

no code implementations · 10 Dec 2023 · Sokhna Diarra Mbacke, Omar Rivasplata

Diffusion models are one of the most important families of deep generative models.


Semi-supervised Batch Learning From Logged Data

no code implementations · 15 Sep 2022 · Gholamali Aminian, Armin Behnamnia, Roberto Vega, Laura Toni, Chengchun Shi, Hamid R. Rabiee, Omar Rivasplata, Miguel R. D. Rodrigues

We propose learning methods for problems where feedback is missing for some samples: the logged data contain both samples with observed feedback and samples whose feedback is missing.
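As a toy illustration of this logged-data setting (not the paper's method), here is how such data might look, together with the classical inverse-propensity-scoring estimate computed only on the samples whose feedback was observed; all field names and numbers below are hypothetical:

# hypothetical logged dataset: context x, logged action a, logging propensity p0,
# and feedback r, which is missing (None) for some samples
logged = [
    {"x": [0.1, 0.9], "a": 1, "p0": 0.7, "r": 1.0},
    {"x": [0.4, 0.2], "a": 0, "p0": 0.3, "r": None},  # missing feedback
    {"x": [0.8, 0.5], "a": 1, "p0": 0.5, "r": 0.0},
    {"x": [0.3, 0.3], "a": 1, "p0": 0.6, "r": None},  # missing feedback
]

def ips_value(policy_prob, data):
    # classical inverse-propensity-scoring estimate of a target policy's value,
    # computed only on the subset of samples whose feedback was logged
    observed = [d for d in data if d["r"] is not None]
    return sum(policy_prob(d["x"], d["a"]) / d["p0"] * d["r"] for d in observed) / len(observed)

print(ips_value(lambda x, a: 0.5, logged))  # uniform target policy over 2 actions

The semi-supervised question the paper addresses is precisely what a plain estimator like this ignores: how to also exploit the missing-feedback samples.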


Progress in Self-Certified Neural Networks

no code implementations · 15 Nov 2021 · Maria Perez-Ortiz, Omar Rivasplata, Emilio Parrado-Hernandez, Benjamin Guedj, John Shawe-Taylor

We then show that in data-starvation regimes, holding out data to compute test-set bounds adversely affects generalisation performance, while self-certified strategies based on PAC-Bayes bounds do not suffer from this drawback, suggesting that they may be a suitable choice for the small-data regime.


Learning PAC-Bayes Priors for Probabilistic Neural Networks

no code implementations · 21 Sep 2021 · Maria Perez-Ortiz, Omar Rivasplata, Benjamin Guedj, Matthew Gleeson, Jingyu Zhang, John Shawe-Taylor, Miroslaw Bober, Josef Kittler

We experiment on 6 datasets with different strategies and amounts of data to learn data-dependent PAC-Bayes priors, and we compare them in terms of their effect on the test performance of the learnt predictors and the tightness of the resulting risk certificates.

On the Role of Optimization in Double Descent: A Least Squares Study

no code implementations · NeurIPS 2021 · Ilja Kuzborskij, Csaba Szepesvári, Omar Rivasplata, Amal Rannen-Triki, Razvan Pascanu

Empirically it has been observed that the performance of deep neural networks steadily improves as we increase model size, contradicting the classical view on overfitting and generalization.

A note on a confidence bound of Kuzborskij and Szepesvári

no code implementations · 12 Jan 2021 · Omar Rivasplata

In an interesting recent work, Kuzborskij and Szepesvári derived a confidence bound for functions of independent random variables, which is based on an inequality that relates concentration to squared perturbations of the chosen function.

Tighter risk certificates for neural networks

1 code implementation · 25 Jul 2020 · María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, Csaba Szepesvári

In the context of probabilistic neural networks, the output of training is a probability distribution over network weights.
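To make the notion of a risk certificate concrete: a common recipe in this line of work (the exact constants may differ from the paper's) inverts the binary KL divergence, taking the certificate to be the largest true risk consistent with the empirical error of the stochastic predictor and a PAC-Bayes complexity term. A minimal sketch with hypothetical numbers:

import math

def kl_bin(q, p):
    # binary KL divergence kl(q || p)
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse(q_hat, bound, tol=1e-9):
    # largest p in [q_hat, 1) with kl(q_hat || p) <= bound, found by bisection
    lo, hi = q_hat, 1.0 - 1e-12
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bin(q_hat, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo

# hypothetical values: empirical error 0.02, KL(Q||P) = 5000, n = 60000, delta = 0.025
n, delta, kl_qp, emp = 60000, 0.025, 5000.0, 0.02
certificate = kl_inverse(emp, (kl_qp + math.log(2 * math.sqrt(n) / delta)) / n)
print(f"risk certificate: {certificate:.4f}")  # upper bound on true risk, w.p. >= 1 - delta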


PAC-Bayes Analysis Beyond the Usual Bounds

no code implementations · NeurIPS 2020 · Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvari, John Shawe-Taylor

Specifically, we present a basic PAC-Bayes inequality for stochastic kernels, from which one may derive extensions of various known PAC-Bayes bounds as well as novel bounds.
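For reference, one of the known bounds such a result specialises to is the classical PAC-Bayes-kl inequality (Langford-Seeger/Maurer form, stated here from the standard literature rather than from the paper itself): for a fixed prior $P$ over hypotheses and confidence $\delta \in (0,1)$, with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $Q$,

\[
\mathrm{kl}\!\big(\hat{R}_n(Q) \,\big\|\, R(Q)\big) \;\le\; \frac{\mathrm{KL}(Q\|P) + \ln(2\sqrt{n}/\delta)}{n},
\]

where $\hat{R}_n(Q)$ and $R(Q)$ are the empirical and population risks of the Gibbs predictor and $\mathrm{kl}$ is the binary KL divergence.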


Logarithmic Pruning is All You Need

no code implementations · NeurIPS 2020 · Laurent Orseau, Marcus Hutter, Omar Rivasplata

The Lottery Ticket Hypothesis is a conjecture that every large neural network contains a subnetwork that, when trained in isolation, achieves comparable performance to the large network.

PAC-Bayes unleashed: generalisation bounds with unbounded losses

no code implementations · 12 Jun 2020 · Maxime Haddouche, Benjamin Guedj, Omar Rivasplata, John Shawe-Taylor

We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions.


PAC-Bayes with Backprop

no code implementations · 19 Aug 2019 · Omar Rivasplata, Vikram M Tankasali, Csaba Szepesvari

We explore the family of methods "PAC-Bayes with Backprop" (PBB) to train probabilistic neural networks by minimizing PAC-Bayes bounds.
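A minimal sketch of the PBB idea, assuming diagonal Gaussian weight distributions trained with the reparameterisation trick; the layer name, the prior centred at initialisation, and the McAllester-style penalty below are illustrative assumptions, not the paper's exact objectives:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbLinear(nn.Module):
    # linear layer whose weights follow Q = N(mu, sigma^2), learnt by backprop
    def __init__(self, in_f, out_f, prior_sigma=0.1):
        super().__init__()
        self.w_mu = nn.Parameter(0.1 * torch.randn(out_f, in_f))
        self.w_rho = nn.Parameter(torch.full((out_f, in_f), -5.0))  # sigma = softplus(rho)
        self.register_buffer("prior_mu", self.w_mu.detach().clone())  # prior centred at init
        self.prior_sigma = prior_sigma

    def forward(self, x):
        sigma = F.softplus(self.w_rho)
        w = self.w_mu + sigma * torch.randn_like(sigma)  # reparameterisation trick
        return F.linear(x, w)

    def kl(self):
        # closed-form KL(Q || P) for diagonal Gaussians
        sigma = F.softplus(self.w_rho)
        return (torch.log(self.prior_sigma / sigma)
                + (sigma ** 2 + (self.w_mu - self.prior_mu) ** 2) / (2 * self.prior_sigma ** 2)
                - 0.5).sum()

def pbb_objective(model, x, y, n, delta=0.025):
    # training objective: empirical loss plus a McAllester-style complexity penalty
    emp = F.cross_entropy(model(x), y)
    kl = sum(m.kl() for m in model.modules() if isinstance(m, ProbLinear))
    return emp + torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))

model = nn.Sequential(ProbLinear(784, 100), nn.ReLU(), ProbLinear(100, 10))

Minimising this objective with a standard optimiser trains the weight means and variances jointly; at test time one either samples weights from the learnt distribution or predicts with the means.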
