no code implementations • 25 Mar 2024 • Borja Rodríguez-Gálvez, Omar Rivasplata, Ragnar Thobaben, Mikael Skoglund
Moreover, the paper derives a high-probability PAC-Bayes bound for losses with a bounded variance.
no code implementations • 10 Dec 2023 • Sokhna Diarra Mbacke, Omar Rivasplata
Diffusion models are one of the most important families of deep generative models.
no code implementations • 15 Sep 2022 • Gholamali Aminian, Armin Behnamnia, Roberto Vega, Laura Toni, Chengchun Shi, Hamid R. Rabiee, Omar Rivasplata, Miguel R. D. Rodrigues
We propose learning methods for problems where feedback is missing for some samples, so that the logged data contain both samples with feedback and samples with missing feedback.
no code implementations • 15 Nov 2021 • Maria Perez-Ortiz, Omar Rivasplata, Emilio Parrado-Hernandez, Benjamin Guedj, John Shawe-Taylor
We then show that in data-starvation regimes, holding out data to compute test-set bounds adversely affects generalisation performance, while self-certified strategies based on PAC-Bayes bounds do not suffer from this drawback, suggesting that they may be a suitable choice for the small-data regime.
no code implementations • 21 Sep 2021 • Maria Perez-Ortiz, Omar Rivasplata, Benjamin Guedj, Matthew Gleeson, Jingyu Zhang, John Shawe-Taylor, Miroslaw Bober, Josef Kittler
We experiment on 6 datasets with different strategies and amounts of data for learning data-dependent PAC-Bayes priors, and we compare these strategies in terms of their effect on the test performance of the learnt predictors and the tightness of their risk certificates.
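As a rough illustration of what a data-dependent prior strategy can look like (a minimal sketch only, assuming the common practice of learning the prior on one part of the training data and evaluating the certificate on the rest; the split fraction and the train_prior / train_posterior / evaluate_bound routines are hypothetical placeholders, not the paper's exact protocol):

```python
# Sketch: carve the training set into a prior-building part and a
# certificate (bound-evaluation) part, so the prior is data-dependent
# yet the bound is evaluated on data the prior has not seen.
import numpy as np

def split_for_prior(X, y, prior_fraction=0.5, seed=0):
    """Return (prior split, certificate split) of the training data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(prior_fraction * len(X))
    prior_idx, cert_idx = idx[:cut], idx[cut:]
    return (X[prior_idx], y[prior_idx]), (X[cert_idx], y[cert_idx])

# Hypothetical usage:
# (Xp, yp), (Xc, yc) = split_for_prior(X_train, y_train)
# prior = train_prior(Xp, yp)                         # e.g. ERM on the prior split
# posterior = train_posterior(X_train, y_train, prior)
# certificate = evaluate_bound(posterior, prior, Xc, yc)  # data unseen by the prior
```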
no code implementations • NeurIPS 2021 • Ilja Kuzborskij, Csaba Szepesvári, Omar Rivasplata, Amal Rannen-Triki, Razvan Pascanu
Empirically it has been observed that the performance of deep neural networks steadily improves as we increase model size, contradicting the classical view on overfitting and generalization.
no code implementations • 12 Jan 2021 • Omar Rivasplata
In an interesting recent work, Kuzborskij and Szepesvári derived a confidence bound for functions of independent random variables, which is based on an inequality that relates concentration to squared perturbations of the chosen function.
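For background only (this is the classical Efron–Stein inequality, not the bound discussed in the paper), squared perturbations of a function of independent random variables already control its fluctuations at the level of the variance:

\[
\operatorname{Var}\big(f(X_1,\dots,X_n)\big) \;\le\; \frac{1}{2}\sum_{i=1}^{n} \mathbb{E}\Big[\big(f(X_1,\dots,X_n) - f(X_1,\dots,X_i',\dots,X_n)\big)^2\Big],
\]

where \(X_i'\) is an independent copy of \(X_i\).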
1 code implementation • 25 Jul 2020 • María Pérez-Ortiz, Omar Rivasplata, John Shawe-Taylor, Csaba Szepesvári
In the context of probabilistic neural networks, the output of training is a probability distribution over network weights.
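A minimal sketch of what "a distribution over network weights as the output of training" can mean in code, assuming PyTorch and a Gaussian weight distribution (the layer below is illustrative, not the paper's architecture):

```python
# A "probabilistic" linear layer: the trainable parameters are the mean
# and scale of a Gaussian over the weights, and each forward pass draws
# a fresh weight sample via the reparameterisation trick.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight_mu = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.weight_rho = nn.Parameter(-5.0 * torch.ones(out_features, in_features))

    def forward(self, x):
        sigma = F.softplus(self.weight_rho)      # positive standard deviation
        eps = torch.randn_like(sigma)            # noise for reparameterisation
        weight = self.weight_mu + sigma * eps    # sampled weights this pass
        return F.linear(x, weight)
```

Training such a layer updates the parameters of the weight distribution, so what the optimiser returns is a distribution rather than a single weight vector.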
no code implementations • NeurIPS 2020 • Omar Rivasplata, Ilja Kuzborskij, Csaba Szepesvari, John Shawe-Taylor
Specifically, we present a basic PAC-Bayes inequality for stochastic kernels, from which one may derive extensions of various known PAC-Bayes bounds as well as novel bounds.
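For a concrete reference point (a classical special case of the kind of bound meant here, not the stochastic-kernel result itself), the PAC-Bayes-kl bound of Langford and Seeger states that, with probability at least \(1-\delta\) over an i.i.d. sample of size \(n\), simultaneously for all posteriors \(Q\),

\[
\mathrm{kl}\big(\hat{L}(Q)\,\|\,L(Q)\big) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n},
\]

where \(\hat{L}(Q)\) and \(L(Q)\) are the \(Q\)-averaged empirical and population risks, \(P\) is a prior fixed before seeing the data, and \(\mathrm{kl}\) is the binary relative entropy.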
no code implementations • NeurIPS 2020 • Laurent Orseau, Marcus Hutter, Omar Rivasplata
The Lottery Ticket Hypothesis is a conjecture that every large neural network contains a subnetwork that, when trained in isolation, achieves comparable performance to the large network.
no code implementations • 12 Jun 2020 • Maxime Haddouche, Benjamin Guedj, Omar Rivasplata, John Shawe-Taylor
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions.
no code implementations • 19 Aug 2019 • Omar Rivasplata, Vikram M Tankasali, Csaba Szepesvari
We explore the family of methods "PAC-Bayes with Backprop" (PBB) to train probabilistic neural networks by minimizing PAC-Bayes bounds.
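To convey the general idea of training by bound minimisation (a simplified sketch under assumptions, not the exact PBB objective, which inverts tighter kl-form bounds), one can add a KL complexity term to the empirical risk in the training loss; the model and its kl_to_prior() method below are hypothetical placeholders:

```python
# McAllester-flavoured surrogate objective:
#   empirical risk + sqrt( (KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n) )
# Minimising it trades data fit against distance of the weight
# posterior from the prior, mirroring the structure of PAC-Bayes bounds.
import math
import torch
import torch.nn.functional as F

def pac_bayes_surrogate(model, x, y, n, delta=0.05):
    logits = model(x)                          # forward pass with sampled weights
    emp_risk = F.cross_entropy(logits, y)      # bounded surrogate for the 0-1 risk
    kl = model.kl_to_prior()                   # KL(posterior || prior), placeholder method
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return emp_risk + complexity

# Hypothetical training step:
# loss = pac_bayes_surrogate(model, x_batch, y_batch, n=len(dataset))
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```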
no code implementations • NeurIPS 2018 • Omar Rivasplata, Emilio Parrado-Hernandez, John Shawe-Taylor, Shiliang Sun, Csaba Szepesvari
Our main result estimates the risk of the randomized algorithm in terms of the hypothesis stability coefficients.
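For orientation (following the classical definition of Bousquet and Elisseeff, which may differ in detail from the coefficients used in the paper), hypothesis stability asks that removing one training example changes the loss only slightly on average: an algorithm \(A\) has hypothesis stability \(\beta\) if, for every index \(i\),

\[
\mathbb{E}_{S,z}\Big[\,\big|\ell\big(A(S), z\big) - \ell\big(A(S^{\setminus i}), z\big)\big|\,\Big] \;\le\; \beta,
\]

where \(S^{\setminus i}\) denotes the sample \(S\) with the \(i\)-th example removed.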