no code implementations • 20 Jan 2023 • Simon Buchholz, Jonas M. Kübler, Bernhard Schölkopf
Here we introduce further bandit models in which we have only limited access to the randomness of the rewards but can still query the arms in superposition.
3 code implementations • 17 Jun 2022 • Jonas M. Kübler, Vincent Stimper, Simon Buchholz, Krikamol Muandet, Bernhard Schölkopf
Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery and for detecting distribution shifts.
1 code implementation • 2 Feb 2022 • Luigi Gresele, Julius von Kügelgen, Jonas M. Kübler, Elke Kirschbaum, Bernhard Schölkopf, Dominik Janzing
We introduce an approach to counterfactual inference based on merging information from multiple datasets.
1 code implementation • 25 Oct 2021 • Sofiene Jerbi, Lukas J. Fiderer, Hendrik Poulsen Nautrup, Jonas M. Kübler, Hans J. Briegel, Vedran Dunjko
In this work, we identify a constructive framework that captures all standard models based on parametrized quantum circuits: that of linear quantum models.
1 code implementation • NeurIPS 2021 • Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf
Quantum computers offer the possibility to efficiently compute inner products of exponentially large density operators that are classically hard to compute.
1 code implementation • 10 Feb 2021 • Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet
The test set is used to simultaneously estimate the expectations and define the basis points, while the training set serves only to select the kernel and is then discarded.
1 code implementation • NeurIPS 2020 • Jonas M. Kübler, Wittawat Jitkrittum, Bernhard Schölkopf, Krikamol Muandet
Modern large-scale kernel-based tests such as maximum mean discrepancy (MMD) and kernelized Stein discrepancy (KSD) optimize kernel hyperparameters on a held-out sample via data splitting to obtain the most powerful test statistics.
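To make the MMD statistic mentioned above concrete, here is a minimal sketch of an unbiased squared-MMD estimate with a fixed Gaussian kernel; the bandwidth, sample sizes, and toy data are illustrative assumptions, not details from the paper (which concerns *selecting* such hyperparameters without sacrificing test data).

```python
import numpy as np

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    sq_dists = (
        np.sum(X**2, axis=1)[:, None]
        + np.sum(Y**2, axis=1)[None, :]
        - 2.0 * X @ Y.T
    )
    return np.exp(-sq_dists / (2.0 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased U-statistic estimate of the squared MMD between samples X and Y."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, bandwidth)
    Kyy = rbf_kernel(Y, Y, bandwidth)
    Kxy = rbf_kernel(X, Y, bandwidth)
    # Exclude diagonal terms so the within-sample averages are unbiased.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))  # P = N(0, I)
Y = rng.normal(1.0, 1.0, size=(200, 2))  # Q = N(1, I): shifted mean
Z = rng.normal(0.0, 1.0, size=(200, 2))  # fresh sample from P

# The statistic is large for the shifted pair, near zero for same-distribution pairs.
print(mmd2_unbiased(X, Y) > mmd2_unbiased(X, Z))
```

A full test would calibrate a rejection threshold, e.g. via a permutation of the pooled sample; the data-splitting issue the paper addresses arises when the bandwidth is additionally tuned on held-out data.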
no code implementations • 31 May 2019 • Jonas M. Kübler, Krikamol Muandet, Bernhard Schölkopf
The kernel mean embedding of probability distributions is commonly used in machine learning as an injective mapping from distributions to functions in an infinite-dimensional Hilbert space.
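For intuition, the empirical version of this embedding maps a sample to the RKHS function mu(x) = (1/n) * sum_i k(x_i, x). The sketch below (a generic illustration with an assumed Gaussian kernel and toy data, not code from the paper) evaluates this function for draws from a standard normal; by symmetry of N(0, 1), the embedding peaks at 0 and takes equal values at +/-1.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Elementwise Gaussian kernel k(x, y) = exp(-(x - y)^2 / (2 h^2))."""
    return np.exp(-((x - y) ** 2) / (2.0 * bandwidth**2))

def empirical_mean_embedding(sample, bandwidth=1.0):
    """Return mu(x) = (1/n) * sum_i k(x_i, x), the empirical kernel mean
    embedding of the sample, as a callable RKHS function."""
    def mu(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        # Broadcast: rows index sample points, columns index query points.
        return gaussian_kernel(sample[:, None], x[None, :], bandwidth).mean(axis=0)
    return mu

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, size=5000)  # draws from N(0, 1)
mu = empirical_mean_embedding(sample)

vals = mu(np.array([-1.0, 0.0, 1.0]))
# Symmetric distribution: embedding is largest at the mean, symmetric about it.
print(vals[1] > vals[0] and vals[1] > vals[2])
```

Injectivity of the population embedding (for characteristic kernels such as the Gaussian) is what lets RKHS distances between embeddings, i.e. the MMD, distinguish distributions.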