no code implementations • 22 Feb 2024 • Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba
In this framework, we propose sDM, a generic Bayesian approach designed for OPE and OPL, grounded in both algorithmic and theoretical foundations.
no code implementations • 8 Feb 2024 • Pierre Marion, Anna Korba, Peter Bartlett, Mathieu Blondel, Valentin De Bortoli, Arnaud Doucet, Felipe Llinares-López, Courtney Paquette, Quentin Berthet
We present a new algorithm to optimize distributions defined implicitly by parameterized stochastic diffusions.
no code implementations • 18 Oct 2023 • Nicolas Chopin, Francesca R. Crucinio, Anna Korba
We establish that tempering SMC corresponds to entropic mirror descent applied to the reverse Kullback-Leibler (KL) divergence and obtain convergence rates for the tempering iterates.
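For context, tempering SMC moves particles from a tractable reference $\pi_0$ toward the target $\pi$ along the path $\pi_\lambda \propto \pi_0^{1-\lambda}\pi^\lambda$, alternating reweighting, resampling, and MCMC moves. A minimal sketch on a toy 1-D Gaussian pair (the schedule, kernel, and targets below are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
log_prior  = lambda z: -0.5 * z**2                     # pi_0 = N(0, 1), unnormalized
log_target = lambda z: -0.5 * (z - 4.0)**2 / 0.25      # pi = N(4, 0.5^2), unnormalized

n = 2000
x = rng.normal(size=n)                                 # particles drawn from the prior
lambdas = np.linspace(0.0, 1.0, 21)                    # linear tempering schedule
for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    # reweight by the incremental tempered density pi_lam / pi_lam_prev
    logw = (lam - lam_prev) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = rng.choice(x, size=n, p=w)                     # multinomial resampling
    # one random-walk Metropolis move targeting pi_lam ∝ pi_0^(1-lam) * pi^lam
    log_pi = lambda z: (1 - lam) * log_prior(z) + lam * log_target(z)
    prop = x + 0.5 * rng.normal(size=n)
    accept = np.log(rng.uniform(size=n)) < log_pi(prop) - log_pi(x)
    x = np.where(accept, prop, x)
print(x.mean(), x.std())                               # approaches 4.0 and 0.5
```

At $\lambda=1$ the particles approximate the target; the entropic-mirror-descent view of the paper concerns exactly this sequence of tempered reweighting steps.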
no code implementations • 25 May 2023 • Imad Aouali, Victor-Emmanuel Brunel, David Rohde, Anna Korba
In particular, it is also valid for standard IPS without making the assumption that the importance weights are bounded.
2 code implementations • 24 Oct 2022 • Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon
These energies rely on mollifier functions -- smooth approximations of the Dirac delta originating from PDE theory.
1 code implementation • 8 Jul 2022 • Tom Huix, Szymon Majewski, Alain Durmus, Eric Moulines, Anna Korba
This paper studies Variational Inference (VI) for training Bayesian Neural Networks (BNNs) in the overparameterized regime, i.e., when the number of neurons tends to infinity.
no code implementations • 17 Jun 2022 • Pierre-Cyril Aubin-Frankowski, Anna Korba, Flavien Léger
We also show that Expectation Maximization (EM) can always formally be written as a mirror descent.
no code implementations • 29 Oct 2021 • Anna Korba, François Portier
Adaptive importance sampling is a widespread Monte Carlo technique that uses a re-weighting strategy to iteratively estimate the so-called target distribution.
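A minimal sketch of the re-weighting idea: self-normalized importance weights are used to moment-match a Gaussian proposal to a toy unnormalized target over several iterations (the target, proposal family, and sample sizes below are illustrative, not the paper's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
# toy unnormalized target: N((3, 3), I)
log_target = lambda x: -0.5 * np.sum((x - 3.0)**2, axis=1)

mu, sigma = np.zeros(2), 3.0               # initial isotropic Gaussian proposal
for t in range(20):
    x = rng.normal(mu, sigma, size=(500, 2))
    log_q = -0.5 * np.sum((x - mu)**2, axis=1) / sigma**2 - 2 * np.log(sigma)
    w = np.exp(log_target(x) - log_q)
    w /= w.sum()                           # self-normalized importance weights
    mu = w @ x                             # re-weighted mean: adapt the proposal
    sigma = np.sqrt((w @ np.sum((x - mu)**2, axis=1)) / 2)
print(mu, sigma)                           # proposal adapts toward (3, 3) and scale 1
```

Normalizing constants cancel in the self-normalized weights, so only the unnormalized target is needed.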
2 code implementations • 20 May 2021 • Anna Korba, Pierre-Cyril Aubin-Frankowski, Szymon Majewski, Pierre Ablin
We investigate the properties of its Wasserstein gradient flow to approximate a target probability distribution $\pi$ on $\mathbb{R}^d$, known up to a normalization constant.
2 code implementations • 10 May 2021 • Afsaneh Mastouri, Yuchen Zhu, Limor Gultchin, Anna Korba, Ricardo Silva, Matt J. Kusner, Arthur Gretton, Krikamol Muandet
In particular, we provide a unifying view of two-stage and moment restriction approaches for solving this problem in a nonlinear setting.
no code implementations • NeurIPS 2020 • Anna Korba, Adil Salim, Michael Arbel, Giulia Luise, Arthur Gretton
We study the Stein Variational Gradient Descent (SVGD) algorithm, which optimises a set of particles to approximate a target probability distribution $\pi\propto e^{-V}$ on $\mathbb{R}^d$.
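For reference, the standard SVGD update (Liu and Wang, 2016), which this paper analyzes, moves each particle along a kernel-smoothed gradient of $\log\pi$ plus a repulsive term. A minimal sketch with an RBF kernel of fixed bandwidth (the target and hyperparameters are illustrative):

```python
import numpy as np

def svgd_step(particles, score, h=1.0, step=0.1):
    """One SVGD update with an RBF kernel of bandwidth h."""
    n, d = particles.shape
    diffs = particles[:, None, :] - particles[None, :, :]   # x_j - x_i, all pairs
    K = np.exp(-np.sum(diffs**2, axis=-1) / (2 * h))        # k(x_j, x_i)
    grad_K = -diffs / h * K[:, :, None]                     # grad w.r.t. x_j of k(x_j, x_i)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) score(x_j) + grad_{x_j} k(x_j, x_i) ]
    phi = (K.T @ score(particles) + grad_K.sum(axis=0)) / n
    return particles + step * phi

# Target: standard Gaussian, pi ∝ exp(-||x||^2 / 2), so the score is -x.
rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, size=(200, 2))      # particles initialized far from the target
for _ in range(500):
    x = svgd_step(x, score=lambda p: -p)
print(x.mean(axis=0))                       # drifts toward the target mean (0, 0)
```

The first term in `phi` drives particles toward high-density regions; the second term is a repulsive force that keeps them spread out.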
no code implementations • NeurIPS 2020 • Adil Salim, Anna Korba, Giulia Luise
Using techniques from convex optimization and optimal transport, we analyze the FB scheme as a minimization algorithm on the Wasserstein space.
1 code implementation • NeurIPS 2019 • Michael Arbel, Anna Korba, Adil Salim, Arthur Gretton
We construct a Wasserstein gradient flow of the maximum mean discrepancy (MMD) and study its convergence properties.
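A minimal sketch of a discretized flow in this spirit: particles descend the squared MMD to a fixed sample from the target, with attraction toward the target samples and repulsion among the particles. The RBF kernel, bandwidth, and step size are illustrative simplifications, not the paper's exact setting:

```python
import numpy as np

def grad_k(a, b, h=2.0):
    """Gradient w.r.t. a_i of the RBF kernel k(a_i, b_j), for all pairs (i, j)."""
    d = a[:, None, :] - b[None, :, :]
    k = np.exp(-np.sum(d**2, axis=-1) / (2 * h))
    return -d / h * k[:, :, None]

rng = np.random.default_rng(1)
target = rng.normal(size=(300, 2))          # fixed sample from the target distribution
x = rng.normal(loc=2.0, size=(100, 2))      # particles, initialized off-target

for _ in range(1000):
    # Wasserstein gradient of MMD^2: particle-particle term (repulsion)
    # minus particle-target term (attraction)
    v = grad_k(x, x).mean(axis=1) - grad_k(x, target).mean(axis=1)
    x -= 0.5 * v
print(np.linalg.norm(x.mean(axis=0)))       # particle mean approaches the target mean
```

With a fixed kernel the flow can stall when particles start too far from the target support, which is one motivation for the convergence analysis and the modifications studied in the paper.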
1 code implementation • 15 Oct 2018 • Mastane Achab, Anna Korba, Stephan Clémençon
Whereas most dimensionality reduction techniques for multivariate data (e.g., PCA, ICA, NMF) essentially rely on linear algebra to a certain extent, summarizing ranking data, viewed as realizations of a random permutation $\Sigma$ on a set of items indexed by $i\in \{1,\ldots, n\}$, is a major statistical challenge, owing to the absence of a vector space structure on the set of permutations $\mathfrak{S}_n$.
1 code implementation • NeurIPS 2018 • Anna Korba, Alexandre Garcia, Florence d'Alché Buc
We propose to solve a label ranking problem as a structured output regression task.
no code implementations • 31 Oct 2017 • Stephan Clémençon, Anna Korba, Eric Sibony
In the probabilistic formulation of the 'Learning to Order' problem we propose, which extends the framework for statistical Kemeny ranking aggregation developed in \citet{CKS17}, this boils down to recovering conditional Kemeny medians of $\Sigma$ given $X$ from i.i.d.