1 code implementation • 26 Feb 2024 • Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon
Specifically, when updating the parameter matrices of a neural network by adding a product $BA$, we observe that the $B$ and $A$ matrices have distinct functions: $A$ extracts features from the input, while $B$ uses these features to create the desired output.
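To make the role asymmetry concrete, here is a minimal sketch of a LoRA-style linear layer in PyTorch; the class name, rank, and scaling convention are illustrative choices, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style layer: y = Wx + (alpha / r) * B(Ax).

    A (r x d_in) extracts low-rank features from the input;
    B (d_out x r) maps those features to the output space.
    """
    def __init__(self, d_in, d_out, r=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out, bias=False)
        self.base.weight.requires_grad_(False)               # frozen pretrained weight
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # feature extractor
        self.B = nn.Parameter(torch.zeros(d_out, r))         # output map (init to zero)
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(64, 32)
print(layer(torch.randn(4, 64)).shape)  # torch.Size([4, 32])
```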
no code implementations • 3 Oct 2023 • Alain Rakotomamonjy, Kimia Nadjahi, Liva Ralaivola
We introduce a principled way of computing the Wasserstein distance between two distributions in a federated manner.
no code implementations • 12 Jun 2023 • Thibault Séjourné, Clément Bonet, Kilian Fatras, Kimia Nadjahi, Nicolas Courty
In parallel, unbalanced OT was designed to compare more general positive measures while being more robust to outliers.
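As a sketch of the mechanism, the following implements entropic unbalanced OT with KL marginal relaxation via Sinkhorn-like scaling iterations (the standard recipe of Chizat et al.); the parameter values and the toy outlier example are illustrative, and this is not the sliced variant studied in the paper.

```python
import numpy as np

def sinkhorn_unbalanced(a, b, M, reg=1.0, reg_m=1.0, n_iter=500):
    """Entropic unbalanced OT via Sinkhorn-like scaling iterations.

    KL penalties on both marginals (strength reg_m) replace the hard
    marginal constraints of balanced OT, so a and b may carry different
    total masses and outliers can simply be left untransported.
    """
    K = np.exp(-M / reg)
    fi = reg_m / (reg_m + reg)   # exponent induced by the KL relaxation
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi
        v = (b / (K.T @ u)) ** fi
    P = u[:, None] * K * v[None, :]  # transport plan
    return P, float((P * M).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 2))
y = np.vstack([rng.normal(loc=1.0, size=(29, 2)), [[8.0, 8.0]]])  # one outlier
M = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
a = b = np.full(30, 1 / 30)
P, cost = sinkhorn_unbalanced(a, b, M)
print(P.sum(axis=0)[-1], b[-1])  # little mass is sent to the outlier
```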
1 code implementation • 7 Jun 2022 • Ruben Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola
The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance.
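Since SW averages closed-form one-dimensional Wasserstein distances over random projections, a Monte Carlo estimator fits in a few lines. This sketch assumes equal sample sizes, so sorting gives the optimal 1-D coupling; function and parameter names are illustrative.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_proj=200, p=2, seed=None):
    """Monte Carlo estimate of SW_p between two equal-size samples.

    Each random direction yields a 1-D transport problem whose optimal
    coupling is obtained by sorting, so the cost is O(n log n) per
    projection instead of solving a full d-dimensional OT problem.
    """
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n_proj, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)  # unit directions
    Xp = np.sort(X @ theta.T, axis=0)   # sorted 1-D projections
    Yp = np.sort(Y @ theta.T, axis=0)
    return np.mean(np.abs(Xp - Yp) ** p) ** (1 / p)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
Y = rng.normal(loc=1.0, size=(500, 3))
print(sliced_wasserstein(X, Y, seed=1))
```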
2 code implementations • NeurIPS 2021 • Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Şimşekli
The Sliced-Wasserstein distance (SW) is increasingly used in machine learning applications as an alternative to the Wasserstein distance, offering significant computational and statistical benefits.
1 code implementation • NeurIPS 2020 • Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Şimşekli
Slicing divergences consists in computing the expected value of a `base divergence' between one-dimensional random projections of two probability measures; the idea has proven successful in various machine learning applications, including generative modeling.
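The recipe is generic in the base divergence. Below is a minimal sketch that averages any user-supplied one-dimensional discrepancy over random projections; the `mean_gap` base divergence is a toy stand-in for illustration, not one analyzed in the paper.

```python
import numpy as np

def sliced_divergence(X, Y, base_div, n_proj=50, seed=None):
    """Average a 1-D `base_div` over random unit-vector projections."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(X.shape[1])
        theta /= np.linalg.norm(theta)
        total += base_div(X @ theta, Y @ theta)  # compare 1-D projections
    return total / n_proj

# Toy base divergence (squared mean gap); any 1-D discrepancy can be used.
mean_gap = lambda x, y: (x.mean() - y.mean()) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = rng.normal(loc=0.5, size=(200, 5))
print(sliced_divergence(X, Y, mean_gap, seed=1))
```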
no code implementations • 28 Feb 2020 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Shahin Shahrampour
Probability metrics have become an indispensable part of modern statistics and machine learning, playing a central role in applications ranging from statistical hypothesis testing to generative modeling.
1 code implementation • 28 Oct 2019 • Kimia Nadjahi, Valentin De Bortoli, Alain Durmus, Roland Badeau, Umut Şimşekli
Approximate Bayesian Computation (ABC) is a popular method for approximate inference in generative models with intractable but easy-to-sample likelihood.
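For reference, a vanilla rejection-ABC loop follows; the helper names and the Gaussian toy model are illustrative, and other discrepancies (such as the sliced-Wasserstein distance this line of work considers) could be plugged in as `distance`.

```python
import numpy as np

def rejection_abc(observed, simulate, prior_sample, distance,
                  eps=0.1, n_draws=10_000, seed=None):
    """Vanilla rejection ABC: keep prior draws whose simulated data sets
    fall within `eps` of the observed data under `distance`."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)   # draw a parameter from the prior
        x = simulate(theta, rng)    # simulate data (likelihood-free)
        if distance(x, observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy problem: infer the mean of a Gaussian with known unit variance.
obs = np.random.default_rng(0).normal(loc=2.0, size=100)
post = rejection_abc(
    observed=obs,
    simulate=lambda th, rng: rng.normal(loc=th, size=100),
    prior_sample=lambda rng: rng.uniform(-5, 5),
    distance=lambda x, y: abs(x.mean() - y.mean()),  # summary-statistic gap
    seed=1,
)
print(post.mean(), len(post))  # accepted draws concentrate near 2.0
```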
2 code implementations • 11 Jul 2019 • Kimia Nadjahi, Romain Laroche, Rémi Tachet des Combes
Batch Reinforcement Learning (Batch RL) consists in training a policy using trajectories collected with another policy, called the behavioural policy.
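As one standard batch RL baseline (not necessarily this paper's method), tabular fitted Q-iteration repeatedly applies an empirical Bellman update using only the logged transitions; the toy batch below is purely illustrative.

```python
import numpy as np

def fitted_q_iteration(batch, n_states, n_actions, gamma=0.95, n_iters=200):
    """Tabular fitted Q-iteration on a fixed batch of logged transitions.

    `batch` holds (s, a, r, s_next, done) tuples gathered by the
    behavioural policy; no further environment interaction is needed.
    """
    Q = np.zeros((n_states, n_actions))
    for _ in range(n_iters):
        targets = {}
        for s, a, r, s_next, done in batch:
            y = r if done else r + gamma * Q[s_next].max()  # Bellman target
            targets.setdefault((s, a), []).append(y)
        for (s, a), ys in targets.items():
            Q[s, a] = np.mean(ys)  # regress Q onto the averaged targets
    return Q

# Tiny logged batch from a 2-state, 2-action chain
batch = [(0, 1, 1.0, 1, False), (1, 0, 0.0, 0, False), (1, 1, 2.0, 1, True)]
print(fitted_q_iteration(batch, n_states=2, n_actions=2))
```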
1 code implementation • NeurIPS 2019 • Kimia Nadjahi, Alain Durmus, Umut Şimşekli, Roland Badeau
Minimum expected distance estimation (MEDE) algorithms are widely used for probabilistic models with intractable likelihood functions, and they have become increasingly popular through their use in implicit generative modeling (e.g., Wasserstein generative adversarial networks, Wasserstein autoencoders).
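A minimal MEDE sketch follows: the location of an implicit Gaussian generator is fitted by gradient descent on a differentiable sliced-Wasserstein estimate. The choice of SW as the distance, the toy model, and the hyperparameters are illustrative assumptions.

```python
import torch

def sw2(X, Y, n_proj=100):
    """Differentiable Monte Carlo estimate of SW_2^2 (equal sample sizes)."""
    theta = torch.randn(n_proj, X.shape[1])
    theta = theta / theta.norm(dim=1, keepdim=True)
    Xp, _ = torch.sort(X @ theta.T, dim=0)   # sorting solves the 1-D OT
    Yp, _ = torch.sort(Y @ theta.T, dim=0)
    return ((Xp - Yp) ** 2).mean()

# Toy MEDE: recover the location of an implicit Gaussian generator.
data = torch.randn(500, 2) + torch.tensor([2.0, -1.0])
mu = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([mu], lr=0.05)
for _ in range(300):
    samples = torch.randn(500, 2) + mu   # sample from the model
    loss = sw2(samples, data)            # estimated expected distance
    opt.zero_grad()
    loss.backward()
    opt.step()
print(mu.detach())  # approaches [2.0, -1.0]
```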
1 code implementation • NeurIPS 2019 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, Gustavo K. Rohde
The SW distance, specifically, has been shown to share key properties with the Wasserstein distance while being much simpler to compute, and it is therefore used in various applications, including generative modeling and general supervised/unsupervised learning.