Search Results for author: Kimia Nadjahi

Found 11 papers, 8 papers with code

Asymmetry in Low-Rank Adapters of Foundation Models

1 code implementation • 26 Feb 2024 • Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi, Haitz Sáez de Ocáriz Borde, Rickard Brüel Gabrielsson, Leshem Choshen, Marzyeh Ghassemi, Mikhail Yurochkin, Justin Solomon

Specifically, when updating the parameter matrices of a neural network by adding a product $BA$, we observe that the $B$ and $A$ matrices have distinct functions: $A$ extracts features from the input, while $B$ uses these features to create the desired output.
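The low-rank update described above can be illustrated with a minimal NumPy sketch (dimensions and random values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 8, 16, 2  # illustrative dimensions (rank r << d_in, d_out)

# Frozen pretrained weight and the two low-rank adapter factors.
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))   # A projects the input into a rank-r feature space
B = rng.normal(size=(d_out, r))  # B maps those features to the output space

x = rng.normal(size=d_in)

# Adapted forward pass: the parameter update is the product B @ A.
y = (W + B @ A) @ x

# Equivalently: A extracts features from x, then B builds the output update.
features = A @ x
assert np.allclose(y, W @ x + B @ features)
```

The factorization keeps only `r * (d_in + d_out)` trainable parameters instead of `d_in * d_out`, which is what makes the distinct roles of `A` (feature extraction) and `B` (output construction) worth studying separately.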

Federated Wasserstein Distance

no code implementations • 3 Oct 2023 • Alain Rakotomamonjy, Kimia Nadjahi, Liva Ralaivola

We introduce a principled way of computing the Wasserstein distance between two distributions in a federated manner.

Federated Learning

Unbalanced Optimal Transport meets Sliced-Wasserstein

no code implementations • 12 Jun 2023 • Thibault Séjourné, Clément Bonet, Kilian Fatras, Kimia Nadjahi, Nicolas Courty

In parallel, unbalanced OT was designed to allow comparisons of more general positive measures, while being more robust to outliers.

Shedding a PAC-Bayesian Light on Adaptive Sliced-Wasserstein Distances

1 code implementation • 7 Jun 2022 • Ruben Ohana, Kimia Nadjahi, Alain Rakotomamonjy, Liva Ralaivola

The Sliced-Wasserstein distance (SW) is a computationally efficient and theoretically grounded alternative to the Wasserstein distance.

Generalization Bounds

Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections

2 code implementations • NeurIPS 2021 • Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Şimşekli

The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications as an alternative to the Wasserstein distance and offers significant computational and statistical benefits.

Statistical and Topological Properties of Sliced Probability Divergences

1 code implementation • NeurIPS 2020 • Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Şimşekli

The idea of slicing divergences has proven successful for comparing two probability measures in various machine learning applications, including generative modeling; it consists in computing the expected value of a "base divergence" between one-dimensional random projections of the two measures.
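The slicing construction above can be sketched with a Monte Carlo estimator of the Sliced-Wasserstein distance, using the Wasserstein-2 distance as the base divergence (a generic sketch of the standard estimator, not code from any of these papers; the number of projections and sample sizes are arbitrary):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=200, seed=0):
    """Monte Carlo estimate of the order-2 Sliced-Wasserstein distance
    between two empirical measures with equal sample sizes.

    Sketch only: averages the closed-form 1D Wasserstein-2 distance
    (L2 between sorted projections) over random unit directions.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction.
    X_proj = X @ theta.T  # shape (n_samples, n_projections)
    Y_proj = Y @ theta.T
    # In 1D, Wasserstein-2 between empirical measures matches sorted samples.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    sw2 = np.mean((X_sorted - Y_sorted) ** 2)
    return np.sqrt(sw2)

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
Y = rng.normal(loc=0.5, size=(500, 3))
sw = sliced_wasserstein(X, Y)
```

Each one-dimensional projection admits a closed-form optimal transport solution via sorting, which is what makes the sliced estimator so much cheaper than the full Wasserstein distance in high dimension.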

Generalized Sliced Distances for Probability Distributions

no code implementations • 28 Feb 2020 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Shahin Shahrampour

Probability metrics have become an indispensable part of modern statistics and machine learning, and they play a quintessential role in various applications, including statistical hypothesis testing and generative modeling.

Two-sample testing

Approximate Bayesian Computation with the Sliced-Wasserstein Distance

1 code implementation • 28 Oct 2019 • Kimia Nadjahi, Valentin De Bortoli, Alain Durmus, Roland Badeau, Umut Şimşekli

Approximate Bayesian Computation (ABC) is a popular method for approximate inference in generative models with intractable but easy-to-sample likelihood.
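The ABC setting described above can be illustrated with a minimal rejection sampler (a generic sketch of plain ABC rejection, not the paper's SW-ABC algorithm; the toy Gaussian model, prior range, and tolerance are assumptions for illustration):

```python
import numpy as np

def abc_rejection(observed, simulate, prior_sample, distance, eps,
                  n_draws=2000, seed=0):
    """Minimal ABC rejection sampler: draw parameters from the prior,
    simulate data, and keep draws whose simulated data fall within
    eps of the observed data under the chosen distance."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        x = simulate(theta, rng)
        if distance(x, observed) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a Gaussian with known unit variance.
rng = np.random.default_rng(2)
obs = rng.normal(loc=1.5, size=200)

posterior = abc_rejection(
    observed=obs,
    simulate=lambda mu, r: r.normal(loc=mu, size=200),
    prior_sample=lambda r: r.uniform(-5, 5),
    distance=lambda x, y: abs(x.mean() - y.mean()),  # simple summary-statistic distance
    eps=0.1,
)
```

The choice of `distance` is the crux: the paper replaces hand-crafted summary statistics with the Sliced-Wasserstein distance between the full simulated and observed samples.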

Image Denoising

Safe Policy Improvement with Soft Baseline Bootstrapping

2 code implementations • 11 Jul 2019 • Kimia Nadjahi, Romain Laroche, Rémi Tachet des Combes

Batch Reinforcement Learning (Batch RL) consists in training a policy using trajectories collected with another policy, called the behavioural policy.

Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance

1 code implementation • NeurIPS 2019 • Kimia Nadjahi, Alain Durmus, Umut Şimşekli, Roland Badeau

Minimum expected distance estimation (MEDE) algorithms have been widely used for probabilistic models with intractable likelihood functions, and they have become increasingly popular due to their use in implicit generative modeling (e.g., Wasserstein generative adversarial networks, Wasserstein autoencoders).

Generalized Sliced Wasserstein Distances

1 code implementation • NeurIPS 2019 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, Gustavo K. Rohde

The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning.
