no code implementations • 19 Feb 2024 • Paul Krzakala, Junjie Yang, Rémi Flamary, Florence d'Alché-Buc, Charlotte Laclau, Matthieu Labeau
We present a novel end-to-end deep learning-based approach for Supervised Graph Prediction (SGP).
no code implementations • 12 Jan 2024 • Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Christophe Gravier
In this paper, we propose an empirical exploration of this problem by formalizing two questions: (1) Can we identify the neural mechanism(s) responsible for gender bias in BERT (and by extension DistilBERT)?
1 code implementation • 21 Nov 2023 • Thibaud Leteno, Antoine Gourru, Charlotte Laclau, Rémi Emonet, Christophe Gravier
This is more suitable for real-life scenarios than existing methods that require annotations of sensitive attributes at training time.
no code implementations • 17 Oct 2023 • Quentin Bouniot, Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski
The remarkable success of deep neural networks (DNN) is often attributed to their high expressive power and their ability to approximate functions of arbitrary complexity.
no code implementations • 19 Mar 2023 • Vincent Brault, Émilie Devijver, Charlotte Laclau
In this paper, we consider functional data exhibiting heterogeneity both in time and across the population.
2 code implementations • 11 May 2022 • Charlotte Laclau, Christine Largeron, Manvi Choudhary
In that context, algorithmic contributions for graph mining are not spared by the problem of fairness and face specific challenges tied to the intrinsic nature of graphs: (1) graph data is non-IID, and violating this assumption may invalidate many existing studies in fair machine learning; (2) suitable metrics must be defined to assess the different types of fairness on relational data; and (3) there is the algorithmic challenge of finding a good trade-off between model accuracy and fairness.
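As a concrete illustration of challenge (2), a common starting point is to adapt group-fairness metrics to relational predictions. The sketch below is an assumption-laden example, not the paper's method: it measures the demographic parity gap of hypothetical link-prediction scores across two node groups.

```python
# Minimal sketch (not the paper's method): demographic parity for link
# prediction, one common way to quantify group fairness on relational data.
import numpy as np

def demographic_parity_gap(scores, groups, threshold=0.5):
    """Absolute gap in positive-prediction rates between two node groups.

    scores : predicted link probabilities, one per candidate edge
    groups : binary group membership of (say) the target node of each edge
    """
    pred = scores >= threshold
    return abs(pred[groups == 0].mean() - pred[groups == 1].mean())

# Toy usage with hypothetical scores for six candidate links
scores = np.array([0.9, 0.2, 0.7, 0.8, 0.3, 0.1])
groups = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_gap(scores, groups))  # 0.333...: group 0 is favored
```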
1 code implementation • 26 Feb 2022 • Aleksandra Burashnikova, Yury Maximov, Marianne Clausel, Charlotte Laclau, Franck Iutzeler, Massih-Reza Amini
This paper is an extended version of [Burashnikova et al., 2021, arXiv:2012.06910], where we proposed a theoretically supported sequential strategy for training a large-scale Recommender System (RS) over implicit feedback, mainly in the form of clicks.
1 code implementation • 12 Dec 2020 • Aleksandra Burashnikova, Marianne Clausel, Charlotte Laclau, Franck Iutzeler, Yury Maximov, Massih-Reza Amini
In this paper, we propose a theoretically founded sequential strategy for training large-scale Recommender Systems (RS) over implicit feedback, mainly in the form of clicks.
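To make the setting concrete, here is a minimal sketch under our own assumptions, not the paper's exact algorithm: pairwise stochastic updates over implicit feedback, where each step pushes a clicked item above an item that was shown but not clicked.

```python
# Minimal sketch (our assumptions, not the paper's exact algorithm): BPR-style
# pairwise updates that rank clicked items above unclicked ones per user.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, lr = 100, 500, 16, 0.05
U = rng.normal(scale=0.1, size=(n_users, dim))   # user embeddings
V = rng.normal(scale=0.1, size=(n_items, dim))   # item embeddings

def sgd_step(u, pos, neg):
    """One update on the loss -log sigmoid(U[u] . (V[pos] - V[neg]))."""
    u_vec = U[u].copy()                          # snapshot before updating
    x = u_vec @ (V[pos] - V[neg])                # score difference
    g = -1.0 / (1.0 + np.exp(x))                 # gradient of the loss w.r.t. x
    U[u]   -= lr * g * (V[pos] - V[neg])
    V[pos] -= lr * g * u_vec
    V[neg] += lr * g * u_vec

# Toy usage: user 0 clicked item 3; item 7 was shown but not clicked
sgd_step(0, pos=3, neg=7)
```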
no code implementations • 30 Oct 2020 • Charlotte Laclau, Ievgen Redko, Manvi Choudhary, Christine Largeron
Machine learning and data mining algorithms are increasingly used to support decision-making systems in many areas of high societal importance, such as healthcare, education, or security.
no code implementations • 21 Oct 2020 • Nina Vesseron, Ievgen Redko, Charlotte Laclau
The theoretical analysis of deep neural networks (DNN) is arguably among the most challenging research directions in machine learning (ML) right now, as it requires scientists to lay novel statistical learning foundations to explain their behaviour in practice.
no code implementations • 1 Sep 2020 • Charlotte Laclau, Franck Iutzeler, Ievgen Redko
In this paper, we introduce and formalize a rank-one partitioning learning paradigm that unifies partitioning methods which summarize a data set with a single vector, from which the final clustering partition is derived.
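To illustrate the paradigm, the sketch below (an illustration under our assumptions, not the paper's algorithm) summarizes the data matrix with its leading left singular vector and thresholds that vector to obtain a two-way partition.

```python
# Minimal sketch of the rank-one idea: one vector summarizes the data set,
# and thresholding it yields the final partition.
import numpy as np

def rank_one_partition(X):
    """Partition rows of X by thresholding the top left singular vector."""
    Xc = X - X.mean(axis=0)                     # center the data
    u, _, _ = np.linalg.svd(Xc, full_matrices=False)
    summary = u[:, 0]                           # one score per data point
    return (summary >= np.median(summary)).astype(int)

# Toy usage: two well-separated Gaussian blobs are split correctly
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(5, 1, (20, 5))])
print(rank_one_partition(X))
```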
1 code implementation • 11 May 2018 • Georgios Balikas, Charlotte Laclau, Ievgen Redko, Massih-Reza Amini
Many information retrieval algorithms rely on the notion of a good distance that allows one to efficiently compare objects of a different nature.
no code implementations • ICML 2017 • Charlotte Laclau, Ievgen Redko, Basarab Matei, Younès Bennani, Vincent Brault
The proposed method uses entropy-regularized optimal transport between empirical measures defined on data instances and features to obtain an estimated joint probability density function, represented by the optimal coupling matrix.
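A minimal sketch of this idea, assuming the POT library and a hypothetical cost matrix derived from the data entries (the paper's actual cost construction may differ): the entropy-regularized coupling between uniform measures on instances and features sums to one, so it can be read as an estimated joint probability mass.

```python
# Minimal sketch, assuming POT (pip install pot); the cost matrix here is a
# hypothetical choice, not necessarily the one used in the paper.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
X = rng.random((30, 10))                  # data matrix: 30 instances x 10 features

a = np.full(30, 1 / 30)                   # empirical measure on instances
b = np.full(10, 1 / 10)                   # empirical measure on features
M = (X.max() - X) / X.max()               # strong instance-feature entries = low cost

G = ot.sinkhorn(a, b, M, reg=0.1)         # entropy-regularized optimal coupling
print(G.shape, G.sum())                   # (30, 10); sums to ~1, like a joint pmf
```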
1 code implementation • 29 Apr 2017 • Sumit Sidana, Mikhail Trofimov, Oleg Horodnitskii, Charlotte Laclau, Yury Maximov, Massih-Reza Amini
The learning objective is based on three ranking-loss scenarios that control the model's ability to maintain the ordering over items induced from the users' preferences, as well as the capacity of the dot product defined in the learned embedding space to produce that ordering.
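One plausible instance of such a ranking loss, sketched under our own assumptions rather than as the paper's exact objective, is a pairwise hinge loss on dot-product scores: a preferred item must outrank a non-preferred one by a margin.

```python
# Minimal sketch (one plausible ranking loss, not the paper's exact objective):
# pairwise hinge loss on dot-product scores in the learned embedding space.
import numpy as np

def pairwise_hinge_loss(U, V, triplets, margin=1.0):
    """Mean hinge loss over (user, preferred item, other item) triplets."""
    u, i, j = triplets.T
    s_pos = np.sum(U[u] * V[i], axis=1)   # dot-product score of preferred items
    s_neg = np.sum(U[u] * V[j], axis=1)   # score of the non-preferred items
    return np.maximum(0.0, margin - (s_pos - s_neg)).mean()

# Toy usage with hypothetical embeddings
rng = np.random.default_rng(0)
U, V = rng.normal(size=(5, 8)), rng.normal(size=(20, 8))
triplets = np.array([[0, 3, 7], [1, 2, 9]])  # (user, preferred, other)
print(pairwise_hinge_loss(U, V, triplets))
```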