no code implementations • 18 Mar 2024 • Timothée Ly, Julien Ferry, Marie-José Huguet, Sébastien Gambs, Ulrich Aïvodji
Differentially-private (DP) mechanisms can be embedded into the design of a machine learning algorithm to protect the resulting model against privacy leakage, although this often comes with a significant loss of accuracy.
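As a rough illustration of the accuracy/privacy trade-off mentioned above (and not the paper's actual mechanism), the sketch below adds calibrated Laplace noise to a counting query; the predicate and epsilon values are purely illustrative:

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. Exponential(1/scale) draws
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (one record changes the
    # count by at most 1), so Laplace(1/epsilon) noise yields
    # epsilon-DP. Smaller epsilon = more privacy, more noise.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Note how the noise scale grows as epsilon shrinks: stronger privacy directly degrades the answer's accuracy, which is the tension the abstract refers to.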
no code implementations • 22 Dec 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
Machine learning techniques are increasingly used for high-stakes decision-making, such as college admissions, loan attribution or recidivism prediction.
no code implementations • 29 Aug 2023 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
In addition, we demonstrate that under realistic assumptions regarding the interpretable models' structure, the uncertainty of the reconstruction can be computed efficiently.
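A toy way to see how model structure bounds reconstruction uncertainty (a simplified illustration, not the paper's actual measure): if an interpretable model only reveals which cell of a partition (e.g., which tree leaf) each training example falls in, the adversary's uncertainty per example is the size of that cell, and total uncertainty adds up in bits:

```python
import math

def reconstruction_uncertainty_bits(cell_sizes):
    # cell_sizes[i] = number of attribute values still compatible
    # with example i after observing the model's structure.
    # Independent per-example uncertainty multiplies, so it sums
    # in log scale (bits).
    return sum(math.log2(c) for c in cell_sizes)
```

A cell of size 1 contributes zero bits: the model's structure fully determines that example, which is exactly the leakage a reconstruction attack exploits.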
no code implementations • 11 Apr 2023 • Julien Rouzot, Julien Ferry, Marie-José Huguet
In this paper, we use Mixed-Integer Linear Programming (MILP) techniques to produce inherently interpretable scoring systems under sparsity and fairness constraints, for the general multi-class classification setup.
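The MILP search itself requires a solver and is not shown here, but the model class being optimized can be sketched. A multi-class scoring system keeps one small-integer-point scorecard per class and predicts the class with the highest score; the point values and features below are invented for illustration:

```python
def predict_multiclass(features, points_per_class, intercepts):
    # One integer-point scorecard per class; sparsity constraints
    # in the MILP would force most points to zero.
    scores = [b + sum(p * x for p, x in zip(points, features))
              for points, b in zip(points_per_class, intercepts)]
    # Predict the argmax-scoring class.
    return max(range(len(scores)), key=scores.__getitem__)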
no code implementations • 2 Sep 2022 • Julien Ferry, Ulrich Aïvodji, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
More precisely, we propose a generic reconstruction correction method, which takes as input an initial guess made by the adversary and corrects it to comply with some user-defined constraints (such as the fairness information) while minimizing the changes in the adversary's guess.
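A minimal sketch of this correction idea (hypothetical and much simpler than the paper's generic method): given the adversary's guessed binary sensitive attributes, per-entry confidence scores, and a constraint of the form "exactly k positives" (e.g., derived from published fairness statistics), flip the least-confident entries until the constraint holds:

```python
def correct_reconstruction(guess, confidence, target_ones):
    # Make the guess satisfy "exactly target_ones positives"
    # while flipping as few (and as uncertain) bits as possible.
    corrected = list(guess)
    excess = sum(corrected) - target_ones
    flip_from = 1 if excess > 0 else 0
    # Candidate indices holding the over-represented value,
    # least confident first.
    candidates = sorted((i for i, g in enumerate(corrected) if g == flip_from),
                        key=lambda i: confidence[i])
    for i in candidates[:abs(excess)]:
        corrected[i] = 1 - flip_from
    return corrected
```

Minimizing the number of flips is a stand-in for the paper's objective of minimizing changes to the adversary's initial guess.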
no code implementations • 21 Mar 2022 • Hao Hu, Marie-José Huguet, Mohamed Siala
Then, we lift the encoding to a MaxSAT model to learn optimal BDDs of limited depth that maximize the number of correctly classified examples.
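The MaxSAT encoding is not reproduced here, but the objective it maximizes is easy to sketch: given a candidate BDD, count how many training examples it classifies correctly. In this toy representation (an assumption of this sketch, not the paper's encoding), an internal node is a `(feature, low_child, high_child)` triple and a leaf is an integer label; children may be shared, making the structure a DAG rather than a tree:

```python
def classify(bdd, x):
    # Follow the low/high child according to the tested binary feature
    # until reaching an integer leaf label.
    node = bdd
    while not isinstance(node, int):
        feat, low, high = node
        node = high if x[feat] else low
    return node

def accuracy(bdd, X, y):
    # The quantity the MaxSAT objective maximizes (up to normalization).
    correct = sum(classify(bdd, xi) == yi for xi, yi in zip(X, y))
    return correct / len(y)
```

Bounding the depth caps both the model size and the number of decision variables in the encoding, which is what makes exact optimization tractable.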
1 code implementation • 9 Sep 2019 • Ulrich Aïvodji, Julien Ferry, Sébastien Gambs, Marie-José Huguet, Mohamed Siala
While it has been shown that interpretable models can be as accurate as black-box models in several critical domains, existing fair classification techniques that are interpretable by design often display poor accuracy/fairness tradeoffs in comparison with their non-interpretable counterparts.
no code implementations • 23 Jan 2017 • Emmanuel Hébrard, Marie-José Huguet, Daniel Veysseire, Ludivine Sauvan, Bertrand Cabon
This can be modeled as packing the tests into configurations, and we introduce a set of implied constraints to improve the lower bound of the model.
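The paper's exact model and implied constraints are not shown here, but the underlying packing view can be sketched: tests with given sizes are placed into configurations of fixed capacity, and the simplest "area" bound below is the kind of lower bound that implied constraints aim to strengthen (sizes and capacity are illustrative):

```python
import math

def first_fit(sizes, capacity):
    # Greedy heuristic: put each test into the first configuration
    # with enough remaining capacity, opening a new one if needed.
    bins = []
    for s in sizes:
        for b in bins:
            if sum(b) + s <= capacity:
                b.append(s)
                break
        else:
            bins.append([s])
    return bins

def area_lower_bound(sizes, capacity):
    # Total size divided by capacity, rounded up: no packing can
    # use fewer configurations than this.
    return math.ceil(sum(sizes) / capacity)
```

When the heuristic's bin count matches the lower bound, the packing is provably optimal; implied constraints help close this gap in harder instances.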