24 Apr 2024 • Zekai Chen, Weeden Daniel, Po-Yu Chen, Francois Buet-Golfouse
The advent of personalized content generation by LLMs presents a novel challenge: how to efficiently adapt text to meet individual preferences without the unsustainable demand of creating a unique model for each user.
29 Sep 2021 • Francois Buet-Golfouse
Previous literature has shown that bias-mitigating algorithms are sometimes prone to overfitting and exhibit poor out-of-sample generalisation.
4 Nov 2020 • Ashrya Agrawal, Florian Pfisterer, Bernd Bischl, Francois Buet-Golfouse, Srijan Sood, Jiahao Chen, Sameena Shah, Sebastian Vollmer
We present an empirical study of debiasing methods for classifiers, showing that debiasers often fail in practice to generalize out-of-sample, and can in fact make fairness worse rather than better.
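The failure mode described above can be illustrated with a minimal sketch (not the paper's experimental setup): fit a plain classifier on synthetic data with a binary sensitive attribute, then compare a demographic-parity gap in-sample versus out-of-sample. All data, feature names, and the `dp_gap` helper are hypothetical, for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical synthetic data: sensitive attribute `a` shifts the features,
# so predictions can differ across groups.
n = 4000
a = rng.integers(0, 2, size=n)
x = rng.normal(size=(n, 3)) + 0.8 * a[:, None]
y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(
    x, y, a, test_size=0.5, random_state=0
)

clf = LogisticRegression().fit(X_tr, y_tr)

def dp_gap(model, X, attr):
    """Demographic-parity gap: |P(yhat=1 | a=1) - P(yhat=1 | a=0)|."""
    pred = model.predict(X)
    return abs(pred[attr == 1].mean() - pred[attr == 0].mean())

gap_in = dp_gap(clf, X_tr, a_tr)
gap_out = dp_gap(clf, X_te, a_te)
print(f"in-sample DP gap:     {gap_in:.3f}")
print(f"out-of-sample DP gap: {gap_out:.3f}")
```

A debiaser tuned to shrink the in-sample gap gives no guarantee about the held-out gap; comparing the two numbers, rather than reporting only the training-set metric, is the kind of evaluation the study advocates.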