1 code implementation • 27 Mar 2024 • Philip Kenneweg, Sarah Schröder, Alexander Schulz, Barbara Hammer
Problematically, most debiasing approaches are transferred directly from word embeddings and therefore fail to take into account the nonlinear nature of sentence embedders and the embeddings they produce.
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Sarah Schröder, Barbara Hammer
Pre-training language models on large text corpora is common practice in Natural Language Processing.
1 code implementation • 27 Mar 2024 • Philip Kenneweg, Alexander Schulz, Sarah Schröder, Barbara Hammer
We combine the learning rate distributions found in this way and show that they generalize, yielding better performance with respect to the problem of catastrophic forgetting.
no code implementations • 27 Jan 2024 • Sarah Schröder, Alexander Schulz, Fabian Hinder, Barbara Hammer
Furthermore, we formally analyze cosine-based scores from the literature with regard to these requirements.
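The cosine-based bias scores analyzed in these works generally follow the pattern of WEAT-style association tests: comparing how strongly two sets of target embeddings associate with two sets of attribute embeddings via cosine similarity. A minimal sketch of such a score (function names and structure are illustrative, not the authors' implementation):

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_style_score(targets_x, targets_y, attrs_a, attrs_b):
    """WEAT-style association score: for each target embedding, compute the
    mean cosine similarity to attribute set A minus that to attribute set B;
    the score is the difference of mean associations between the two target
    sets. A score near zero is typically read as "low bias" -- though, as the
    works above argue, this reading can be misleading."""
    def assoc(w):
        return (np.mean([cosine(w, a) for a in attrs_a])
                - np.mean([cosine(w, b) for b in attrs_b]))
    sx = [assoc(w) for w in targets_x]
    sy = [assoc(w) for w in targets_y]
    return float(np.mean(sx) - np.mean(sy))
```

Such scores depend only on angles between embeddings, which is one reason they can report low bias while other probes (e.g. classification-based tests) still detect it.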
no code implementations • 28 Mar 2022 • Sarah Schröder, Alexander Schulz, Philip Kenneweg, Robert Feldhans, Fabian Hinder, Barbara Hammer
Furthermore, we thoroughly investigate the existing cosine-based scores and their limitations in order to show why these scores fail to report biases in some situations.
no code implementations • 15 Nov 2021 • Sarah Schröder, Alexander Schulz, Philip Kenneweg, Robert Feldhans, Fabian Hinder, Barbara Hammer
However, some recent works have cast doubt on these metrics, showing that even when they report low biases, other tests still reveal biases.