Search Results for author: Pieter Delobelle

Found 13 papers, 7 papers with code

Measuring Fairness with Biased Rulers: A Comparative Study on Bias Metrics for Pre-trained Language Models

no code implementations • NAACL 2022 • Pieter Delobelle, Ewoenam Tokpo, Toon Calders, Bettina Berendt

We survey the literature on fairness metrics for pre-trained language models and experimentally evaluate compatibility, including both biases in language models and in their downstream tasks.

Attribute Fairness

How Far Can It Go?: On Intrinsic Gender Bias Mitigation for Text Classification

1 code implementation • 30 Jan 2023 • Ewoenam Tokpo, Pieter Delobelle, Bettina Berendt, Toon Calders

Considering that the end use of these language models is for downstream tasks like text classification, it is important to understand how these intrinsic bias mitigation strategies actually translate to fairness in downstream tasks, and to what extent they do so.

Fairness • text-classification • +1

RobBERT-2022: Updating a Dutch Language Model to Account for Evolving Language Use

no code implementations • 15 Nov 2022 • Pieter Delobelle, Thomas Winters, Bettina Berendt

To evaluate if our new model is a plug-in replacement for RobBERT, we introduce two additional criteria based on concept drift of existing tokens and alignment for novel tokens. We found that for certain language tasks this update results in a significant performance increase.

Language Modelling
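
The listing itself gives no code for these criteria. As a rough, hedged illustration of the first step behind "concept drift of existing tokens and alignment for novel tokens", the sketch below simply splits the new tokenizer's vocabulary into tokens shared with the old RobBERT (candidates for drift) and novel tokens (candidates for alignment). The Hugging Face model identifiers and this vocabulary-overlap check are assumptions, not the paper's exact evaluation procedure.

```python
# Minimal sketch (assumption, not the paper's exact criteria): split the updated
# vocabulary into tokens shared with the old RobBERT and tokens that are novel.
from transformers import AutoTokenizer

OLD = "pdelobelle/robbert-v2-dutch-base"        # original RobBERT (assumed HF id)
NEW = "DTAI-KULeuven/robbert-2022-dutch-base"   # RobBERT-2022 (assumed HF id)

old_vocab = set(AutoTokenizer.from_pretrained(OLD).get_vocab())
new_vocab = set(AutoTokenizer.from_pretrained(NEW).get_vocab())

shared = old_vocab & new_vocab   # existing tokens: where concept drift could occur
novel = new_vocab - old_vocab    # novel tokens: need alignment with the old space

print(f"shared tokens: {len(shared)}, novel tokens: {len(novel)}")
```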

FairDistillation: Mitigating Stereotyping in Language Models

1 code implementation • 10 Jul 2022 • Pieter Delobelle, Bettina Berendt

Large pre-trained language models are successfully being used in a variety of tasks across many languages.

Knowledge Distillation

RobBERTje: a Distilled Dutch BERT Model

no code implementations • 28 Apr 2022 • Pieter Delobelle, Thomas Winters, Bettina Berendt

We found that the performance of the models using the shuffled versus non-shuffled datasets is similar for most tasks and that randomly merging subsequent sentences in a corpus creates models that train faster and perform better on tasks with long sequences.
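
As a hedged sketch of the sentence-merging idea mentioned in the abstract, the hypothetical helper below randomly concatenates a sentence with its successor so that longer sequences also appear during pre-training. It is an illustration of the described trick, not the RobBERTje training pipeline.

```python
import random

def merge_subsequent_sentences(sentences, merge_prob=0.5, seed=0):
    """Randomly merge each sentence with the next one.

    Hypothetical sketch of 'randomly merging subsequent sentences in a corpus';
    not the RobBERTje training code.
    """
    rng = random.Random(seed)
    merged, i = [], 0
    while i < len(sentences):
        if i + 1 < len(sentences) and rng.random() < merge_prob:
            merged.append(sentences[i] + " " + sentences[i + 1])
            i += 2
        else:
            merged.append(sentences[i])
            i += 1
    return merged

corpus = ["Dit is zin één.", "Dit is zin twee.", "En dit is zin drie."]
print(merge_subsequent_sentences(corpus))
```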

Measuring Fairness with Biased Rulers: A Survey on Quantifying Biases in Pretrained Language Models

1 code implementation • 14 Dec 2021 • Pieter Delobelle, Ewoenam Kwaku Tokpo, Toon Calders, Bettina Berendt

We survey the existing literature on fairness metrics for pretrained language models and experimentally evaluate compatibility, including both biases in language models and in their downstream tasks.

Attribute Fairness

Measuring Shifts in Attitudes Towards COVID-19 Measures in Belgium Using Multilingual BERT

1 code implementation • 20 Apr 2021 • Kristen Scott, Pieter Delobelle, Bettina Berendt

We classify seven months' worth of Belgian COVID-related Tweets using multilingual BERT and relate them to their governments' COVID measures.
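
As a hedged sketch of the classification setup described above, a multilingual BERT checkpoint fine-tuned for sequence classification could be applied to tweets as shown below. The checkpoint name, the omitted fine-tuning step, and the stance labels are assumptions, not the paper's released pipeline.

```python
from transformers import pipeline

# Sketch only: assumes a multilingual BERT checkpoint that has already been
# fine-tuned on labelled COVID-related tweets; labels here would be hypothetical.
classifier = pipeline(
    "text-classification",
    model="bert-base-multilingual-cased",  # base checkpoint; fine-tuning not shown
)

tweets = [
    "De mondmaskerplicht is een goede maatregel.",
    "Ces mesures sanitaires vont beaucoup trop loin.",
]
for tweet, prediction in zip(tweets, classifier(tweets)):
    print(tweet, "->", prediction)
```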

Dutch Humor Detection by Generating Negative Examples

no code implementations • 26 Oct 2020 • Thomas Winters, Pieter Delobelle

Detecting if a text is humorous is a hard task to do computationally, as it usually requires linguistic and common sense insights.

Binary Classification • Common Sense Reasoning • +3

Computational Ad Hominem Detection

1 code implementation • ACL 2019 • Pieter Delobelle, Murilo Cunha, Eric Massip Cano, Jeroen Peperkamp, Bettina Berendt

Fallacies like the personal attack, also known as the ad hominem attack, are introduced in debates as an easy win, even though they provide no rhetorical contribution.

BIG-bench Machine Learning
