Search Results for author: Somayeh Ghanbarzadeh

Found 2 papers, 0 papers with code

Gender-tuning: Empowering Fine-tuning for Debiasing Pre-trained Language Models

no code implementations · 20 Jul 2023 · Somayeh Ghanbarzadeh, Yan Huang, Hamid Palangi, Radames Cruz Moreno, Hamed Khanpour

Recent studies have revealed that the widely-used Pre-trained Language Models (PLMs) propagate societal biases from the large unmoderated pre-training corpora.

Tasks: Language Modelling · Masked Language Modeling

Improving the Reusability of Pre-trained Language Models in Real-world Applications

no code implementations · 19 Jul 2023 · Somayeh Ghanbarzadeh, Hamid Palangi, Yan Huang, Radames Cruz Moreno, Hamed Khanpour

The reusability of state-of-the-art Pre-trained Language Models (PLMs) is often limited by their generalization problem: their performance drops drastically when evaluated on examples that differ from the training dataset, known as Out-of-Distribution (OOD) or unseen examples.

Tasks: Language Modelling · Masked Language Modeling
