Search Results for author: Ahmed Alajrami

Found 2 papers, 1 paper with code

Understanding the Role of Input Token Characters in Language Models: How Does Information Loss Affect Performance?

no code implementations · 26 Oct 2023 · Ahmed Alajrami, Katerina Margatina, Nikolaos Aletras

Understanding how and what pre-trained language models (PLMs) learn about language is an open challenge in natural language processing.

How does the pre-training objective affect what large language models learn about linguistic properties?

1 code implementation · ACL 2022 · Ahmed Alajrami, Nikolaos Aletras

Several pre-training objectives, such as masked language modeling (MLM), have been proposed for pre-training language models (e.g. BERT) with the aim of learning better language representations.
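To make the MLM objective mentioned above concrete, here is a minimal Python sketch of BERT-style input masking. The function name, masking rate, and `[MASK]` placeholder follow common convention but are illustrative assumptions, not this paper's implementation; real BERT masking also replaces some selected tokens with random tokens or leaves them unchanged, which this sketch omits for clarity.

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Sketch of MLM input corruption: hide ~mask_rate of the tokens.

    Returns the corrupted sequence and a parallel list of labels that
    holds the original token at each masked position (None elsewhere);
    the model is trained to predict these labels from context.
    Note: full BERT masking uses [MASK] 80% of the time, a random token
    10%, and the original token 10% -- omitted here for brevity.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible example
    masked = list(tokens)
    labels = [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            labels[i] = tok        # prediction target
            masked[i] = mask_token # hide the input token
    return masked, labels
```

With `mask_rate=1.0` every token is hidden; at the default 15% most tokens pass through unchanged, so the model learns from bidirectional context around the few masked positions.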

Tasks: Language Modelling, Masked Language Modeling
