Search Results for author: Daniel Loureiro

Found 13 papers, 10 papers with code

On the Cross-lingual Transferability of Contextualized Sense Embeddings

no code implementations EMNLP (MRL) 2021 Kiamehr Rezaee, Daniel Loureiro, Jose Camacho-Collados, Mohammad Taher Pilehvar

In this paper we analyze the extent to which contextualized sense embeddings, i.e., sense embeddings computed from contextualized word embeddings, are transferable across languages. To this end, we compiled a unified cross-lingual benchmark for Word Sense Disambiguation.
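
The core construction the abstract mentions, computing a sense embedding from contextualized word embeddings, can be sketched as averaging the contextual vectors of a sense's annotated occurrences. This is a minimal illustration: the sense keys, sentences, and the toy encoder below are invented for the example, not taken from the paper.

```python
import numpy as np

def build_sense_embeddings(annotated_contexts, encode):
    """Average the contextual vectors of every annotated occurrence of a sense.

    annotated_contexts: dict mapping a sense key to a list of
        (sentence, token_position) pairs where that sense is annotated.
    encode: function returning one contextual vector per token of a sentence.
    """
    sense_vectors = {}
    for sense, occurrences in annotated_contexts.items():
        vecs = [encode(sentence)[pos] for sentence, pos in occurrences]
        sense_vectors[sense] = np.mean(vecs, axis=0)
    return sense_vectors

# Toy stand-in for a contextual encoder (e.g. a BERT-style model):
# deterministic pseudo-random vectors, one 4-dim vector per token.
def toy_encode(sentence):
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal((len(sentence.split()), 4))

contexts = {
    "bank%finance": [("deposit money at the bank", 4)],
    "bank%river": [("fishing on the river bank", 4)],
}
senses = build_sense_embeddings(contexts, toy_encode)
```

With a real encoder, each occurrence's vector already reflects its sentence context, so the averaged sense vectors separate meanings that share a surface form.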

Word Embeddings Word Sense Disambiguation

Tweet Insights: A Visualization Platform to Extract Temporal Insights from Twitter

no code implementations 4 Aug 2023 Daniel Loureiro, Kiamehr Rezaee, Talayeh Riahi, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, Jose Camacho-Collados

This paper introduces a large collection of time series data derived from Twitter, postprocessed using word embedding techniques, as well as specialized fine-tuned language models.

Time Series

Probing Commonsense Knowledge in Pre-trained Language Models with Sense-level Precision and Expanded Vocabulary

1 code implementation 12 Oct 2022 Daniel Loureiro, Alípio Mário Jorge

However, this approach is restricted by the LM's vocabulary available for masked predictions, and its precision is subject to the context provided by the assertion.

Question Answering

LMMS Reloaded: Transformer-based Sense Embeddings for Disambiguation and Beyond

1 code implementation 26 May 2021 Daniel Loureiro, Alípio Mário Jorge, Jose Camacho-Collados

Prior work has shown that these contextual representations can be used to accurately represent large sense inventories as sense embeddings, to the extent that a distance-based solution to Word Sense Disambiguation (WSD) tasks outperforms models trained specifically for the task.
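
The distance-based WSD solution described above amounts to nearest-neighbour search: encode the target word in context, then pick the sense whose embedding is closest. A minimal sketch with cosine similarity follows; the sense keys and the two-dimensional vectors are illustrative placeholders, not values from LMMS.

```python
import numpy as np

def disambiguate(context_vec, sense_vectors):
    """Return the sense whose embedding is nearest (by cosine) to the contextual vector."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(sense_vectors, key=lambda s: cos(context_vec, sense_vectors[s]))

# Hypothetical two-sense inventory for "bank".
sense_vectors = {
    "bank%finance": np.array([1.0, 0.0]),
    "bank%river": np.array([0.0, 1.0]),
}
print(disambiguate(np.array([0.9, 0.1]), sense_vectors))  # → bank%finance
```

Because no task-specific training is involved, the quality of the sense inventory's embeddings entirely determines accuracy, which is what makes the result above notable.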

Word Sense Disambiguation

Analysis and Evaluation of Language Models for Word Sense Disambiguation

1 code implementation CL (ACL) 2021 Daniel Loureiro, Kiamehr Rezaee, Mohammad Taher Pilehvar, Jose Camacho-Collados

We also perform an in-depth comparison of the two main language-model-based WSD strategies, i.e., fine-tuning and feature extraction, finding that the latter is more robust with respect to sense bias and can better exploit limited available training data.

Language Modelling Word Sense Disambiguation

Don't Neglect the Obvious: On the Role of Unambiguous Words in Word Sense Disambiguation

1 code implementation EMNLP 2020 Daniel Loureiro, Jose Camacho-Collados

State-of-the-art methods for Word Sense Disambiguation (WSD) combine two different features: the power of pre-trained language models and a propagation method to extend the coverage of such models.

Word Sense Disambiguation

Language Modelling Makes Sense: Propagating Representations through WordNet for Full-Coverage Word Sense Disambiguation

1 code implementation ACL 2019 Daniel Loureiro, Alipio Jorge

Contextual embeddings represent a new generation of semantic representations learned from Neural Language Modelling (NLM) that addresses the issue of meaning conflation hampering traditional word embeddings.
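
The "propagation through WordNet" in the title can be sketched as follows: senses without annotated training examples inherit the mean vector of related senses (same synset, then hypernyms) that are already covered, repeated until the inventory is full. This is a simplified sketch of the idea, assuming a precomputed neighbour map; the sense keys and vectors below are invented for illustration.

```python
import numpy as np

def propagate(sense_vectors, neighbors):
    """Assign unseen senses the mean vector of their already-covered WordNet neighbours.

    sense_vectors: sense -> vector, for senses with annotated examples.
    neighbors: sense -> list of related senses (e.g. same synset, hypernyms).
    """
    covered = dict(sense_vectors)
    changed = True
    while changed:          # keep passing until no new sense can be filled in
        changed = False
        for sense, related in neighbors.items():
            if sense in covered:
                continue
            known = [covered[r] for r in related if r in covered]
            if known:
                covered[sense] = np.mean(known, axis=0)
                changed = True
    return covered

# Hypothetical inventory: "canine" and "puppy" lack annotations.
seen = {"dog%1": np.array([1.0, 0.0]), "wolf%1": np.array([0.0, 1.0])}
links = {"canine%1": ["dog%1", "wolf%1"], "puppy%1": ["canine%1"]}
full = propagate(seen, links)
```

The repeated passes let coverage spread transitively through the graph, which is how the method reaches every WordNet sense from a small annotated core.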

Language Modelling LEMMA

LIAAD at SemDeep-5 Challenge: Word-in-Context (WiC)

1 code implementation WS 2019 Daniel Loureiro, Alipio Jorge

This paper describes the LIAAD system that was ranked second place in the Word-in-Context challenge (WiC) featured in SemDeep-5.

Word Sense Disambiguation
