Search Results for author: Aina Garí Soler

Found 11 papers, 7 papers with code

ALL Dolphins Are Intelligent and SOME Are Friendly: Probing BERT for Nouns’ Semantic Properties and their Prototypicality

1 code implementation • EMNLP (BlackboxNLP) 2021 • Marianna Apidianaki, Aina Garí Soler

Large scale language models encode rich commonsense knowledge acquired through exposure to massive data during pre-training, but their understanding of entities and their semantic properties is unclear.

Polysemy in Spoken Conversations and Written Texts

1 code implementation • LREC 2022 • Aina Garí Soler, Matthieu Labeau, Chloé Clavel

Our discourses are full of potential lexical ambiguities, due in part to the pervasive use of words having multiple senses.

The Impact of Word Splitting on the Semantic Content of Contextualized Word Representations

1 code implementation • 22 Feb 2024 • Aina Garí Soler, Matthieu Labeau, Chloé Clavel

When deriving contextualized word representations from language models, a decision needs to be made on how to obtain one for out-of-vocabulary (OOV) words that are segmented into subwords.

Semantic Similarity • Semantic Textual Similarity


Scalar Adjective Identification and Multilingual Ranking

no code implementations • NAACL 2021 • Aina Garí Soler, Marianna Apidianaki

The intensity relationship that holds between scalar adjectives (e.g., nice < great < wonderful) is highly relevant for natural language inference and common-sense reasoning.

Binary Classification • Common Sense Reasoning • +1

Let's Play Mono-Poly: BERT Can Reveal Words' Polysemy Level and Partitionability into Senses

1 code implementation • 29 Apr 2021 • Aina Garí Soler, Marianna Apidianaki

Pre-trained language models (LMs) encode rich information about linguistic structure but their knowledge about lexical polysemy remains unclear.

Exploring sentence informativeness

no code implementations • JEPTALNRECITAL 2019 • Syrielle Montariol, Aina Garí Soler, Alexandre Allauzen

This study is a preliminary exploration of the concept of informativeness (how much information a sentence gives about a word it contains) and its potential benefits for building quality word representations from scarce data.

Informativeness • Sentence • +1
