no code implementations • EMNLP (BlackboxNLP) 2020 • Hande Celikkanat, Sami Virpioja, Jörg Tiedemann, Marianna Apidianaki
Contextualized word representations encode rich information about syntax and semantics, alongside specificities of each context of use.
no code implementations • NAACL 2022 • Qing Lyu, Hua Zheng, Daoxin Li, Li Zhang, Marianna Apidianaki, Chris Callison-Burch
We introduce the Recursive Noun Phrase Challenge (RNPC), a dataset of three textual inference tasks involving textual entailment and event plausibility comparison, precisely targeting the understanding of recursive NPs.
1 code implementation • EMNLP (BlackboxNLP) 2021 • Marianna Apidianaki, Aina Garí Soler
Large-scale language models encode rich commonsense knowledge acquired through exposure to massive data during pre-training, but their understanding of entities and their semantic properties is unclear.
no code implementations • 12 Mar 2024 • Timothee Mickus, Elaine Zosa, Raúl Vázquez, Teemu Vahtola, Jörg Tiedemann, Vincent Segonne, Alessandro Raganato, Marianna Apidianaki
This paper presents the results of the SHROOM, a shared task focused on detecting hallucinations: outputs from natural language generation (NLG) systems that are fluent, yet inaccurate.
no code implementations • 21 Feb 2024 • Qing Lyu, Kumar Shridhar, Chaitanya Malaviya, Li Zhang, Yanai Elazar, Niket Tandon, Marianna Apidianaki, Mrinmaya Sachan, Chris Callison-Burch
Accurately gauging the confidence level of Large Language Models' (LLMs) predictions is pivotal for their reliable application.
no code implementations • 29 May 2023 • Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
The representation space of pretrained Language Models (LMs) encodes rich information about words and their relationships (e.g., similarity, hypernymy, polysemy) as well as abstract semantic notions (e.g., intensity).
1 code implementation • 24 May 2023 • Tuhin Chakrabarty, Arkadiy Saakyan, Olivia Winn, Artemis Panagopoulou, Yue Yang, Marianna Apidianaki, Smaranda Muresan
We propose to solve the task through collaboration between Large Language Models (LLMs) and Diffusion Models: Instruct GPT-3 (davinci-002) with Chain-of-Thought prompting generates text that represents a visual elaboration of the linguistic metaphor, containing the implicit meaning and relevant objects, which is then used as input to diffusion-based text-to-image models. Using a human-AI collaboration framework, where humans interact with both the LLM and the top-performing diffusion model, we create a high-quality dataset containing 6,476 visual metaphors for 1,540 linguistic metaphors and their associated visual elaborations.
1 code implementation • 8 May 2023 • Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
Large Language Models (LLMs) are so powerful that they sometimes learn correlations between labels and features that are irrelevant to the task, leading to poor generalization on out-of-distribution data.
1 code implementation • 31 Jan 2023 • Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna Apidianaki, Chris Callison-Burch
While Chain-of-Thought (CoT) prompting boosts Language Models' (LMs) performance on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (i.e., its faithfulness).
1 code implementation • 24 Oct 2022 • Yue Yang, Artemis Panagopoulou, Marianna Apidianaki, Mark Yatskar, Chris Callison-Burch
We propose to extract these properties from images and use them in an ensemble model, in order to complement the information that is extracted from language models.
no code implementations • 22 Sep 2022 • Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
In this survey, we review over 110 model explanation methods in NLP through the lens of faithfulness.
1 code implementation • *SEM (NAACL) 2022 • Aarne Talman, Marianna Apidianaki, Stergios Chatzikyriakidis, Jörg Tiedemann
A central question in natural language understanding (NLU) research is whether high performance demonstrates the models' strong reasoning capabilities.
1 code implementation • 15 Dec 2021 • Qing Lyu, Hua Zheng, Daoxin Li, Li Zhang, Marianna Apidianaki, Chris Callison-Burch
We introduce the Recursive Noun Phrase Challenge (RNPC), a dataset of three textual inference tasks involving textual entailment and event plausibility comparison, precisely targeting the understanding of recursive NPs.
no code implementations • 12 Oct 2021 • Marianna Apidianaki, Aina Garí Soler
Large-scale language models encode rich commonsense knowledge acquired through exposure to massive data during pre-training, but their understanding of entities and their semantic properties is unclear.
no code implementations • NAACL 2021 • Aina Garí Soler, Marianna Apidianaki
The intensity relationship that holds between scalar adjectives (e.g., nice < great < wonderful) is highly relevant for natural language inference and common-sense reasoning.
1 code implementation • 29 Apr 2021 • Aina Garí Soler, Marianna Apidianaki
Pre-trained language models (LMs) encode rich information about linguistic structure but their knowledge about lexical polysemy remains unclear.
1 code implementation • NoDaLiDa 2021 • Aarne Talman, Marianna Apidianaki, Stergios Chatzikyriakidis, Jörg Tiedemann
We propose a new diagnostic test suite that allows assessing whether a dataset constitutes a good testbed for evaluating models' meaning understanding capabilities.
no code implementations • 22 Dec 2020 • Reno Kriz, Marianna Apidianaki, Chris Callison-Burch
Text simplification systems generate versions of texts that are easier to understand for a broader audience.
1 code implementation • EMNLP 2020 • Aina Garí Soler, Marianna Apidianaki
Adjectives like pretty, beautiful and gorgeous describe positive properties of the nouns they modify but with different intensity.
1 code implementation • SEMEVAL 2020 • Aina Garí Soler, Marianna Apidianaki
Our best English models occupy the third and fourth positions in the ranking for the two subtasks.
no code implementations • IJCNLP 2019 • Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, Ion Androutsopoulos
We propose SUM-QE, a novel Quality Estimation model for summarization based on BERT.
1 code implementation • 2 Sep 2019 • Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, Ion Androutsopoulos
We propose SUM-QE, a novel Quality Estimation model for summarization based on BERT.
1 code implementation • WS 2019 • Sotiris Kotitsas, Dimitris Pappas, Ion Androutsopoulos, Ryan Mcdonald, Marianna Apidianaki
Many existing NE methods rely only on network structure, overlooking other information associated with the nodes, e.g., text describing the nodes.
no code implementations • SEMEVAL 2019 • Aina Garí Soler, Marianna Apidianaki, Alexandre Allauzen
Usage similarity estimation addresses the semantic proximity of word instances in different contexts.
no code implementations • WS 2019 • Aina Garí Soler, Anne Cocos, Marianna Apidianaki, Chris Callison-Burch
Word embedding representations provide good estimates of word meaning and give state-of-the-art performance in semantic tasks.
2 code implementations • NAACL 2019 • Reno Kriz, João Sedoc, Marianna Apidianaki, Carolina Zheng, Gaurav Kumar, Eleni Miltsakaki, Chris Callison-Burch
Sentence simplification is the task of rewriting texts so they are easier to understand.
Ranked #4 on Text Simplification on Newsela
1 code implementation • EMNLP 2018 • Ajay Patel, Alexander Sands, Chris Callison-Burch, Marianna Apidianaki
Vector space embedding models like word2vec, GloVe, fastText, and ELMo are extremely popular representations in natural language processing (NLP) applications.
no code implementations • EMNLP 2018 • Anne Cocos, Skyler Wharton, Ellie Pavlick, Marianna Apidianaki, Chris Callison-Burch
Adjectives like "warm", "hot", and "scalding" all describe temperature but differ in intensity.
no code implementations • NAACL 2018 • Anne Cocos, Marianna Apidianaki, Chris Callison-Burch
In this paper, we present a head-to-head comparison of six taxonomic organization algorithms that vary with respect to their structural and transitivity constraints, and treatment of synonymy.
no code implementations • NAACL 2018 • Reno Kriz, Eleni Miltsakaki, Marianna Apidianaki, Chris Callison-Burch
Lexical simplification involves identifying complex words or phrases that need to be simplified, and recommending simpler meaning-preserving substitutes that can be more easily understood.
no code implementations • NAACL 2018 • Marianna Apidianaki, Guillaume Wisniewski, Anne Cocos, Chris Callison-Burch
We propose a variant of a well-known machine translation (MT) evaluation metric, HyTER (Dreyer and Marcu, 2012), which exploits reference translations enriched with meaning equivalent expressions.
no code implementations • JEPTALNRECITAL 2018 • Aina Garí Soler, Marianna Apidianaki, Alexandre Allauzen
Lexical complexity detection is an important step for automatic text simplification which serves to make informed lexical substitutions.
no code implementations • EMNLP 2017 • Ross Mechanic, Dean Fulgoni, Hannah Cutler, Sneha Rajana, Zheyuan Liu, Bradley Jackson, Anne Cocos, Chris Callison-Burch, Marianna Apidianaki
Semantic relation knowledge is crucial for natural language understanding.
no code implementations • EMNLP 2017 • Derry Tanti Wijaya, Brendan Callahan, John Hewitt, Jie Gao, Xiao Ling, Marianna Apidianaki, Chris Callison-Burch
Bilingual Lexicon Induction is the task of learning word translations without bilingual parallel corpora.
no code implementations • SEMEVAL 2017 • Anne Cocos, Marianna Apidianaki, Chris Callison-Burch
WordNet has facilitated important research in natural language processing but its usefulness is somewhat limited by its relatively small lexical coverage.
no code implementations • SEMEVAL 2017 • Sneha Rajana, Chris Callison-Burch, Marianna Apidianaki, Vered Shwartz
Recognizing and distinguishing antonyms from other types of semantic relations is an essential part of language understanding systems.
no code implementations • WS 2017 • Anne Cocos, Marianna Apidianaki, Chris Callison-Burch
The role of word sense disambiguation in lexical substitution has been questioned due to the high performance of vector space models which propose good substitutes without explicitly accounting for sense.
no code implementations • JEPTALNRECITAL 2016 • François Yvon, Yong Xu, Marianna Apidianaki, Clément Pillias, Pierre Cubaud
The work leading to this demonstration combines multilingual language processing tools, in particular automatic alignment, with visualization and interaction techniques.
no code implementations • SEMEVAL 2016 • Maria Pontiki, Dimitris Galanis, Haris Papageorgiou, Ion Androutsopoulos, Suresh Manandhar, Mohammad AL-Smadi, Mahmoud Al-Ayyoub, Yanyan Zhao, Bing Qin, Orphée De Clercq, Véronique Hoste, Marianna Apidianaki, Xavier Tannier, Natalia Loukachevitch, Evgeniy Kotelnikov, Nuria Bel, Salud María Jiménez-Zafra, Gülşen Eryiğit
Aspect-Based Sentiment Analysis (ABSA)
no code implementations • LREC 2016 • Marianna Apidianaki, Xavier Tannier, Cécile Richart
Aspect Based Sentiment Analysis (ABSA) is the task of mining and summarizing opinions from text about specific entities and their aspects.
Aspect-Based Sentiment Analysis (ABSA)
no code implementations • LREC 2014 • Marianna Apidianaki, Emilia Verzeni, Diana McCarthy
Paraphrases extracted from parallel corpora by the pivot method (Bannard and Callison-Burch, 2005) constitute a valuable resource for multilingual NLP applications.
no code implementations • LREC 2012 • Marianna Apidianaki, Benoît Sagot
The automatic development of semantic resources constitutes an important challenge in the NLP community.
no code implementations • LREC 2012 • Kata Gábor, Marianna Apidianaki, Benoît Sagot, Éric Villemonte de la Clergerie
In this article, we present a distributional analysis method for extracting nominalization relations from monolingual corpora.