Search Results for author: Yova Kementchedjhieva

Found 28 papers, 10 papers with code

A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs

1 code implementation CoNLL (EMNLP) 2021 Mareike Hartmann, Miryam de Lhoneux, Daniel Hershcovich, Yova Kementchedjhieva, Lukas Nielsen, Chen Qiu, Anders Søgaard

Negation is one of the most fundamental concepts in human cognition and language, and several natural language inference (NLI) probes have been designed to investigate pretrained language models’ ability to detect and reason with negation.

Natural Language Inference Negation

MuLan: A Study of Fact Mutability in Language Models

1 code implementation 3 Apr 2024 Constanza Fierro, Nicolas Garneau, Emanuele Bugliarello, Yova Kementchedjhieva, Anders Søgaard

Facts are subject to contingencies and can be true or false in different circumstances.

Multimodal Large Language Models to Support Real-World Fact-Checking

no code implementations 6 Mar 2024 Jiahui Geng, Yova Kementchedjhieva, Preslav Nakov, Iryna Gurevych

To the best of our knowledge, we are the first to evaluate MLLMs for real-world fact-checking.

Fact Checking

Cultural Adaptation of Recipes

no code implementations 26 Oct 2023 Yong Cao, Yova Kementchedjhieva, Ruixiang Cui, Antonia Karamolegkou, Li Zhou, Megan Dare, Lucia Donatelli, Daniel Hershcovich

We introduce a new task involving the translation and cultural adaptation of recipes between Chinese and English-speaking cuisines.

Information Retrieval Machine Translation +1

Structural Similarities Between Language Models and Neural Response Measurements

1 code implementation 2 Jun 2023 Jiaang Li, Antonia Karamolegkou, Yova Kementchedjhieva, Mostafa Abdou, Sune Lehmann, Anders Søgaard

Human language processing is also opaque, but neural response measurements can provide (noisy) recordings of activation during listening or reading, from which we can extract similar representations of words and phrases.

Brain Decoding
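
The snippet above concerns comparing word representations from language models with representations derived from neural response measurements. As a rough illustration of that kind of comparison (not the paper's exact analysis), the sketch below computes a simple representational-similarity score between two sets of word vectors; the arrays are random stand-ins for LM embeddings and brain-derived vectors.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-ins: one vector per word from a language model (lm_vecs) and
# one extracted from neural response measurements (brain_vecs).
n_words = 50
lm_vecs = rng.normal(size=(n_words, 300))
brain_vecs = rng.normal(size=(n_words, 1000))

# Representational similarity analysis: compare the two spaces via their
# word-by-word (dis)similarity structure rather than vector-by-vector.
lm_dists = pdist(lm_vecs, metric="cosine")
brain_dists = pdist(brain_vecs, metric="cosine")

rho, p = spearmanr(lm_dists, brain_dists)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```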

Retrieval-augmented Multi-label Text Classification

no code implementations 22 May 2023 Ilias Chalkidis, Yova Kementchedjhieva

Multi-label text classification (MLC) is a challenging task in settings of large label sets, where label support follows a Zipfian distribution.

Multi Label Text Classification Multi-Label Text Classification +2
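
The Zipfian label support mentioned in the snippet means that a handful of labels account for most annotations while a long tail of labels is rarely seen. A minimal illustration, with a made-up label set and an assumed Zipf exponent, is sketched below.

```python
import numpy as np

# Illustrative only: simulate label frequencies following a Zipfian
# distribution, freq(rank) proportional to 1 / rank**a.
n_labels = 1000
a = 1.1  # assumed exponent, chosen purely for illustration
ranks = np.arange(1, n_labels + 1)
freqs = 1.0 / ranks**a
freqs /= freqs.sum()

head_share = freqs[:10].sum()    # mass covered by the 10 most frequent labels
tail_share = freqs[100:].sum()   # mass left for labels ranked 101 and below
print(f"top-10 labels cover {head_share:.1%} of label occurrences")
print(f"labels ranked 101+ cover {tail_share:.1%}")
```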

An Exploration of Encoder-Decoder Approaches to Multi-Label Classification for Legal and Biomedical Text

1 code implementation 9 May 2023 Yova Kementchedjhieva, Ilias Chalkidis

Standard methods for multi-label text classification largely rely on encoder-only pre-trained language models, whereas encoder-decoder models have proven more effective in other classification tasks.

Multi-Label Classification Multi Label Text Classification +2
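
For context on the contrast drawn in the snippet: the standard encoder-only setup scores every label independently with a sigmoid head over a pooled encoder representation, whereas an encoder-decoder model would instead generate the relevant labels as an output sequence. The sketch below shows only the encoder-only pattern, with a toy Transformer encoder standing in for a pretrained model such as BERT; it is illustrative, not the configuration evaluated in the paper.

```python
import torch
import torch.nn as nn

class EncoderOnlyMLC(nn.Module):
    """Toy encoder-only multi-label classifier: pooled encoding -> label logits."""

    def __init__(self, vocab_size=1000, d_model=64, num_labels=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_labels)  # one logit per label

    def forward(self, token_ids):
        hidden = self.encoder(self.embed(token_ids))
        pooled = hidden.mean(dim=1)   # mean-pool over tokens
        return self.head(pooled)      # raw logits; sigmoid is applied in the loss

model = EncoderOnlyMLC()
tokens = torch.randint(0, 1000, (8, 32))          # batch of 8 toy documents
labels = torch.randint(0, 2, (8, 20)).float()     # multi-hot label targets

loss = nn.BCEWithLogitsLoss()(model(tokens), labels)
print(loss.item())
```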

Implications of the Convergence of Language and Vision Model Geometries

no code implementations 13 Feb 2023 Jiaang Li, Yova Kementchedjhieva, Anders Søgaard

Large-scale pretrained language models (LMs) are said to "lack the ability to connect [their] utterances to the world" (Bender and Koller, 2020).

SmallCap: Lightweight Image Captioning Prompted with Retrieval Augmentation

1 code implementation CVPR 2023 Rita Ramos, Bruno Martins, Desmond Elliott, Yova Kementchedjhieva

Recent advances in image captioning have focused on scaling the data and model size, substantially increasing the cost of pre-training and finetuning.

Image Captioning Retrieval

Dynamic Forecasting of Conversation Derailment

no code implementations EMNLP 2021 Yova Kementchedjhieva, Anders Søgaard

This approach shows mixed results: in a high-quality data setting, a longer average forecast horizon can be achieved at the cost of a small drop in F1; in a low-quality data setting, however, dynamic training propagates the noise and is highly detrimental to performance.

John praised Mary because he? Implicit Causality Bias and Its Interaction with Explicit Cues in LMs

no code implementations 2 Jun 2021 Yova Kementchedjhieva, Mark Anderson, Anders Søgaard

We hypothesize that the temporary challenge humans face in integrating the two contradicting signals, one from the lexical semantics of the verb, one from the sentence-level semantics, would be reflected in higher error rates for models on tasks dependent on causal links.

Attribute Sentence

The ApposCorpus: A new multilingual, multi-domain dataset for factual appositive generation

no code implementations COLING 2020 Yova Kementchedjhieva, Di Lu, Joel Tetreault

News articles, image captions, product reviews and many other texts mention people and organizations whose name recognition could vary for different audiences.

Image Captioning Text Generation

PuzzLing Machines: A Challenge on Learning From Small Data

no code implementations ACL 2020 Gözde Gül Şahin, Yova Kementchedjhieva, Phillip Rust, Iryna Gurevych

To expose this problem in a new light, we introduce a challenge on learning from small data, PuzzLing Machines, which consists of Rosetta Stone puzzles from Linguistic Olympiads for high school students.

Small Data Image Classification

Comparing Unsupervised Word Translation Methods Step by Step

no code implementations NeurIPS 2019 Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard

Cross-lingual word vector space alignment is the task of mapping the vocabularies of two languages into a shared semantic space, which can be used for dictionary induction, unsupervised machine translation, and transfer learning.

Transfer Learning Translation +2
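
As background for the snippet's mention of dictionary induction: once two vocabularies share a semantic space, a translation for a source word is typically retrieved as its nearest neighbour on the target side. A toy nearest-neighbour lookup, with made-up words and random vectors, is sketched below; the paper itself compares unsupervised methods for producing the shared space, which this sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared space: source and target word vectors assumed already aligned.
src_words = ["hund", "katze", "haus"]
tgt_words = ["dog", "cat", "house", "tree"]
src_vecs = rng.normal(size=(len(src_words), 50))
tgt_vecs = rng.normal(size=(len(tgt_words), 50))

def normalize(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

# Cosine similarity between every source and target word, then induce a
# dictionary entry from each source word's nearest target neighbour.
sims = normalize(src_vecs) @ normalize(tgt_vecs).T
for i, word in enumerate(src_words):
    print(word, "->", tgt_words[int(sims[i].argmax())])
```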

Adversarial Removal of Demographic Attributes Revisited

no code implementations IJCNLP 2019 Maria Barrett, Yova Kementchedjhieva, Yanai Elazar, Desmond Elliott, Anders Søgaard

Elazar and Goldberg (2018) showed that protected attributes can be extracted from the representations of a debiased neural network for mention detection at above-chance levels, by evaluating a diagnostic classifier on a held-out subsample of the data it was trained on.
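
The diagnostic-classifier setup referenced in the snippet trains a simple probe on a model's representations to predict the protected attribute and checks whether it beats chance on held-out data. A generic version of that check, on random stand-in representations and labels, might look like the sketch below; it is not the authors' exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins: representations from a "debiased" encoder and a binary
# protected attribute (e.g. a demographic label) for each example.
reps = rng.normal(size=(2000, 128))
attr = rng.integers(0, 2, size=2000)

# Hold out a subsample for evaluation, as in a diagnostic-classifier probe.
X_tr, X_te, y_tr, y_te = train_test_split(reps, attr, test_size=0.25, random_state=0)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = probe.score(X_te, y_te)
chance = max(np.mean(y_te), 1 - np.mean(y_te))
print(f"probe accuracy {acc:.3f} vs. majority-class baseline {chance:.3f}")
```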

Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction

2 code implementations IJCNLP 2019 Yova Kementchedjhieva, Mareike Hartmann, Anders Søgaard

We study the composition and quality of the test sets for five diverse languages from this dataset, with concerning findings: (1) a quarter of the data consists of proper nouns, which can hardly be indicative of BDI performance, and (2) there are pervasive gaps in the gold-standard targets.

Cross-Lingual Word Embeddings Word Embeddings

Uncovering Probabilistic Implications in Typological Knowledge Bases

no code implementations ACL 2019 Johannes Bjerva, Yova Kementchedjhieva, Ryan Cotterell, Isabelle Augenstein

The study of linguistic typology is rooted in the implications we find between linguistic features, such as the fact that languages with object-verb word ordering tend to have post-positions.

Knowledge Base Population

A Probabilistic Generative Model of Linguistic Typology

1 code implementation NAACL 2019 Johannes Bjerva, Yova Kementchedjhieva, Ryan Cotterell, Isabelle Augenstein

In the principles-and-parameters framework, the structural features of languages depend on parameters that may be toggled on or off, with a single parameter often dictating the status of multiple features.

'Indicatements' that character language models learn English morpho-syntactic units and regularities

no code implementations WS 2018 Yova Kementchedjhieva, Adam Lopez

Character language models have access to surface morphological patterns, but it is not clear whether or how they learn abstract morphological regularities.

Feature Engineering Language Modelling +3

Copenhagen at CoNLL–SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding

no code implementations CoNLL 2018 Yova Kementchedjhieva, Johannes Bjerva, Isabelle Augenstein

This paper documents the Team Copenhagen system which placed first in the CoNLL–SIGMORPHON 2018 shared task on universal morphological reinflection, Task 2, with an overall accuracy of 49.87.

LEMMA Morphological Inflection +2

Why is unsupervised alignment of English embeddings from different algorithms so hard?

no code implementations EMNLP 2018 Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard

This paper presents a challenge to the community: Generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms.

Word Embeddings

Generalizing Procrustes Analysis for Better Bilingual Dictionary Induction

1 code implementation CoNLL 2018 Yova Kementchedjhieva, Sebastian Ruder, Ryan Cotterell, Anders Søgaard

Most recent approaches to bilingual dictionary induction find a linear alignment between the word vector spaces of two languages.
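
The linear alignment mentioned in the snippet is commonly solved with orthogonal Procrustes analysis: given a seed dictionary pairing rows of X (source embeddings) with rows of Y (target embeddings), the orthogonal map W minimising ||XW - Y|| has a closed form via SVD. A minimal sketch on random toy matrices follows; the paper's contribution is a generalisation of this procedure, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Seed dictionary: row i of X (source embeddings) translates to row i of Y.
X = rng.normal(size=(500, 300))
Y = rng.normal(size=(500, 300))

# Orthogonal Procrustes: W = U V^T from the SVD of X^T Y minimises
# ||X W - Y||_F subject to W being orthogonal.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

mapped = X @ W  # source embeddings mapped into the target space
print("alignment error:", np.linalg.norm(mapped - Y))
```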

Indicatements that character language models learn English morpho-syntactic units and regularities

no code implementations 31 Aug 2018 Yova Kementchedjhieva, Adam Lopez

Character language models have access to surface morphological patterns, but it is not clear whether or how they learn abstract morphological regularities.

Language Modelling
