1 code implementation • CoNLL (EMNLP) 2021 • Mareike Hartmann, Miryam de Lhoneux, Daniel Hershcovich, Yova Kementchedjhieva, Lukas Nielsen, Chen Qiu, Anders Søgaard
Negation is one of the most fundamental concepts in human cognition and language, and several natural language inference (NLI) probes have been designed to investigate pretrained language models’ ability to detect and reason with negation.
no code implementations • EACL (Louhi) 2021 • Mareike Hartmann, Anders Søgaard
Negation scope resolution is key to high-quality information extraction from clinical texts, but so far, efforts to make encoders used for information extraction negation-aware have been limited to English.
no code implementations • 8 Nov 2023 • Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot
Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment.
no code implementations • 6 Jun 2023 • Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag
Image Captioning (IC) models can benefit greatly from human feedback in the training process, especially in cases where data is limited.
no code implementations • 6 Jun 2023 • Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag
Interactive machine learning (IML) is a beneficial learning paradigm in cases of limited data availability, as human feedback is incrementally integrated into the training process.
no code implementations • 24 Jan 2023 • Siting Liang, Mareike Hartmann, Daniel Sonntag
This paper presents our project proposal for extracting biomedical information from German clinical narratives with limited amounts of annotations.
no code implementations • LNLS (ACL) 2022 • Mareike Hartmann, Daniel Sonntag
Training a model with access to human explanations can improve data efficiency and model performance on in- and out-of-domain data.
no code implementations • 28 Feb 2022 • Mareike Hartmann, Aliki Anagnostopoulou, Daniel Sonntag
We propose an approach for interactive learning for an image captioning model.
1 code implementation • Findings (EMNLP) 2021 • Rasmus Kær Jørgensen, Mareike Hartmann, Xiang Dai, Desmond Elliott
Domain adaptive pretraining, i.e., the continued unsupervised pretraining of a language model on domain-specific text, improves the modelling of text for downstream tasks within the domain.
no code implementations • NeurIPS 2019 • Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard
Cross-lingual word vector space alignment is the task of mapping the vocabularies of two languages into a shared semantic space, which can be used for dictionary induction, unsupervised machine translation, and transfer learning.
1 code implementation • WS 2019 • Mareike Hartmann, Yevgeniy Golovchenko, Isabelle Augenstein
In this work, we examine to what extent text classifiers can be used to label data for subsequent content analysis; in particular, we focus on predicting pro-Russian and pro-Ukrainian Twitter content related to the MH17 plane crash.
2 code implementations • IJCNLP 2019 • Yova Kementchedjhieva, Mareike Hartmann, Anders Søgaard
We study the composition and quality of the test sets for five diverse languages from this dataset, with concerning findings: (1) a quarter of the data consists of proper nouns, which can hardly be indicative of BDI performance, and (2) there are pervasive gaps in the gold-standard targets.
no code implementations • NAACL 2019 • Mareike Hartmann, Tallulah Jansen, Isabelle Augenstein, Anders Søgaard
In online discussion fora, speakers often make arguments for or against something, say birth control, by highlighting certain aspects of the topic.
no code implementations • EMNLP 2018 • Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard
This paper presents a challenge to the community: Generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms.
no code implementations • WS 2018 • Mareike Hartmann, Anders Soegaard
Cross-lingual representation learning is an important step in making NLP scale to all the world's languages.