Search Results for author: Mareike Hartmann

Found 16 papers, 4 papers with code

A Multilingual Benchmark for Probing Negation-Awareness with Minimal Pairs

1 code implementation CoNLL (EMNLP) 2021 Mareike Hartmann, Miryam de Lhoneux, Daniel Hershcovich, Yova Kementchedjhieva, Lukas Nielsen, Chen Qiu, Anders Søgaard

Negation is one of the most fundamental concepts in human cognition and language, and several natural language inference (NLI) probes have been designed to investigate pretrained language models’ ability to detect and reason with negation.
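As a concrete illustration of this kind of probe (not the paper's benchmark), the sketch below scores a single negation minimal pair with an off-the-shelf English NLI checkpoint; the model name, example sentences, and label order are assumptions for illustration only.

```python
# Toy negation probe with a publicly available MNLI checkpoint (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large-mnli"  # assumption: any English NLI model could stand in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

premise = "The treatment improved the patient's condition."
minimal_pair = ["The treatment helped.", "The treatment did not help."]

for hypothesis in minimal_pair:
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()
    # Label order for this particular checkpoint: contradiction, neutral, entailment.
    scores = dict(zip(["contradiction", "neutral", "entailment"],
                      [round(p.item(), 3) for p in probs]))
    print(hypothesis, scores)

# A negation-aware model should flip from entailment to contradiction across the pair.
```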

Natural Language Inference, Negation

Multilingual Negation Scope Resolution for Clinical Text

no code implementations EACL (Louhi) 2021 Mareike Hartmann, Anders Søgaard

Negation scope resolution is key to high-quality information extraction from clinical texts, but so far, efforts to make encoders used for information extraction negation-aware have been limited to English.

Multi-Task Learning, Negation +1

ADaPT: As-Needed Decomposition and Planning with Language Models

no code implementations 8 Nov 2023 Archiki Prasad, Alexander Koller, Mareike Hartmann, Peter Clark, Ashish Sabharwal, Mohit Bansal, Tushar Khot

Large Language Models (LLMs) are increasingly being used for interactive decision-making tasks requiring planning and adapting to the environment.

Decision Making

Putting Humans in the Image Captioning Loop

no code implementations 6 Jun 2023 Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag

Image Captioning (IC) models can benefit greatly from human feedback in the training process, especially in cases where data is limited.

Image Captioning

Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory

no code implementations 6 Jun 2023 Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag

Interactive machine learning (IML) is a beneficial learning paradigm in cases of limited data availability, as human feedback is incrementally integrated into the training process.

Continual Learning, Data Augmentation +1

Cross-lingual German Biomedical Information Extraction: from Zero-shot to Human-in-the-Loop

no code implementations 24 Jan 2023 Siting Liang, Mareike Hartmann, Daniel Sonntag

This paper presents our project proposal for extracting biomedical information from German clinical narratives with limited amounts of annotations.

Active Learning, Transfer Learning

A survey on improving NLP models with human explanations

no code implementations LNLS (ACL) 2022 Mareike Hartmann, Daniel Sonntag

Training a model with access to human explanations can improve data efficiency and model performance on in- and out-of-domain data.

mDAPT: Multilingual Domain Adaptive Pretraining in a Single Model

1 code implementation Findings (EMNLP) 2021 Rasmus Kær Jørgensen, Mareike Hartmann, Xiang Dai, Desmond Elliott

Domain adaptive pretraining, i.e. the continued unsupervised pretraining of a language model on domain-specific text, improves the modelling of text for downstream tasks within the domain.
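The recipe itself is simple to sketch: the snippet below shows generic continued masked-language-model pretraining on an in-domain corpus. The checkpoint, corpus file name, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of domain-adaptive pretraining: continue MLM training on in-domain text.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "bert-base-multilingual-cased"  # assumption: any multilingual MLM checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# Hypothetical in-domain corpus, one document per line.
corpus = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
corpus = corpus.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mdapt-checkpoint", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=corpus,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()  # the adapted checkpoint is then fine-tuned on downstream in-domain tasks
```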

Language Modelling, Named Entity Recognition +4

Comparing Unsupervised Word Translation Methods Step by Step

no code implementations NeurIPS 2019 Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard

Cross-lingual word vector space alignment is the task of mapping the vocabularies of two languages into a shared semantic space, which can be used for dictionary induction, unsupervised machine translation, and transfer learning.
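Alignment is commonly formulated as learning an orthogonal map between the two embedding spaces; unsupervised methods differ mainly in how they obtain it without a seed dictionary. The toy sketch below fits such a map with the classical Procrustes solution on synthetic data; it illustrates the mapping step only and is not the paper's method.

```python
# Toy orthogonal Procrustes alignment of two embedding spaces (synthetic data).
import numpy as np

def procrustes(X_src, Y_tgt):
    """Return the orthogonal map W minimising ||X_src @ W - Y_tgt||_F."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 300))             # hypothetical source-language vectors
true_rotation, _ = np.linalg.qr(rng.normal(size=(300, 300)))
tgt = src @ true_rotation                      # target space = rotated source space

W = procrustes(src, tgt)
print(np.allclose(src @ W, tgt, atol=1e-6))    # True: the rotation is recovered exactly
```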

Transfer Learning, Translation +2

Mapping (Dis-)Information Flow about the MH17 Plane Crash

1 code implementation WS 2019 Mareike Hartmann, Yevgeniy Golovchenko, Isabelle Augenstein

In this work, we examine to what extent text classifiers can be used to label data for subsequent content analysis, in particular we focus on predicting pro-Russian and pro-Ukrainian Twitter content related to the MH17 plane crash.

Lost in Evaluation: Misleading Benchmarks for Bilingual Dictionary Induction

2 code implementations IJCNLP 2019 Yova Kementchedjhieva, Mareike Hartmann, Anders Søgaard

We study the composition and quality of the test sets for five diverse languages from this dataset, with concerning findings: (1) a quarter of the data consists of proper nouns, which can hardly be indicative of BDI performance, and (2) there are pervasive gaps in the gold-standard targets.

Cross-Lingual Word Embeddings, Word Embeddings

Issue Framing in Online Discussion Fora

no code implementations NAACL 2019 Mareike Hartmann, Tallulah Jansen, Isabelle Augenstein, Anders Søgaard

In online discussion fora, speakers often make arguments for or against something, say birth control, by highlighting certain aspects of the topic.

Why is unsupervised alignment of English embeddings from different algorithms so hard?

no code implementations EMNLP 2018 Mareike Hartmann, Yova Kementchedjhieva, Anders Søgaard

This paper presents a challenge to the community: Generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for embeddings induced using two different algorithms.

Word Embeddings
