no code implementations • 8 Jul 2022 • Roland Roller, Laura Seiffe, Ammer Ayach, Sebastian Möller, Oliver Marten, Michael Mikhailov, Christoph Alt, Danilo Schmidt, Fabian Halleck, Marcel Naik, Wiebke Duettmann, Klemens Budde
However, in the context of clinical text processing, accessible datasets are scarce -- and so are existing tools.
1 code implementation • nlppower (ACL) 2022 • David Harbecke, Yuxuan Chen, Leonhard Hennig, Christoph Alt
Relation classification models are conventionally evaluated using only a single measure, e.g., micro-F1, macro-F1 or AUC.
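The distinction between these single measures matters because they can rank models differently. Below is a minimal, illustrative sketch (not the paper's evaluation code) of how micro-F1 and macro-F1 diverge: micro-F1 pools true/false positives over all classes, while macro-F1 averages per-class F1, weighting rare classes equally. Note that real relation-extraction benchmarks such as TACRED typically exclude the `no_relation` class from these counts.

```python
from collections import Counter

def f1(tp, fp, fn):
    """F1 from raw counts, guarding against zero denominators."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def micro_macro_f1(gold, pred, labels):
    """Return (micro-F1, macro-F1) for two parallel label sequences."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted p, but it was wrong
            fn[g] += 1  # missed the gold label g
    # micro: pool counts over all classes, then one F1
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    # macro: one F1 per class, then an unweighted mean
    macro = sum(f1(tp[l], fp[l], fn[l]) for l in labels) / len(labels)
    return micro, macro

# On a class-imbalanced toy sample the two measures disagree,
# because macro-F1 gives the rare class 'b' equal weight.
micro, macro = micro_macro_f1(['a', 'a', 'a', 'b'],
                              ['a', 'a', 'b', 'b'],
                              labels=['a', 'b'])
```

Here micro-F1 is 0.75 while macro-F1 is lower (about 0.733), since the rare class 'b' is scored worse and macro averaging does not down-weight it.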
1 code implementation • RepL4NLP (ACL) 2022 • Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig
Pre-trained language models (PLM) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data.
Tasks: Contrastive Learning, Low Resource Named Entity Recognition, +4
1 code implementation • SEMEVAL 2020 • Marc Hübner, Christoph Alt, Robert Schwarzenberg, Leonhard Hennig
Definition Extraction systems are a valuable knowledge source for both humans and algorithms.
no code implementations • WS 2020 • Hanchu Zhang, Leonhard Hennig, Christoph Alt, Changjian Hu, Yao Meng, Chao Wang
Named Entity Recognition (NER) in domains like e-commerce is an understudied problem due to the lack of annotated datasets.
1 code implementation • ACL 2020 • Christoph Alt, Aleksandra Gabryszak, Leonhard Hennig
TACRED (Zhang et al., 2017) is one of the largest, most widely used crowdsourced datasets in Relation Extraction (RE).
1 code implementation • ACL 2020 • David Harbecke, Christoph Alt
Recently, state-of-the-art NLP models have gained increasing syntactic and semantic understanding of language, making explanation methods crucial for understanding their decisions.
2 code implementations • ACL 2020 • Christoph Alt, Aleksandra Gabryszak, Leonhard Hennig
Despite the recent progress, little is known about the features captured by state-of-the-art neural relation extraction (RE) models.
1 code implementation • WS 2019 • Robert Schwarzenberg, Marc Hübner, David Harbecke, Christoph Alt, Leonhard Hennig
Representations in the hidden layers of Deep Neural Networks (DNN) are often hard to interpret since it is difficult to project them into an interpretable domain.
1 code implementation • ACL 2019 • Christoph Alt, Marc Hübner, Leonhard Hennig
Distantly supervised relation extraction is widely used to extract relational facts from text, but suffers from noisy labels.
1 code implementation • Automated Knowledge Base Construction Conference 2019 • Christoph Alt, Marc Hübner, Leonhard Hennig
Unlike previous relation extraction models, TRE uses pre-trained deep language representations instead of explicit linguistic features to inform relation classification, combining them with the self-attentive Transformer architecture to effectively model long-range dependencies between entity mentions.
Ranked #24 on Relation Extraction on SemEval-2010 Task-8
1 code implementation • WS 2018 • David Harbecke, Robert Schwarzenberg, Christoph Alt
PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks.