1 code implementation • NAACL (ClinicalNLP) 2022 • Henning Schäfer, Ahmad Idrissi-Yaghir, Peter Horn, Christoph Friedrich
In this work, cross-lingual span prediction based on contextualized word embedding models is used together with neural machine translation (NMT) to transfer state-of-the-art natural language processing (NLP) models to a clinical corpus in a low-resource language.
no code implementations • 8 Apr 2024 • Ahmad Idrissi-Yaghir, Amin Dada, Henning Schäfer, Kamyar Arzideh, Giulia Baldini, Jan Trienes, Max Hasin, Jeanette Bewersdorff, Cynthia S. Schmidt, Marie Bauer, Kaleb E. Smith, Jiang Bian, Yonghui Wu, Jörg Schlötterer, Torsten Zesch, Peter A. Horn, Christin Seifert, Felix Nensa, Jens Kleesiek, Christoph M. Friedrich
Recent advances in natural language processing (NLP) can be largely attributed to the advent of pre-trained language models such as BERT and RoBERTa.
no code implementations • 11 Oct 2023 • Amin Dada, Aokun Chen, Cheng Peng, Kaleb E Smith, Ahmad Idrissi-Yaghir, Constantin Marc Seibold, Jianning Li, Lars Heiliger, Xi Yang, Christoph M. Friedrich, Daniel Truhn, Jan Egger, Jiang Bian, Jens Kleesiek, Yonghui Wu
Traditionally, large language models have been either trained on general web crawls or domain-specific data.
no code implementations • 12 Dec 2022 • Ahmad Idrissi-Yaghir, Henning Schäfer, Nadja Bauer, Christoph M. Friedrich
For the subtask Relevance Classification, the best models achieve a micro-averaged F1-score of 96.1% on the first test set and 95.9% on the second, and scores of 85.1% and 85.3% for the subtask Polarity Classification.
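The micro-averaged F1-score reported above pools true positives, false positives, and false negatives across all classes before computing precision and recall once. A minimal sketch of that computation (the labels and data below are illustrative, not from the paper):

```python
def micro_f1(y_true, y_pred, classes):
    # Pool TP/FP/FN over all classes, then compute a single F1-score.
    tp = fp = fn = 0
    for c in classes:
        tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical relevance labels, purely for illustration
y_true = ["relevant", "relevant", "irrelevant", "relevant"]
y_pred = ["relevant", "irrelevant", "irrelevant", "relevant"]
print(micro_f1(y_true, y_pred, {"relevant", "irrelevant"}))  # 0.75
```

Note that for single-label multiclass classification, micro-averaged F1 coincides with accuracy; it differs from macro-averaged F1, which averages per-class F1-scores and so weights rare classes more heavily.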