no code implementations • 28 Nov 2023 • Amos Calamida, Farhad Nooralahzadeh, Morteza Rohanian, Koji Fujimoto, Mizuho Nishio, Michael Krauthammer
Furthermore, we demonstrate that one of our checkpoints exhibits a high correlation with human judgment, as assessed against the publicly available annotations of six board-certified radiologists on a set of 200 reports.
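Agreement with human judgment of this kind is typically reported as a correlation between a metric's scores and averaged expert ratings. A minimal sketch with synthetic numbers (the scores and ratings below are illustrative, not the paper's data):

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic example: one metric score and one averaged radiologist
# rating per report (purely illustrative values).
metric_scores = [0.91, 0.40, 0.77, 0.15]
human_ratings = [4.5, 2.0, 4.0, 1.0]
r = pearson(metric_scores, human_ratings)
```

A high `r` (close to 1.0) indicates the automatic metric ranks reports similarly to the human annotators.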
no code implementations • 8 May 2023 • Sanghwan Kim, Farhad Nooralahzadeh, Morteza Rohanian, Koji Fujimoto, Mizuho Nishio, Ryo Sakamoto, Fabio Rinaldi, Michael Krauthammer
To tackle this issue, we propose a novel approach that leverages a rule-based labeler to extract comparison prior information from radiology reports.
no code implementations • 8 Feb 2023 • Aron N. Horvath, Matteo Berchier, Farhad Nooralahzadeh, Ahmed Allam, Michael Krauthammer
Methods: We present an extensive evaluation of the impact of different federation and differential privacy techniques when training models on the open-source MIMIC-III dataset.
1 code implementation • 7 Sep 2022 • Farhad Nooralahzadeh, Rico Sennrich
While multilingual vision-language pretrained models offer several benefits, recent benchmarks across various tasks and languages show poor cross-lingual generalisation when such models are applied to non-English data, with a large gap between (supervised) English performance and (zero-shot) cross-lingual transfer.
1 code implementation • Findings (EMNLP) 2021 • Farhad Nooralahzadeh, Nicolas Perez Gonzalez, Thomas Frauenfelder, Koji Fujimoto, Michael Krauthammer
Inspired by Curriculum Learning, we propose a consecutive (i.e., image-to-text-to-text) generation framework where we divide the problem of radiology report generation into two steps.
no code implementations • 9 Nov 2020 • Farhad Nooralahzadeh
Real-world applications of natural language processing (NLP) are challenging.
1 code implementation • EMNLP 2020 • Farhad Nooralahzadeh, Giannis Bekoulis, Johannes Bjerva, Isabelle Augenstein
We show that this challenging setup can be approached using meta-learning, where, in addition to training a source language model, another model learns to select which training instances are the most beneficial to the first.
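The core idea — a second model scoring source-language instances so only the most useful ones update the task model — can be reduced to a tiny sketch. Everything here is illustrative: the paper meta-learns the scorer, whereas this toy uses a hand-written stand-in:

```python
# Toy illustration of instance selection for cross-lingual transfer.
# A "selector" scores each source-language example; only the top-k
# are used to update the task model. All names are illustrative.

def select_instances(examples, scorer, k):
    """Return the k examples the scorer judges most useful."""
    ranked = sorted(examples, key=scorer, reverse=True)
    return ranked[:k]

# Stand-in scorer: prefer shorter sentences. This is purely
# illustrative; the paper learns the selection with a second,
# meta-trained model rather than a fixed heuristic.
examples = ["a short one", "a considerably longer training sentence", "tiny"]
chosen = select_instances(examples, scorer=lambda s: -len(s), k=2)
```

In the meta-learning setup, the scorer itself is trained so that the instances it selects maximally improve the task model on the target language.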
no code implementations • WS 2019 • Farhad Nooralahzadeh, Jan Tore Lønning, Lilja Øvrelid
The outcome of distant supervision for NER, however, is often noisy.
no code implementations • WS 2018 • Farhad Nooralahzadeh, Lilja Øvrelid
The experiments show that the pipeline using simple cosine similarity over TF-IDF vectors for sentence selection, together with the DA model as the labelling model, achieves the best results on the development set (evidence F1: 32.17, label accuracy: 59.61, FEVER score: 0.3778).
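TF-IDF plus cosine similarity for sentence selection can be sketched in a few lines of pure Python. The claim and candidate sentences below are made-up stand-ins, not FEVER data:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Simple smoothed TF-IDF vectors for tokenised documents."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(d).items()} for d in docs]

def cosine(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Illustrative claim and candidate evidence sentences.
claim = "the film was directed by someone".split()
sentences = [
    "the film was directed by a famous director".split(),
    "apples are a kind of fruit".split(),
]
vecs = tfidf_vectors([claim] + sentences)
scores = [cosine(vecs[0], v) for v in vecs[1:]]
best = max(range(len(scores)), key=scores.__getitem__)
```

The highest-scoring sentences are then passed to the labelling model (here, the DA model) to decide whether they support or refute the claim.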
no code implementations • WS 2018 • Farhad Nooralahzadeh, Lilja Øvrelid
We investigate the use of different syntactic dependency representations in a neural relation classification task and compare the CoNLL, Stanford Basic and Universal Dependencies schemes.
no code implementations • SEMEVAL 2018 • Farhad Nooralahzadeh, Lilja Øvrelid, Jan Tore Lønning
This article presents the SIRIUS-LTG-UiO system for the SemEval 2018 Task 7 on Semantic Relation Extraction and Classification in Scientific Papers.