1 code implementation • EACL (WANLP) 2021 • Haitham Seelawi, Ibraheem Tuffaha, Mahmoud Gzawi, Wael Farhan, Bashar Talafha, Riham Badawi, Zyad Sober, Oday Al-Dweik, Abed Alhakim Freihat, Hussein Al-Natsheh
The emergence of Multi-task learning (MTL) models in recent years has helped push the state of the art in Natural Language Understanding (NLU).
no code implementations • 24 Oct 2023 • Mustafa Jarrar, Muhammad Abdul-Mageed, Mohammed Khalilia, Bashar Talafha, AbdelRahim Elmadany, Nagham Hamad, Alaa' Omar
The winning teams achieved F1 scores of 91.96 and 93.73 in FlatNER and NestedNER, respectively.
no code implementations • 17 Oct 2023 • Abdul Waheed, Bashar Talafha, Peter Sullivan, AbdelRahim Elmadany, Muhammad Abdul-Mageed
We train a wide range of models such as HuBERT (DID), Whisper, and XLS-R (ASR) in a supervised setting for Arabic DID and ASR tasks.
no code implementations • 5 Jun 2023 • Bashar Talafha, Abdul Waheed, Muhammad Abdul-Mageed
Whisper, the recently developed multilingual weakly supervised model, is reported to perform well on multiple speech recognition benchmarks in both monolingual and multilingual settings.
no code implementations • 11 Feb 2022 • Muhy Eddin Za'ter, Bashar Talafha
The results showed that the use of multi-task learning and pre-trained word embeddings noticeably enhanced the quality of image captioning; however, the presented results show that Arabic captioning still lags behind English.
no code implementations • 3 Aug 2021 • Bashar Talafha, Muhy Eddin Za'ter, Samer Suleiman, Mahmoud Al-Ayyoub, Mohammed N. Al-Kabi
The task of predicting sarcasm in text is known as automatic sarcasm detection.
1 code implementation • COLING (WANLP) 2020 • Bashar Talafha, Mohammad Ali, Muhy Eddin Za'ter, Haitham Seelawi, Ibraheem Tuffaha, Mostafa Samir, Wael Farhan, Hussein T. Al-Natsheh
Our winning solution itself came in the form of an ensemble of different training iterations of our pre-trained BERT model, which achieved a micro-averaged F1-score of 26.78% on the subtask at hand.
no code implementations • WS 2019 • Ahmad Ragab, Haitham Seelawi, Mostafa Samir, Abdelrahman Mattar, Hesham Al-Bataineh, Mohammad Zaghloul, Ahmad Mustafa, Bashar Talafha, Abed Alhakim Freihat, Hussein Al-Natsheh
In this paper, we discuss several models we used to classify 25 city-level Arabic dialects, in addition to Modern Standard Arabic (MSA), as part of the MADAR shared task (sub-task 1).
no code implementations • WS 2019 • Bashar Talafha, Wael Farhan, Ahmed Altakrouri, Hussein Al-Natsheh
Arabic dialect identification is an inherently complex problem, as Arabic dialect taxonomy is convoluted and aims to dissect a continuous space rather than a discrete one.
no code implementations • WS 2019 • Bashar Talafha, Ali Fadel, Mahmoud Al-Ayyoub, Yaser Jararweh, Mohammad AL-Smadi, Patrick Juola
In this paper, we describe our team's effort on the MADAR Shared Task on Arabic Fine-Grained Dialect Identification.