Search Results for author: Mohamed Afify

Found 8 papers, 1 paper with code

Language Tokens: Simply Improving Zero-Shot Multi-Aligned Translation in Encoder-Decoder Models

no code implementations • AMTA 2022 • Muhammad N. ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan

In a WMT-based setting, we see 1.3 and 0.4 BLEU points of improvement in the zero-shot setting and when using direct data for training, respectively, while from-English performance improves by 4.17 and 0.85 BLEU points.

Translation

How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

1 code implementation • 18 Feb 2023 • Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, Hany Hassan Awadalla

In this paper, we present a comprehensive evaluation of GPT models for machine translation, covering the quality of different GPT models in comparison with state-of-the-art research and commercial systems, the effect of prompting strategies, robustness to domain shifts, and document-level translation.

Machine Translation • Text Generation +1

Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation

no code implementations • 11 Aug 2022 • Muhammad ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan Awadalla

In a WMT evaluation campaign, from-English performance improves by 4.17 and 2.87 BLEU points in the zero-shot setting and when direct data is available for training, respectively.
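The title describes a frustratingly simple approach: marking each training example with explicit language tokens. A minimal sketch of that idea, where the exact token format (`<xx>` for source, `<2xx>` for target) is an illustrative assumption, not necessarily the paper's scheme:

```python
def add_language_tokens(src_text: str, src_lang: str, tgt_lang: str) -> str:
    """Prepend source- and target-language tokens to a translation input.

    Token spelling here (<en>, <2fr>, ...) is a hypothetical convention;
    multilingual NMT systems differ in where and how they place such tags.
    """
    return f"<{src_lang}> <2{tgt_lang}> {src_text}"

# Example: tag an English sentence for translation into French.
tagged = add_language_tokens("Good morning.", "en", "fr")
```

The tagged string (`<en> <2fr> Good morning.`) would then be tokenized and fed to the encoder as usual, letting one model serve many language directions, including zero-shot pairs never seen together in training.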

Translation

Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions

no code implementations • WMT (EMNLP) 2020 • Muhammad N. ElNokrashy, Amr Hendy, Mohamed Abdelghaffar, Mohamed Afify, Ahmed Tawfik, Hany Hassan Awadalla

For the mBART fine-tuning setup provided by the organizers, our method shows 7% and 5% relative improvement over the baseline in sacreBLEU score on the test set, for Pashto and Khmer respectively.
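Since the gains are quoted as relative improvements, it is worth being explicit about the arithmetic: a 7% relative gain over a baseline of, say, 20.0 sacreBLEU means 21.4, not 27.0. A small helper (scores here are made-up illustrations, not the paper's numbers):

```python
def relative_improvement(baseline: float, system: float) -> float:
    """Relative gain of `system` over `baseline`, in percent.

    E.g. sacreBLEU 20.0 -> 21.4 is a 7% relative improvement,
    even though the absolute gain is only 1.4 points.
    """
    return (system - baseline) / baseline * 100.0
```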

Sentence

Text-Independent Speaker Verification Based on Deep Neural Networks and Segmental Dynamic Time Warping

no code implementations • 26 Jun 2018 • Mohamed Adel, Mohamed Afify, Akram Gaballah

The d-vectors, generated from a feed-forward deep neural network trained to distinguish between speakers, are used as features to perform alignment and hence to calculate the overall distance between the enrolment and test utterances. We present results on the NIST 2008 speaker-verification data set, where the proposed method outperforms both the conventional i-vector baseline with PLDA scoring and the d-vector approach with local distances based on cosine and PLDA scores.
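The core scoring step described above, aligning two sequences of frame-level embeddings with dynamic time warping and accumulating a local distance, can be sketched as follows. This is a generic DTW with a cosine local distance, not the paper's exact segmental variant:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    # Local distance between two embedding vectors: 1 - cosine similarity.
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dtw_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """DTW distance between two sequences of frame-level embeddings
    (e.g. d-vectors), with shapes (n, dim) and (m, dim)."""
    n, m = len(X), len(Y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = cosine_distance(X[i - 1], Y[j - 1])
            # Standard DTW recursion: insertion, deletion, or match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Normalise by path length so scores are comparable across utterances.
    return float(D[n, m]) / (n + m)
```

In a verification setting, the enrolment and test utterances would each yield a d-vector sequence; accepting or rejecting the trial then reduces to thresholding this distance.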

Dynamic Time Warping • Text-Independent Speaker Verification
