no code implementations • EACL (AdaptNLP) 2021 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge.
1 code implementation • 30 Jun 2023 • Mehrad Moradshahi, Tianhao Shen, Kalika Bali, Monojit Choudhury, Gaël de Chalendar, Anmol Goel, Sungkyun Kim, Prashant Kodali, Ponnurangam Kumaraguru, Nasredine Semmar, Sina J. Semnani, Jiwon Seo, Vivek Seshadri, Manish Shrivastava, Michael Sun, Aditya Yadavalli, Chaobin You, Deyi Xiong, Monica S. Lam
We create a new multilingual benchmark, X-RiSAWOZ, by translating the Chinese RiSAWOZ into four languages (English, French, Hindi, and Korean) and into code-mixed English-Hindi.
no code implementations • 9 Jun 2021 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
In the standard fine-tuning scheme of TL, a model is initially pre-trained on a source domain and subsequently fine-tuned on a target domain and, therefore, source and target domains are trained using the same architecture.
no code implementations • WS 2020 • Sara Meftah, Nasredine Semmar, Mohamed-Ayoub Tahiri, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
Two prevalent transfer learning approaches are used in recent works to improve neural network performance for domains with small amounts of annotated data: Multi-task learning, which involves training the task of interest with related auxiliary tasks to exploit their underlying similarities, and Mono-task fine-tuning, where the weights of the model are initialized with the pre-trained weights of a large-scale labeled source domain and then fine-tuned with labeled data of the target domain (the domain of interest).
no code implementations • JEPTALNRECITAL 2019 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat
Transfer learning refers to the capacity of a neural model trained on one task to generalize sufficiently and correctly to produce relevant results on another, close but different, task.
no code implementations • NAACL 2019 • Sara Meftah, Youssef Tamaazousti, Nasredine Semmar, Hassane Essafi, Fatiha Sadat
Fine-tuning neural networks is widely used to transfer valuable knowledge from high-resource to low-resource domains.
Ranked #1 on Part-Of-Speech Tagging on Social media
no code implementations • COLING 2018 • Sara Meftah, Nasredine Semmar, Fatiha Sadat, Stephan Raaijmakers
In this paper, we describe a morpho-syntactic tagger of tweets, an important component of the CEA List DeepLIMA tool which is a multilingual text analysis platform based on deep learning.
no code implementations • WS 2018 • Wafia Adouane, Simon Dobnik, Jean-Philippe Bernardy, Nasredine Semmar
This paper examines the effect of including background knowledge, in the form of a character-based pre-trained neural language model (LM), and of data bootstrapping to overcome the problem of unbalanced, limited resources.
no code implementations • RANLP 2017 • Nasredine Semmar, Mariama Laib
We describe in this paper a hybrid approach to automatically build bilingual lexicons of Multiword Expressions (MWEs) from parallel corpora.
no code implementations • WS 2016 • Wafia Adouane, Nasredine Semmar, Richard Johansson
Standard ALI methods require datasets for training and use character/word-based n-gram models.
no code implementations • WS 2016 • Wafia Adouane, Nasredine Semmar, Richard Johansson
In sub-task 2, which deals with Arabic dialect identification, the system achieved its best performance using character-based n-grams (49.67% accuracy), ranking fourth in the closed track (the best result being 51.16%), and an accuracy of 53.18%, ranking first in the open track.
no code implementations • WS 2016 • Wafia Adouane, Nasredine Semmar, Richard Johansson, Victoria Bobicev
Automatic Language Identification (ALI) is the detection of the natural language of an input text by a machine.
no code implementations • COLING 2016 • Othman Zennaki, Nasredine Semmar, Laurent Besacier
This work focuses on the rapid development of linguistic annotation tools for resource-poor languages.
no code implementations • JEPTALNRECITAL 2016 • Nasredine Semmar, Othman Zennaki, Meriama Laib
Statistical machine translation, although effective, is currently limited because it requires large volumes of parallel corpora, which do not exist for all language pairs and all specialized domains, and whose production is slow and costly.
no code implementations • JEPTALNRECITAL 2016 • Othman Zennaki, Nasredine Semmar, Laurent Besacier
In a previous contribution, we proposed a method for automatically building a morpho-syntactic analyzer via cross-lingual projection of linguistic annotations from parallel corpora (a method based on recurrent neural networks).
no code implementations • JEPTALNRECITAL 2015 • Othman Zennaki, Nasredine Semmar, Laurent Besacier
The construction of linguistic analysis tools for low-resource languages is limited, among other factors, by the lack of annotated corpora. In this article, we propose a method for automatically building analysis tools via cross-lingual projection of linguistic annotations using parallel corpora.
no code implementations • JEPTALNRECITAL 2014 • Nasredine Semmar, Houda Saadane
no code implementations • JEPTALNRECITAL 2012 • Houda Saadane, Nasredine Semmar
no code implementations • LREC 2012 • Dhouha Bouamor, Nasredine Semmar, Pierre Zweigenbaum
Multiword Expressions (MWEs) represent a key issue for numerous applications in Natural Language Processing (NLP), especially for Machine Translation (MT).