Search Results for author: Nasredine Semmar

Found 32 papers, 1 paper with code

On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets

no code implementations · EACL (AdaptNLP) 2021 · Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP) and has outperformed the standard supervised learning paradigm, as it benefits from pre-learned knowledge.

Chunking · Domain Adaptation · +5

Neural Supervised Domain Adaptation by Augmenting Pre-trained Models with Random Units

no code implementations · 9 Jun 2021 · Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

In the standard fine-tuning scheme of TL, a model is initially pre-trained on a source domain and subsequently fine-tuned on a target domain; source and target domains are therefore trained with the same architecture.

Chunking · Domain Adaptation · +5
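To make the scheme described above concrete, here is a minimal PyTorch sketch of sequential transfer: pre-train a tagger on the source domain, then fine-tune the same architecture on the target domain. The model, layer sizes, random stand-in batches, and learning rates are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Toy stand-in batches of (token_ids, gold_tags); real data would come
# from annotated news (source) and tweet (target) corpora.
def toy_batches(n_batches=8, vocab=5000, n_tags=17):
    return [(torch.randint(0, vocab, (4, 12)), torch.randint(0, n_tags, (4, 12)))
            for _ in range(n_batches)]

class Tagger(nn.Module):
    """A toy BiLSTM token classifier; all sizes are illustrative."""
    def __init__(self, vocab=5000, emb=64, hidden=128, n_tags=17):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_tags)

    def forward(self, token_ids):
        states, _ = self.lstm(self.emb(token_ids))
        return self.head(states)  # per-token tag logits

def train(model, batches, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens, tags in batches:
            opt.zero_grad()
            logits = model(tokens)
            loss_fn(logits.view(-1, logits.size(-1)), tags.view(-1)).backward()
            opt.step()

model = Tagger()
# Stage 1: pre-train on the (large) source domain, e.g. news.
train(model, toy_batches(), epochs=3, lr=1e-3)
# Stage 2: fine-tune the *same* architecture on the (small) target
# domain, e.g. tweets, typically with a lower learning rate.
train(model, toy_batches(n_batches=2), epochs=3, lr=1e-4)
```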

Multi-Task Supervised Pretraining for Neural Domain Adaptation

no code implementations · WS 2020 · Sara Meftah, Nasredine Semmar, Mohamed-Ayoub Tahiri, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Two prevalent transfer learning approaches are used in recent works to improve neural network performance for domains with small amounts of annotated data: multi-task learning, which trains the task of interest jointly with related auxiliary tasks to exploit their underlying similarities, and mono-task fine-tuning, where the weights of the model are initialized with the pretrained weights of a large-scale labeled source domain and then fine-tuned on labeled data of the target domain (the domain of interest).

Domain Adaptation · Multi-Task Learning
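The multi-task pre-training half of that contrast can be sketched as follows: a shared encoder is trained through several task-specific heads, and its weights can then initialize a mono-task model for target-domain fine-tuning. The toy sizes, task set, and random batches below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Toy BiLSTM encoder shared across all tasks; sizes are illustrative."""
    def __init__(self, vocab=5000, emb=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        states, _ = self.lstm(self.emb(token_ids))
        return states  # (batch, seq, 2 * hidden)

encoder = SharedEncoder()
# One lightweight head per auxiliary task (tag-set sizes are placeholders).
heads = nn.ModuleDict({"pos": nn.Linear(256, 17), "chunk": nn.Linear(256, 9)})
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def multitask_step(task, tokens, tags):
    # Every step updates the shared encoder through one task's head,
    # so the encoder absorbs regularities common to all tasks.
    opt.zero_grad()
    logits = heads[task](encoder(tokens))
    loss_fn(logits.view(-1, logits.size(-1)), tags.view(-1)).backward()
    opt.step()

# Alternate batches across tasks during multi-task pre-training.
for task, n_tags in [("pos", 17), ("chunk", 9), ("pos", 17)]:
    multitask_step(task, torch.randint(0, 5000, (4, 10)),
                   torch.randint(0, n_tags, (4, 10)))

# The pre-trained encoder can then initialize a mono-task model that is
# fine-tuned on the small target domain, as in the fine-tuning sketch above.
torch.save(encoder.state_dict(), "encoder_mtl.pt")
```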

Exploration de l'apprentissage par transfert pour l'analyse de textes des réseaux sociaux (Exploring neural transfer learning for social media text analysis)

no code implementations · JEP-TALN-RECITAL 2019 · Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Transfer learning refers to the ability of a neural model trained on one task to generalize well enough to produce relevant results on another, closely related but different, task.

Transfer Learning

Using Neural Transfer Learning for Morpho-syntactic Tagging of South-Slavic Languages Tweets

no code implementations · COLING 2018 · Sara Meftah, Nasredine Semmar, Fatiha Sadat, Stephan Raaijmakers

In this paper, we describe a morpho-syntactic tagger of tweets, an important component of the CEA List DeepLIMA tool which is a multilingual text analysis platform based on deep learning.

Part-Of-Speech Tagging · Transfer Learning

A Comparison of Character Neural Language Model and Bootstrapping for Language Identification in Multilingual Noisy Texts

no code implementations · WS 2018 · Wafia Adouane, Simon Dobnik, Jean-Philippe Bernardy, Nasredine Semmar

This paper examines the effect of including background knowledge in the form of a character-based pre-trained neural language model (LM), and of data bootstrapping, to overcome the problem of unbalanced, limited resources.

Language Identification · Language Modelling · +1
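A minimal self-training sketch of the bootstrapping half of that idea, using scikit-learn: train on a small labeled seed, pseudo-label an unlabeled pool, and keep only high-confidence predictions. The toy data, confidence threshold, and classifier are assumptions; the paper's character-LM component is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled seed and unlabeled pool; real data would be noisy
# multilingual text with unbalanced language coverage.
seed_texts = ["hello how are you", "bonjour comment ca va",
              "hola que tal amigo", "hi there my friend"]
seed_labels = ["en", "fr", "es", "en"]
pool = ["merci beaucoup mon ami", "thanks a lot mate", "muchas gracias a todos"]

clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(1, 4)),
                    LogisticRegression(max_iter=1000))
clf.fit(seed_texts, seed_labels)

threshold = 0.6  # confidence cut-off; an assumption, tune on held-out data
for _ in range(3):  # a few bootstrapping rounds
    if not pool:
        break
    probs = clf.predict_proba(pool)
    confident = probs.max(axis=1) >= threshold
    if not confident.any():
        break
    # Add confidently pseudo-labelled texts to the training set and retrain.
    seed_texts += [t for t, keep in zip(pool, confident) if keep]
    seed_labels += list(clf.classes_[probs.argmax(axis=1)][confident])
    pool = [t for t, keep in zip(pool, confident) if not keep]
    clf.fit(seed_texts, seed_labels)
```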

ASIREM Participation at the Discriminating Similar Languages Shared Task 2016

no code implementations · WS 2016 · Wafia Adouane, Nasredine Semmar, Richard Johansson

In sub-task 2, which deals with Arabic dialect identification, the system achieved its best performance using character-based n-grams: 49.67% accuracy, ranking fourth in the closed track (the best result being 51.16%), and 53.18% accuracy, ranking first in the open track.

Dialect Identification · Task 2
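For illustration, a minimal character n-gram dialect identifier in the spirit of the system described above. The romanized toy examples, dialect labels, and naive Bayes classifier are assumptions, not the ASIREM configuration.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Romanized toy examples standing in for Arabic dialect data; labels
# (EGY = Egyptian, GLF = Gulf, NOR = North African) are placeholders.
train_texts = ["ezayak 3amel eh", "shlonak shakhbarak", "labas 3lik ki dayer",
               "eh el akhbar ya basha", "wesh rak dir lyoum", "shino hal salfa"]
train_dialects = ["EGY", "GLF", "NOR", "EGY", "NOR", "GLF"]

# Character n-grams capture sub-word cues (clitics, spelling habits)
# that help discriminate closely related varieties.
clf = make_pipeline(CountVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
                    MultinomialNB())
clf.fit(train_texts, train_dialects)
print(clf.predict(["ezay el donia maak"]))  # likely ['EGY'] given shared n-grams
```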

Étude de l'impact d'un lexique bilingue spécialisé sur la performance d'un moteur de traduction à base d'exemples (Studying the impact of a specialized bilingual lexicon on the performance of an example-based machine translation engine)

no code implementations · JEP-TALN-RECITAL 2016 · Nasredine Semmar, Othman Zennaki, Meriama Laib

Statistical machine translation, although effective, is currently limited because it requires large volumes of parallel corpora, which do not exist for all language pairs and all specialized domains, and whose production is slow and costly.

Machine Translation

Projection Interlingue d'Étiquettes pour l'Annotation Sémantique Non Supervisée (Cross-lingual Annotation Projection for Unsupervised Semantic Tagging)

no code implementations · JEP-TALN-RECITAL 2016 · Othman Zennaki, Nasredine Semmar, Laurent Besacier

In a previous contribution, we proposed a method for automatically building a morpho-syntactic analyzer via cross-lingual projection of linguistic annotations from parallel corpora (a method based on recurrent neural networks).

Utilisation des réseaux de neurones récurrents pour la projection interlingue d'étiquettes morpho-syntaxiques à partir d'un corpus parallèle (Using recurrent neural networks for cross-lingual projection of morpho-syntactic tags from a parallel corpus)

no code implementations · JEP-TALN-RECITAL 2015 · Othman Zennaki, Nasredine Semmar, Laurent Besacier

Building linguistic analysis tools for under-resourced languages is limited, among other things, by the lack of annotated corpora. In this article, we propose a method to automatically build analysis tools via cross-lingual projection of linguistic annotations using parallel corpora.
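The projection idea behind these two papers can be pictured in a few lines of Python: tags predicted on the resource-rich source side of a parallel corpus are copied to the target side through word-alignment links. The sentence pair, tags, and alignment below are toy assumptions (real links would come from an aligner such as GIZA++ or fast_align); the cited work then trains a recurrent-network tagger on such projected annotations.

```python
def project_tags(src_tags, alignment, tgt_len):
    """Copy each source token's tag to its aligned target tokens.

    alignment: list of (src_index, tgt_index) links.
    Unaligned target tokens keep the placeholder tag 'X'.
    """
    tgt_tags = ["X"] * tgt_len
    for s, t in alignment:
        tgt_tags[t] = src_tags[s]
    return tgt_tags

# English (tagged, resource-rich) -> French (untagged) toy example.
src_tokens = ["the", "cat", "sleeps"]
src_tags = ["DET", "NOUN", "VERB"]
tgt_tokens = ["le", "chat", "dort"]
alignment = [(0, 0), (1, 1), (2, 2)]  # toy one-to-one alignment links

print(list(zip(tgt_tokens, project_tags(src_tags, alignment, len(tgt_tokens)))))
# The projected tags then serve as (noisy) supervision for training a
# target-language tagger.
```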

Identifying bilingual Multi-Word Expressions for Statistical Machine Translation

no code implementations · LREC 2012 · Dhouha Bouamor, Nasredine Semmar, Pierre Zweigenbaum

Multi-Word Expressions (MWEs) represent a key issue for numerous applications in Natural Language Processing (NLP), especially for Machine Translation (MT).

Machine Translation · Translation
