Search Results for author: Sara Meftah

Found 7 papers, 0 papers with code

On the Hidden Negative Transfer in Sequential Transfer Learning for Domain Adaptation from News to Tweets

no code implementations • EACL (AdaptNLP) 2021 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Transfer Learning has been shown to be a powerful tool for Natural Language Processing (NLP), outperforming the standard supervised learning paradigm because it benefits from pre-learned knowledge.

Chunking • Domain Adaptation • +5

Neural Supervised Domain Adaptation by Augmenting Pre-trained Models with Random Units

no code implementations • 9 Jun 2021 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

In the standard fine-tuning scheme of transfer learning, a model is first pre-trained on a source domain and then fine-tuned on a target domain; the source and target domains are therefore trained with the same architecture.

Chunking • Domain Adaptation • +5
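
The abstract above describes the standard sequential fine-tuning scheme: pre-train on the source domain, then fine-tune the same architecture on the target domain. Below is a minimal sketch of that scheme, assuming PyTorch, with toy random data standing in for real news and tweet corpora; the model, sizes, and hyperparameters are illustrative, not the paper's code.

import torch
import torch.nn as nn

# Toy sequence tagger: embeddings -> BiLSTM -> linear tag head.
class Tagger(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64, n_tags=17):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_tags)

    def forward(self, x):
        out, _ = self.lstm(self.emb(x))
        return self.head(out)

def toy_batches(n_batches, vocab=5000, n_tags=17):
    # Random stand-ins for (token ids, tag ids) batches.
    return [(torch.randint(0, vocab, (4, 12)),
             torch.randint(0, n_tags, (4, 12)))
            for _ in range(n_batches)]

def train(model, batches, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in batches:
        logits = model(x)
        loss = loss_fn(logits.view(-1, logits.size(-1)), y.view(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

model = Tagger()
train(model, toy_batches(20), lr=1e-3)  # 1) pre-train on the source domain (e.g. news)
train(model, toy_batches(5), lr=1e-4)   # 2) fine-tune the same architecture on the target domain (e.g. tweets)

Fine-tuning typically uses a smaller learning rate than pre-training, as above, so the pre-learned weights are adjusted rather than overwritten.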

Multi-Task Supervised Pretraining for Neural Domain Adaptation

no code implementations • WS 2020 • Sara Meftah, Nasredine Semmar, Mohamed-Ayoub Tahiri, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Two prevalent transfer learning approaches are used in recent works to improve neural network performance on domains with small amounts of annotated data: multi-task learning, which trains the task of interest jointly with related auxiliary tasks to exploit their underlying similarities, and mono-task fine-tuning, where the weights of the model are initialized with the pretrained weights of a large-scale labeled source domain and then fine-tuned on labeled data of the target domain (the domain of interest).

Domain Adaptation • Multi-Task Learning
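
The multi-task half of the abstract above can be sketched as one shared encoder with a head per auxiliary task, trained by alternating batches across tasks. The following is a minimal PyTorch illustration under that assumption; the two-task setup, model sizes, and random data are placeholders, not the paper's code.

import torch
import torch.nn as nn

# Shared encoder with one linear head per auxiliary task (e.g. POS tagging, chunking).
class MultiTaskTagger(nn.Module):
    def __init__(self, vocab=5000, emb=64, hidden=64, tags_per_task=(17, 9)):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.heads = nn.ModuleList(nn.Linear(2 * hidden, n) for n in tags_per_task)

    def forward(self, x, task):
        out, _ = self.encoder(self.emb(x))
        return self.heads[task](out)

model = MultiTaskTagger()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Alternate batches from the related tasks so the shared encoder
# picks up representations useful to all of them.
for step in range(20):
    task = step % len(model.heads)         # round-robin task schedule
    n_tags = model.heads[task].out_features
    x = torch.randint(0, 5000, (4, 12))    # toy token ids
    y = torch.randint(0, n_tags, (4, 12))  # toy tag ids
    logits = model(x, task)
    loss = loss_fn(logits.view(-1, n_tags), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The pretrained encoder weights can then initialize a target-domain
# model for fine-tuning, as in the mono-task scheme.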

Exploration de l'apprentissage par transfert pour l'analyse de textes des réseaux sociaux (Exploring neural transfer learning for social media text analysis)

no code implementations • JEPTALNRECITAL 2019 • Sara Meftah, Nasredine Semmar, Youssef Tamaazousti, Hassane Essafi, Fatiha Sadat

Transfer learning refers to the ability of a neural model trained on one task to generalize sufficiently well to produce relevant results on another task that is close but different.

Transfer Learning

Using Neural Transfer Learning for Morpho-syntactic Tagging of South-Slavic Languages Tweets

no code implementations • COLING 2018 • Sara Meftah, Nasredine Semmar, Fatiha Sadat, Stephan Raaijmakers

In this paper, we describe a morpho-syntactic tagger for tweets, an important component of the CEA List DeepLIMA tool, a multilingual text analysis platform based on deep learning.

Part-Of-Speech Tagging • Transfer Learning
