Search Results for author: Ahmed El-Roby

Found 7 papers, 0 papers with code

Co-Regularized Adversarial Learning for Multi-Domain Text Classification

no code implementations 30 Jan 2022 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Multi-domain text classification (MDTC) aims to leverage all available resources from multiple domains to learn a predictive model that can generalize well on these domains.

Text Classification

Maximum Batch Frobenius Norm for Multi-Domain Text Classification

no code implementations 29 Jan 2022 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Multi-domain text classification (MDTC) has obtained remarkable achievements due to the advent of deep learning.

Text Classification

Towards Category and Domain Alignment: Category-Invariant Feature Enhancement for Adversarial Domain Adaptation

no code implementations 14 Aug 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Adversarial domain adaptation has made impressive advances in transferring knowledge from the source domain to the target domain by aligning feature distributions of both domains.

Domain Adaptation
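
The entry above describes adversarial domain adaptation only in general terms (aligning feature distributions across domains). As a point of reference, here is a minimal PyTorch sketch of the standard gradient-reversal setup for such alignment; the module sizes, single discriminator, and loss weighting are illustrative assumptions, not the category-invariant enhancement method proposed in this paper.

```python
# Minimal sketch of adversarial feature alignment (DANN-style gradient reversal).
# All names, dimensions, and the loss weighting are assumptions for illustration.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

feature_extractor = nn.Sequential(nn.Linear(300, 128), nn.ReLU())
label_classifier = nn.Linear(128, 2)       # task labels (source domain only)
domain_discriminator = nn.Linear(128, 2)   # source vs. target

def training_step(src_x, src_y, tgt_x, lambd=1.0):
    # Task loss on labeled source data.
    src_feat = feature_extractor(src_x)
    task_loss = nn.functional.cross_entropy(label_classifier(src_feat), src_y)

    # Domain loss on both domains; the reversed gradient pushes the extractor
    # toward features the discriminator cannot tell apart.
    feats = torch.cat([src_feat, feature_extractor(tgt_x)], dim=0)
    domains = torch.cat([torch.zeros(len(src_x)), torch.ones(len(tgt_x))]).long()
    dom_logits = domain_discriminator(GradReverse.apply(feats, lambd))
    domain_loss = nn.functional.cross_entropy(dom_logits, domains)
    return task_loss + domain_loss
```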

Conditional Adversarial Networks for Multi-Domain Text Classification

no code implementations EACL (AdaptNLP) 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

We provide theoretical analysis for the CAN framework, showing that CAN's objective is equivalent to minimizing the total divergence among multiple joint distributions of shared features and label predictions.

General Classification, Text Classification, +1
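
For orientation on the joint distributions mentioned above, below is a minimal sketch of a domain discriminator conditioned on shared features together with label predictions. The concatenation-based conditioning, layer sizes, and names are illustrative assumptions, not the CAN framework itself.

```python
# Minimal sketch of conditioning a domain discriminator on both shared features
# and predicted class probabilities. Sizes and names are assumptions.
import torch
import torch.nn as nn

feature_dim, num_classes, num_domains = 128, 2, 4

classifier = nn.Linear(feature_dim, num_classes)
# The discriminator sees [shared features ; predicted class probabilities]
# and tries to tell which domain the example came from.
discriminator = nn.Sequential(
    nn.Linear(feature_dim + num_classes, 64),
    nn.ReLU(),
    nn.Linear(64, num_domains),
)

def conditional_domain_logits(shared_features):
    probs = torch.softmax(classifier(shared_features), dim=-1)
    joint = torch.cat([shared_features, probs], dim=-1)
    return discriminator(joint)

# Example usage: a batch of 8 shared feature vectors.
logits = conditional_domain_logits(torch.randn(8, feature_dim))
print(logits.shape)  # torch.Size([8, 4])
```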

Mixup Regularized Adversarial Networks for Multi-Domain Text Classification

no code implementations 31 Jan 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Using the shared-private paradigm and adversarial training has significantly improved the performance of multi-domain text classification (MDTC) models.

General Classification, Text Classification, +1
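
As background for the shared-private paradigm mentioned above, the following is a minimal sketch of one shared feature extractor plus per-domain private extractors feeding a single classifier. Module names, dimensions, and the simple concatenation are assumptions for illustration; the mixup regularization that gives this paper its name is not shown here.

```python
# Minimal sketch of the shared-private paradigm for multi-domain text
# classification. All architectural choices below are illustrative assumptions.
import torch
import torch.nn as nn

class SharedPrivateMDTC(nn.Module):
    def __init__(self, input_dim=300, hidden_dim=64, num_domains=4, num_classes=2):
        super().__init__()
        # One shared extractor across all domains (domain-invariant features).
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One private extractor per domain (domain-specific features).
        self.private = nn.ModuleList(
            nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            for _ in range(num_domains)
        )
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, x, domain_id):
        feats = torch.cat([self.shared(x), self.private[domain_id](x)], dim=-1)
        return self.classifier(feats)

model = SharedPrivateMDTC()
print(model(torch.randn(8, 300), domain_id=1).shape)  # torch.Size([8, 2])
```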

Dual Adversarial Training for Unsupervised Domain Adaptation

no code implementations 1 Jan 2021 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Domain adaptation aims to leverage labeled data in the source domain to learn a good predictive model for a target domain whose labels are scarce or unavailable.

Unsupervised Domain Adaptation

Dual Mixup Regularized Learning for Adversarial Domain Adaptation

no code implementations ECCV 2020 Yuan Wu, Diana Inkpen, Ahmed El-Roby

Second, samples from the source and target domains alone are not sufficient for extracting domain-invariant features in the latent space.

Unsupervised Domain Adaptation
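
The snippet above motivates enriching training beyond the raw source and target samples. Below is a minimal sketch of generic mixup applied across a source batch and a target batch; the Beta(alpha, alpha) mixing coefficient and input-level interpolation are standard mixup conventions used as assumptions here, not the paper's dual mixup procedure.

```python
# Minimal sketch of mixup across source and target batches: training on convex
# combinations of samples to populate the latent space between the two domains.
# The alpha value and input-level mixing are illustrative assumptions.
import torch

def domain_mixup(src_x, tgt_x, alpha=0.2):
    """Return interpolated samples and the mixing coefficient lambda."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Assumes both batches have the same shape; mixes each source sample with
    # the corresponding target sample.
    mixed = lam * src_x + (1.0 - lam) * tgt_x
    return mixed, lam

mixed, lam = domain_mixup(torch.randn(8, 300), torch.randn(8, 300))
print(mixed.shape, round(lam, 3))
```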
