1 code implementation • Findings (NAACL) 2022 • Ehsan Hosseini-Asl, Wenhao Liu, Caiming Xiong
Our evaluation results on single-task polarity prediction show that our approach outperforms the previous state of the art (based on BERT) in average performance by a large margin in both few-shot and full-shot settings.
Aspect-Based Sentiment Analysis (ABSA) +4
1 code implementation • EACL 2021 • Tianxing He, Bryan McCann, Caiming Xiong, Ehsan Hosseini-Asl
In this work, we explore joint energy-based model (EBM) training during the finetuning of pretrained text encoders (e.g., RoBERTa) for natural language understanding (NLU) tasks.
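The core idea behind joint EBM training of a classifier can be sketched as follows. This is a minimal NumPy illustration, assuming a JEM-style formulation in which the LogSumExp of the classifier logits defines a negative energy; the function names, the `ebm_weight` hyperparameter, and the simple contrastive regularizer are placeholders, not the paper's exact objective.

```python
import numpy as np

def logsumexp(x):
    # Numerically stable log-sum-exp.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def energy(logits):
    # JEM-style energy of an input under the classifier:
    # E(x) = -log sum_y exp(f(x)[y]); low energy = high model density.
    return -logsumexp(logits)

def joint_loss(logits, label, ebm_weight=0.1, neg_logits=None):
    # Standard cross-entropy term for the NLU task...
    log_probs = logits - logsumexp(logits)
    ce = -log_probs[label]
    # ...plus an illustrative energy term that pushes real data to lower
    # energy than sampled "negative" data.
    reg = 0.0
    if neg_logits is not None:
        reg = energy(logits) - energy(neg_logits)
    return ce + ebm_weight * reg
```

In this view the encoder is trained on both objectives at once, so the same logits serve the discriminative task and the (unnormalized) density model.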
1 code implementation • NeurIPS 2020 • Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, Richard Socher
Task-oriented dialogue is often decomposed into three tasks: understanding user input, deciding actions, and generating a response.
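One way to recast these three sub-tasks as a single sequence-generation problem for a causal language model is to concatenate them into one training sequence. The sketch below assumes that approach; the delimiter tokens and helper name are illustrative placeholders, not the paper's exact special tokens.

```python
def build_sequence(user_turns, belief_state, actions, response):
    """Concatenate dialogue context, belief state, system actions, and
    response into one sequence a causal LM can be trained on.

    Delimiters like <context> are hypothetical stand-ins for whatever
    special tokens the tokenizer actually defines.
    """
    context = " ".join(user_turns)
    belief = ", ".join(f"{d} {s} {v}" for d, s, v in belief_state)
    acts = ", ".join(actions)
    return (f"<context> {context} <belief> {belief} "
            f"<action> {acts} <response> {response}")
```

At inference time, the model would generate the belief state, actions, and response left to right from the context alone.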
Ranked #2 on Response Generation on MMConv
2 code implementations • ACL 2019 • Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, Pascale Fung
Over-dependence on domain ontology and lack of knowledge sharing across domains are two practical and yet less studied problems of dialogue state tracking.
Ranked #15 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.0
Dialogue State Tracking Multi-domain Dialogue State Tracking +2
1 code implementation • 3 Dec 2018 • Elnaz Nouri, Ehsan Hosseini-Asl
The latency of current neural dialogue state tracking models prevents them from being deployed efficiently in production systems, despite their highly accurate performance.
Ranked #7 on Dialogue State Tracking on Wizard-of-Oz
Dialogue State Tracking Multi-domain Dialogue State Tracking
2 code implementations • ICLR 2019 • Ehsan Hosseini-Asl, Yingbo Zhou, Caiming Xiong, Richard Socher
In the low-resource supervised setting, the results show that our approach improves absolute performance by 14% and 4% when adapting SVHN to MNIST and vice versa, respectively, outperforming unsupervised domain adaptation methods that require large amounts of unlabeled target-domain data.
no code implementations • 27 Mar 2018 • Ehsan Hosseini-Asl, Yingbo Zhou, Caiming Xiong, Richard Socher
Domain adaptation plays an important role for speech recognition models, in particular, for domains that have low resources.
1 code implementation • 2 Jul 2016 • Ehsan Hosseini-Asl, Robert Keynton, Ayman El-Baz
The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans.
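The pretrain-then-transfer step described here can be sketched schematically: train a 3D convolutional autoencoder, then initialize the classifier from its encoder weights only. The NumPy dictionary below is a stand-in for real model parameters; all layer names and shapes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in weights for a 3D convolutional autoencoder; encoder filters
# have shape (out_channels, in_channels, depth, height, width).
autoencoder = {
    "enc_conv1": rng.standard_normal((8, 1, 3, 3, 3)),
    "enc_conv2": rng.standard_normal((16, 8, 3, 3, 3)),
    "dec_conv1": rng.standard_normal((8, 16, 3, 3, 3)),  # decoder: discarded
}

def init_classifier_from_autoencoder(ae_weights):
    # Copy only the pre-trained encoder layers into the 3D-CNN and add a
    # freshly initialized classification head (name is illustrative).
    clf = {k: v.copy() for k, v in ae_weights.items() if k.startswith("enc_")}
    clf["fc_head"] = rng.standard_normal((16, 2))
    return clf

classifier = init_classifier_from_autoencoder(autoencoder)
```

The encoder thus starts from features that already capture anatomical shape variation, and only the task head is trained from scratch before fine-tuning.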
1 code implementation • 2 Jul 2016 • Ehsan Hosseini-Asl, Georgy Gimel'farb, Ayman El-Baz
The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans.
no code implementations • 17 Apr 2016 • Ehsan Hosseini-Asl
This paper aims to improve feature learning in Convolutional Networks (ConvNets) by capturing the structure of objects.
no code implementations • 12 Jan 2016 • Ehsan Hosseini-Asl, Jacek M. Zurada, Olfa Nasraoui
We demonstrate a new deep learning autoencoder network, trained by a nonnegativity constraint algorithm (NCAE), that learns features which show part-based representation of data.
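A nonnegativity constraint of this kind is commonly enforced with a quadratic penalty on the negative weights, added to the reconstruction loss. The sketch below assumes that formulation; the penalty coefficient `alpha` and the single hand-rolled gradient step are illustrative, not the paper's exact training procedure.

```python
import numpy as np

def nonneg_penalty(W, alpha=0.003):
    # Quadratic penalty on negative weights only:
    # (alpha / 2) * sum over w < 0 of w^2, pushing weights toward >= 0.
    neg = np.minimum(W, 0.0)
    return 0.5 * alpha * np.sum(neg ** 2)

def nonneg_penalty_grad(W, alpha=0.003):
    # Gradient is alpha * w for negative entries, zero elsewhere.
    return alpha * np.minimum(W, 0.0)

# One illustrative gradient step: negative weights shrink in magnitude,
# nonnegative weights are untouched.
W = np.array([[-1.0, 2.0], [0.5, -0.25]])
W_new = W - 10.0 * nonneg_penalty_grad(W)
```

Driving the weights toward nonnegative values is what yields the part-based (additive) representation the abstract describes.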
no code implementations • 13 Nov 2015 • Ehsan Hosseini-Asl, Angshuman Guha
In this paper, we propose a new text recognition model based on measuring the visual similarity of text and predicting the content of unlabeled texts.