no code implementations • 12 Feb 2024 • Hongyun Zhou, Xiangyu Lu, Wang Xu, Conghui Zhu, Tiejun Zhao
Low-Rank Adaptation (LoRA) introduces auxiliary parameters for each layer to fine-tune the pre-trained model under limited computing resources.
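The LoRA idea summarized above can be sketched in a few lines: the pretrained weight is frozen and only a low-rank additive update is trained. This is a minimal illustrative sketch (class and parameter names are my own, not from the paper), using the standard zero-initialization of one factor so the adapted layer starts identical to the pretrained one.

```python
import numpy as np

class LoRALinear:
    """Minimal LoRA-style linear layer (illustrative sketch, not the paper's code).

    The pretrained weight W (d_out x d_in) is frozen; only the low-rank
    factors B (d_out x r) and A (r x d_in) would be trained, adding just
    r * (d_out + d_in) parameters per layer.
    """

    def __init__(self, W, rank=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # frozen pretrained weight
        d_out, d_in = W.shape
        self.A = rng.normal(0.0, 0.01, (rank, d_in)) # trainable, small random init
        self.B = np.zeros((d_out, rank))             # trainable, zero init: update starts at 0
        self.scale = alpha / rank                    # common LoRA scaling convention

    def forward(self, x):
        # y = W x + (alpha / r) * B A x ; gradients flow only to A and B
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

W = np.eye(3)
layer = LoRALinear(W, rank=2)
x = np.array([1.0, 2.0, 3.0])
# Because B starts at zero, the adapted layer initially matches the frozen one.
print(np.allclose(layer.forward(x), W @ x))  # True
```

At inference time the update B @ A can be merged into W, so a LoRA-adapted layer incurs no extra latency.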
1 code implementation • Findings (NAACL) 2022 • Zhen Li, Bing Xu, Conghui Zhu, Tiejun Zhao
Compared with unimodal data, multimodal data provide more features that help a model analyze sentiment.
1 code implementation • 20 Aug 2021 • Changzhen Ji, Yating Zhang, Xiaozhong Liu, Adam Jatowt, Changlong Sun, Conghui Zhu, Tiejun Zhao
Nevertheless, few works have utilized knowledge extracted from similar conversations for utterance generation.
no code implementations • 1 Jan 2021 • Guanlin Li, Lemao Liu, Taro Watanabe, Conghui Zhu, Tiejun Zhao
Unsupervised Neural Machine Translation or UNMT has received great attention in recent years.
1 code implementation • EMNLP 2020 • Changzhen Ji, Xin Zhou, Yating Zhang, Xiaozhong Liu, Changlong Sun, Conghui Zhu, Tiejun Zhao
In the past few years, audiences from different fields have witnessed the achievements of sequence-to-sequence models (e.g., LSTM with attention, Pointer-Generator Networks, and the Transformer) in enhancing dialogue content generation.
no code implementations • 22 Oct 2020 • Changzhen Ji, Xin Zhou, Conghui Zhu, Tiejun Zhao
The multi-role judicial debate among the plaintiff, the defendant, and the judge is an important part of a judicial trial.
no code implementations • 15 Oct 2020 • Guanhua Zhang, Bing Bai, Jian Liang, Kun Bai, Conghui Zhu, Tiejun Zhao
Recent studies show that crowd-sourced Natural Language Inference (NLI) datasets may suffer from significant biases like annotation artifacts.
1 code implementation • ACL 2020 • Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, Tiejun Zhao
In this paper, we formalize the unintended biases in text classification datasets as a kind of selection bias from the non-discrimination distribution to the discrimination distribution.
no code implementations • 5 Apr 2020 • Conghui Zhu, Guanlin Li, Lemao Liu, Tiejun Zhao, Shuming Shi
Despite the great success of NMT, a severe challenge remains: the internal dynamics of its training process are hard to interpret.
no code implementations • 5 Apr 2020 • Guanlin Li, Lemao Liu, Conghui Zhu, Tiejun Zhao, Shuming Shi
Generalization to unseen instances is the eternal pursuit of all data-driven models.
no code implementations • 28 Feb 2020 • Chaoqun Duan, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Conghui Zhu, Tiejun Zhao
Existing neural machine translation (NMT) systems use sequence-to-sequence neural networks to generate the target translation word by word, and then push the word generated at each time step to be as consistent as possible with its counterpart in the reference.
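The word-by-word training objective described above is, in essence, a per-time-step cross-entropy between the model's predictions and the reference tokens. A minimal sketch of that loss (function name and shapes are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def word_level_nll(logits, reference):
    """Average negative log-likelihood of the reference tokens.

    logits:    (T, V) unnormalized scores, one row per decoding time step
    reference: (T,)   reference token ids
    Each time step's prediction is pushed toward its counterpart in the
    reference, which is the standard word-level NMT training objective.
    """
    probs = softmax(logits)
    return -np.mean(np.log(probs[np.arange(len(reference)), reference]))

# Toy example: the model already assigns high scores to the reference tokens,
# so the word-level loss is close to zero.
logits = np.array([[5.0, 0.0, 0.0],
                   [0.0, 5.0, 0.0]])
ref = np.array([0, 1])
print(word_level_nll(logits, ref) < 0.05)  # True
```

A known limitation of this objective, which motivates sequence-level alternatives, is that it scores each word in isolation rather than the translation as a whole.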
no code implementations • 7 Feb 2020 • Chaoqun Duan, Lei Cui, Shuming Ma, Furu Wei, Conghui Zhu, Tiejun Zhao
In this work, we aim to improve the relevance between live comments and videos by modeling the cross-modal interactions among different modalities.
no code implementations • IJCNLP 2019 • Guanlin Li, Lemao Liu, Guoping Huang, Conghui Zhu, Tiejun Zhao
Many Data Augmentation (DA) methods have been proposed for neural machine translation.
no code implementations • 10 Sep 2019 • Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Conghui Zhu, Tiejun Zhao
This irregularity causes evaluation results to be overestimated and harms models' generalization ability.
no code implementations • NAACL 2019 • Guanlin Li, Lemao Liu, Xintong Li, Conghui Zhu, Tiejun Zhao, Shuming Shi
Multilayer architectures are currently the gold standard for large-scale neural machine translation.
2 code implementations • ACL 2019 • Guanhua Zhang, Bing Bai, Jian Liang, Kun Bai, Shiyu Chang, Mo Yu, Conghui Zhu, Tiejun Zhao
Natural Language Sentence Matching (NLSM) has gained substantial attention from both academia and industry, and rich public datasets have contributed greatly to this progress.
no code implementations • 1 Dec 2015 • Yiming Cui, Conghui Zhu, Xiaoning Zhu, Tiejun Zhao
A pivot language is employed as a way to alleviate the data-sparseness problem in machine translation, especially when data for a particular language pair do not exist.