no code implementations • ACL 2022 • Sicheng Yu, Qianru Sun, Hao Zhang, Jing Jiang
Translate-train is a general training approach for multilingual tasks.
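Since translate-train is only named here, a minimal sketch of the general idea may help. This is an illustration, not this paper's exact pipeline; the MT checkpoint and the toy dataset are assumptions. English training examples are machine-translated into the target language with labels kept, and the translated set is then used for fine-tuning.

```python
# Minimal translate-train sketch (illustrative; not the paper's exact method).
from transformers import pipeline

# MT model that translates English training data into German (illustrative choice).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")

en_train = [
    {"text": "The movie was wonderful.", "label": 1},
    {"text": "I would not recommend this restaurant.", "label": 0},
]

# Translate-train: only the text is translated; the labels are kept unchanged.
de_train = [
    {"text": translator(ex["text"])[0]["translation_text"], "label": ex["label"]}
    for ex in en_train
]

# de_train can now be used to fine-tune a target-language or multilingual classifier.
print(de_train)
```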
no code implementations • 1 Apr 2024 • Xiongwei Wu, Sicheng Yu, Ee-Peng Lim, Chong-Wah Ngo
The pre-training phase equips FoodLearner to align visual information with food-related textual representations, while the second phase adapts both FoodLearner and the Image-Informed Text Encoder to the segmentation task.
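The snippet does not specify FoodLearner's alignment objective; a generic image-text contrastive loss of the kind commonly used for such visual-textual alignment might look like the following PyTorch sketch (an assumption, not the paper's actual loss):

```python
# Generic image-text contrastive alignment sketch (an assumption; the entry above
# does not describe FoodLearner's objective). Paired image/text embeddings are
# pulled together; mismatched pairs within the batch are pushed apart.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """InfoNCE-style loss over a batch of paired image/text embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0))        # i-th image matches i-th text
    # Symmetric cross-entropy over image-to-text and text-to-image directions.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

loss = contrastive_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```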
no code implementations • 3 Feb 2024 • Cunxiao Du, Jing Jiang, Xu Yuanchen, Jiawei Wu, Sicheng Yu, Yongqi Li, Shenggui Li, Kai Xu, Liqiang Nie, Zhaopeng Tu, Yang You
Speculative decoding is a relatively new decoding framework that leverages small and efficient draft models to reduce the latency of LLMs.
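As a rough illustration of the framework (greedy verification only; real speculative decoding verifies the draft in one batched target pass and uses a rejection-sampling acceptance rule, and this paper's method goes further), a small draft model proposes a few tokens and the large target model keeps the longest agreeing prefix:

```python
# Toy speculative-decoding sketch (illustrative only).
def speculative_step(draft_next, target_next, prefix, k=4):
    """draft_next/target_next: functions mapping a token prefix to the next token."""
    # 1. Draft model proposes k tokens autoregressively (cheap).
    proposal = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)
    # 2. Target model checks each proposed position (in practice, one batched pass).
    accepted = []
    ctx = list(prefix)
    for t in proposal:
        if target_next(ctx) == t:            # target agrees with the draft token
            accepted.append(t)
            ctx.append(t)
        else:                                # first disagreement: take target's token, stop
            accepted.append(target_next(ctx))
            break
    return accepted                          # one target pass may yield several tokens

# Toy usage: draft always predicts the next letter; target disagrees after "c".
draft_next = lambda ctx: chr(ord(ctx[-1]) + 1)
target_next = lambda ctx: "x" if ctx[-1] == "c" else chr(ord(ctx[-1]) + 1)
print(speculative_step(draft_next, target_next, ["a"]))  # ['b', 'c', 'x']
```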
1 code implementation • Findings (EMNLP) 2021 • Qiyuan Zhang, Lei Wang, Sicheng Yu, Shuohang Wang, Yang Wang, Jing Jiang, Ee-Peng Lim
While diverse question answering (QA) datasets have been proposed and contributed significantly to the development of deep learning models for QA tasks, the existing datasets fall short in two aspects.
1 code implementation • ACL 2021 • Sicheng Yu, Hao Zhang, Yulei Niu, Qianru Sun, Jing Jiang
Pre-trained multilingual language models, e.g., multilingual-BERT, are widely used in cross-lingual tasks, yielding state-of-the-art performance.
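For readers unfamiliar with the setup, the standard usage is a single multilingual encoder shared across languages, typically fine-tuned on English and evaluated zero-shot on other languages. The snippet below is a minimal illustration using the public mBERT checkpoint, not this paper's method:

```python
# Minimal cross-lingual setup: one multilingual encoder for all languages.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

for text in ["The weather is nice today.", "Das Wetter ist heute schön."]:
    inputs = tokenizer(text, return_tensors="pt")
    outputs = model(**inputs)
    print(text, outputs.last_hidden_state.shape)  # shared encoder for both languages
```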
1 code implementation • 12 Oct 2020 • Sicheng Yu, Yulei Niu, Shuohang Wang, Jing Jiang, Qianru Sun
We then apply two novel CVC inference methods to trained models to capture the effect of comprehensive reasoning as the final prediction.
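The snippet does not detail the two CVC inference methods, so the following is a loudly hypothetical sketch of the general counterfactual-inference pattern used in this line of work: the effect of interest is estimated by subtracting a counterfactual (shortcut-only) prediction from the factual one.

```python
# Hypothetical counterfactual-inference pattern (an assumption, NOT the paper's
# exact CVC methods): subtract a shortcut-only prediction from the factual one.
import torch

def counterfactual_inference(model, full_input, shortcut_only_input):
    with torch.no_grad():
        factual = model(full_input)                   # logits with the full input
        counterfactual = model(shortcut_only_input)   # logits with context masked out
    return factual - counterfactual                   # debiased scores as final prediction
```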
no code implementations • 6 Oct 2020 • Sicheng Yu, Hao Zhang, Wei Jing, Jing Jiang
In addition to effectively reducing human effort, our approach outperforms models that use the same backbone and more training data, as we show through extensive experiments on OpenBookQA; our parameter analysis also demonstrates the interpretability of our approach.