1 code implementation • Findings (EMNLP) 2021 • Zhenwen Liang, Xiangliang Zhang
Many existing works have demonstrated that language is a helpful guide for image understanding by neural networks.
1 code implementation • LREC 2022 • Reem Alghamdi, Zhenwen Liang, Xiangliang Zhang
In addition, a transfer learning model is built to let the high-resource Chinese MWP solver promote the performance of the low-resource Arabic MWP solver.
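The transfer setup described above can be sketched as a standard pretrain-then-finetune loop: a solver first trained on the large Chinese MWP corpus initializes training on the small Arabic corpus. This is a minimal illustrative sketch, assuming a generic `train_epoch` routine; the function and epoch counts are placeholders, not the paper's actual configuration.

```python
def transfer_train(model, chinese_mwps, arabic_mwps, train_epoch):
    # Stage 1: learn general problem-solving skills from the
    # high-resource Chinese corpus.
    for _ in range(10):
        train_epoch(model, chinese_mwps)
    # Stage 2: continue training the same weights on the low-resource
    # Arabic corpus, so the learned solving skill carries over.
    for _ in range(3):
        train_epoch(model, arabic_mwps)
    return model
```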
no code implementations • 20 Feb 2024 • Yujun Zhou, Yufei Han, Haomin Zhuang, Taicheng Guo, Kehan Guo, Zhenwen Liang, Hongyan Bao, Xiangliang Zhang
Large Language Models (LLMs) demonstrate remarkable capabilities across diverse applications.
1 code implementation • 12 Feb 2024 • Qingkai Zeng, Yuyang Bai, Zhaoxuan Tan, Shangbin Feng, Zhenwen Liang, Zhihan Zhang, Meng Jiang
Automatic taxonomy induction is crucial for web search, recommendation systems, and question answering.
no code implementations • 6 Feb 2024 • Zhenwen Liang, Kehan Guo, Gang Liu, Taicheng Guo, Yujun Zhou, Tianyu Yang, Jiajun Jiao, Renjie Pi, Jipeng Zhang, Xiangliang Zhang
The paper introduces SceMQA, a novel benchmark for scientific multimodal question answering at the college entrance level.
no code implementations • 31 Jan 2024 • Xiaodong Wu, Yufei Han, Hayssam Dahrouj, Jianbing Ni, Zhenwen Liang, Xiangliang Zhang
Machine teaching often involves the creation of an optimal (typically minimal) dataset to help a model (referred to as the 'student') achieve specific goals given by a teacher.
no code implementations • 16 Jul 2023 • Zhenwen Liang, Dian Yu, Xiaoman Pan, Wenlin Yao, Qingkai Zeng, Xiangliang Zhang, Dong Yu
Our approach uniquely considers the various annotation formats as different "views" and leverages them in training the model.
1 code implementation • NeurIPS 2023 • Taicheng Guo, Kehan Guo, Bozhao Nan, Zhenwen Liang, Zhichun Guo, Nitesh V. Chawla, Olaf Wiest, Xiangliang Zhang
In this paper, rather than pursuing state-of-the-art performance, we aim to evaluate the capabilities of LLMs in a wide range of tasks across the chemistry domain.
no code implementations • 23 May 2023 • Wenhao Yu, Zhihan Zhang, Zhenwen Liang, Meng Jiang, Ashish Sabharwal
ReFeed first generates initial outputs, then utilizes a retrieval model to acquire relevant information from large document collections, and finally incorporates the retrieved information into the in-context demonstration for output refinement, thereby addressing the limitations of LLMs in a more efficient and cost-effective manner.
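The generate–retrieve–refine loop described above can be sketched as follows. All function names (`llm.generate`, `retriever.search`) are illustrative placeholders standing in for the paper's components, not an actual API.

```python
def refeed(question, llm, retriever):
    # Step 1: the LLM produces an initial answer with no external knowledge.
    initial = llm.generate(f"Question: {question}\nAnswer:")

    # Step 2: a retrieval model fetches relevant passages from a large
    # document collection, conditioning on both the question and the
    # initial output.
    passages = retriever.search(query=f"{question} {initial}", top_k=3)

    # Step 3: the retrieved evidence is folded into an in-context prompt
    # so the LLM can refine its earlier answer, avoiding a costly
    # retrain or repeated retrieval rounds.
    context = "\n".join(passages)
    prompt = (
        f"Evidence:\n{context}\n\n"
        f"Question: {question}\n"
        f"Initial answer: {initial}\n"
        f"Refined answer:"
    )
    return llm.generate(prompt)
```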
no code implementations • 22 May 2023 • Zhenwen Liang, Wenhao Yu, Tanmay Rajpurohit, Peter Clark, Xiangliang Zhang, Ashwin Kalyan
In this paper, we present a novel approach for distilling math word problem solving capabilities from large language models (LLMs) into smaller, more efficient student models.
1 code implementation • 1 Dec 2022 • Zhenwen Liang, Jipeng Zhang, Xiangliang Zhang
In this paper, we propose to build a novel MWP solver by leveraging analogical MWPs, which advance the solver's generalization ability across different kinds of MWPs.
1 code implementation • 1 Dec 2022 • Zhenwen Liang, Jipeng Zhang, Lei Wang, Yan Wang, Jie Shao, Xiangliang Zhang
In this paper, we design a new training framework for an MWP solver by introducing a solution buffer and a solution discriminator.
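At a high level, such a framework stores the distinct correct solutions found for each problem in a buffer, and uses a discriminator to score buffered solutions as training targets. The sketch below is an assumption-laden illustration of that idea; the class and function names, and the scoring rule, are placeholders rather than the paper's implementation.

```python
class SolutionBuffer:
    """Keeps the set of distinct correct solutions seen per problem."""
    def __init__(self):
        self.buffer = {}

    def add(self, problem_id, solution):
        self.buffer.setdefault(problem_id, set()).add(solution)

    def get(self, problem_id):
        return self.buffer.get(problem_id, set())


def training_step(problem_id, candidates, is_correct, buffer, discriminator):
    # Correct candidate solutions enter the buffer, so later epochs can
    # train against multiple valid solutions instead of a single gold one.
    for sol in candidates:
        if is_correct(sol):
            buffer.add(problem_id, sol)
    # The discriminator ranks buffered solutions by plausibility for this
    # problem; higher-scoring ones are preferred as training targets.
    return sorted(buffer.get(problem_id), key=discriminator, reverse=True)
```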
no code implementations • 30 Dec 2021 • Yingquan Li, Zhenwen Liang, Ibrahima N'Doye, Xiangliang Zhang, Mohamed-Slim Alouini, Taous-Meriem Laleg-Kirati
Light-Emitting Diodes (LEDs) based underwater optical wireless communications (UOWCs), a technology with low latency and high data rates, have attracted significant interest for underwater robots.
1 code implementation • Findings (NAACL) 2022 • Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin, Yunshi Lan, Jie Shao, Xiangliang Zhang
Math word problem (MWP) solving faces a dilemma in number representation learning.
Ranked #5 on Math Word Problem Solving on MathQA