no code implementations • 5 Dec 2023 • Xuan Long Do, Yiran Zhao, Hannah Brown, Yuxi Xie, James Xu Zhao, Nancy F. Chen, Kenji Kawaguchi, Michael Qizhe Xie, Junxian He
We propose a new method, Adversarial In-Context Learning (adv-ICL), to optimize prompts for in-context learning (ICL) by employing one LLM as a generator, another as a discriminator, and a third as a prompt modifier.
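The three-role setup above can be sketched as a simple optimization loop. This is a minimal illustration with hypothetical stub callables (`gen_llm`, `disc_llm`, `mod_llm` are stand-ins, not the paper's API): the modifier proposes prompt variants, and the generator keeps the variant whose outputs the discriminator finds hardest to flag as model-generated.

```python
def adv_icl_round(gen_llm, disc_llm, mod_llm, prompt, tasks):
    """One round of an adversarial prompt-optimization loop (sketch).

    gen_llm(prompt, task) -> candidate output for a task
    disc_llm(task, output) -> score in [0, 1]: how confidently the
        discriminator flags the output as model-generated
    mod_llm(prompt) -> list of edited prompt variants
    """
    def disc_score(p):
        # Average discriminator confidence over a batch of tasks.
        outs = [gen_llm(p, t) for t in tasks]
        return sum(disc_llm(t, o) for t, o in zip(tasks, outs)) / len(tasks)

    # Keep the prompt variant the discriminator is least able to detect.
    best, best_score = prompt, disc_score(prompt)
    for variant in mod_llm(prompt):
        score = disc_score(variant)
        if score < best_score:
            best, best_score = variant, score
    return best
```

In practice each role would be an LLM call; here the loop only shows the control flow of one generator-versus-discriminator round.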
1 code implementation • 31 Oct 2023 • Kaixin Li, Qisheng Hu, Xu Zhao, Hui Chen, Yuxi Xie, Tiedong Liu, Qizhe Xie, Junxian He
In this work, we explore the use of Large Language Models (LLMs) to edit code based on user instructions.
1 code implementation • 24 May 2023 • Yuxi Xie, Guanzhen Li, Min-Yen Kan
We introduce ECHo (Event Causality Inference via Human-Centric Reasoning), a diagnostic dataset of event causality inference grounded in visio-linguistic social scenarios.
1 code implementation • 23 May 2023 • James Xu Zhao, Yuxi Xie, Kenji Kawaguchi, Junxian He, Michael Qizhe Xie
Chain-of-Thought (CoT) and Program-Aided Language Models (PAL) represent two distinct reasoning methods, each with its own strengths.
Ranked #1 on Math Word Problem Solving on SVAMP
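The core contrast between the two methods can be shown with a toy sketch: where Chain-of-Thought states the final number in free text, a program-aided approach has the model emit a short program and lets a Python interpreter compute the answer. The generated snippet below is an illustrative stand-in for model output, not output from the actual system.

```python
# Stand-in for a model-generated program on a math word problem:
# "There are 23 apples; 7 are given away and 6 more are bought."
generated_program = """
apples = 23
given_away = 7
bought = 6
answer = apples - given_away + bought
"""

def run_program(src):
    """Execute model-generated arithmetic in a bare namespace and read
    the `answer` variable, so the interpreter (not the model) does the
    computation."""
    ns = {}
    exec(src, {"__builtins__": {}}, ns)
    return ns["answer"]
```

Delegating arithmetic to the interpreter removes a common CoT failure mode: correct reasoning steps followed by an incorrect final calculation.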
1 code implementation • Github 2023 • Qisheng Hu*, Kaixin Li*, Xu Zhao, Yuxi Xie, Tiedong Liu, Hui Chen, Qizhe Xie, Junxian He
In this work, we explore the use of large language models (LLMs) to edit code based on user instructions, covering a broad range of implicit tasks such as comment insertion, code optimization, and code refactoring.
no code implementations • NeurIPS 2023 • Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min-Yen Kan, Junxian He, Qizhe Xie
Stochastic beam search balances exploitation and exploration of the search space with temperature-controlled randomness.
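The temperature-controlled trade-off can be sketched as one beam-expansion step, where candidates are sampled without replacement with weights proportional to a softmax of their scores. This is a generic sketch of stochastic beam search, not the paper's exact decoding procedure; `expand` is a hypothetical continuation function.

```python
import math
import random

def stochastic_beam_step(beams, expand, temperature, k, rng):
    """One step of a stochastic beam search (sketch).

    beams: list of (sequence, cumulative_log_prob)
    expand: fn(sequence) -> list of (token, token_log_prob)
    Low temperature -> near-greedy top-k (exploitation);
    high temperature -> near-uniform sampling (exploration).
    """
    candidates = []
    for seq, lp in beams:
        for tok, tok_lp in expand(seq):
            candidates.append((seq + [tok], lp + tok_lp))

    # Sample k candidates without replacement, weighted by
    # softmax(score / temperature).
    pool = [(c, math.exp(lp / temperature)) for c, lp in ((c, c[1]) for c in candidates)]
    pool = [(cand, math.exp(lp / temperature)) for cand, lp in [(c, c[1]) for c in candidates]]
    pool = [(cand, math.exp(cand[1] / temperature)) for cand in candidates]
    chosen = []
    for _ in range(min(k, len(pool))):
        total = sum(w for _, w in pool)
        r = rng.random() * total
        acc = 0.0
        for i, (cand, w) in enumerate(pool):
            acc += w
            if acc >= r:
                chosen.append(cand)
                pool.pop(i)
                break
    return chosen
```

As `temperature` approaches zero the sampling weights concentrate on the highest-scoring continuations, recovering ordinary deterministic beam search.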
no code implementations • 1 Jan 2021 • Yuxi Xie, Danqing Huang, Jinpeng Wang, Chin-Yew Lin
Layout representation, which models visual elements in a canvas and their inter-relations, plays a crucial role in graphic design intelligence.
1 code implementation • COLING 2020 • Yuxi Xie, Liangming Pan, Dongzhe Wang, Min-Yen Kan, Yansong Feng
Recent question generation (QG) approaches often utilize the sequence-to-sequence framework (Seq2Seq) to optimize the log-likelihood of ground-truth questions using teacher forcing.
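The teacher-forcing objective referred to here can be written as a short sketch: at each decoding step the model is conditioned on the ground-truth prefix rather than its own previous predictions, and training minimizes the summed negative log-likelihood of the gold tokens (`model_step` is a hypothetical per-step model interface).

```python
import math

def teacher_forcing_nll(model_step, source, target):
    """Negative log-likelihood of a gold target sequence under teacher
    forcing: at step t the model conditions on the *gold* prefix
    target[:t], never on its own sampled outputs.

    model_step: fn(source, gold_prefix) -> dict mapping token -> probability
    """
    nll = 0.0
    for t, gold_tok in enumerate(target):
        probs = model_step(source, target[:t])
        nll -= math.log(probs[gold_tok])
    return nll
```

The mismatch between this training regime and free-running generation at test time (exposure bias) is one motivation for the reinforcement-style alternatives such approaches explore.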
1 code implementation • ACL 2020 • Liangming Pan, Yuxi Xie, Yansong Feng, Tat-Seng Chua, Min-Yen Kan
This paper proposes the problem of Deep Question Generation (DQG), which aims to generate complex questions that require reasoning over multiple pieces of information in the input passage.
1 code implementation • 2 Sep 2019 • Zechang Li, Yuxuan Lai, Yuxi Xie, Yansong Feng, Dongyan Zhao
The sketch is a high-level structure of the logical form exclusive of low-level details such as entities and predicates.
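The coarse-to-fine idea behind the sketch can be illustrated minimally: a first stage produces a logical-form template with placeholders, and a second stage fills in the low-level details. The placeholder names and example logical form below are illustrative, not the paper's formalism.

```python
def fill_sketch(sketch, slots):
    """Second stage of a coarse-to-fine parser (sketch): substitute
    low-level details (entities, predicates) into the placeholders of
    a high-level logical-form template, one occurrence per slot."""
    out = sketch
    for placeholder, value in slots.items():
        out = out.replace(placeholder, value, 1)
    return out
```

Separating the two stages lets the sketch generalize across questions that share structure but differ in their entities and predicates.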