Search Results for author: Bailin Wang

Found 29 papers, 25 papers with code

Learning to Decode Collaboratively with Multiple Language Models

1 code implementation · 6 Mar 2024 · Shannon Zejiang Shen, Hunter Lang, Bailin Wang, Yoon Kim, David Sontag

We propose a method to teach multiple large language models (LLMs) to collaborate by interleaving their generations at the token level.

Instruction Following
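
A minimal sketch of token-level interleaving, with toy next-token distributions standing in for real LLMs (`base_lm`, `assistant_lm`, and the greedy switch rule are all illustrative; the paper learns the switch as a latent variable):

```python
# Hypothetical stand-ins for real models: each maps a token prefix to a
# {token: probability} next-token distribution.
def base_lm(prefix):
    return {"the": 0.6, "a": 0.3, "<eos>": 0.1}

def assistant_lm(prefix):
    return {"the": 0.2, "a": 0.2, "<eos>": 0.6}

def collaborative_decode(models, prefix=(), max_len=10):
    """Interleave generations at the token level: at each step a switch
    decides which model emits the next token. The paper treats the switch
    as a latent variable learned without supervision; here it is
    approximated greedily (the most confident model wins)."""
    out = list(prefix)
    for _ in range(max_len):
        # Each model proposes its highest-probability next token.
        proposals = [max(m(tuple(out)).items(), key=lambda kv: kv[1]) for m in models]
        token, _ = max(proposals, key=lambda kv: kv[1])
        if token == "<eos>":
            break
        out.append(token)
    return out

print(collaborative_decode([base_lm, assistant_lm]))
```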

In-Context Language Learning: Architectures and Algorithms

1 code implementation · 23 Jan 2024 · Ekin Akyürek, Bailin Wang, Yoon Kim, Jacob Andreas

Finally, we show that hard-wiring these heads into neural models improves performance not just on ICLL, but also on natural language modeling -- improving the perplexity of 340M-parameter models by up to 1.14 points (6.7%) on the SlimPajama dataset.

In-Context Learning · Language Modelling
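
The heads in question implement n-gram-style retrieval. A minimal sketch of what such a head computes, written in plain Python rather than inside a transformer layer (the function name and majority-vote readout are illustrative):

```python
from collections import Counter

def ngram_head_predict(tokens, n=2):
    """What a hard-wired n-gram head computes: find every earlier position
    whose preceding (n-1)-gram matches the current context, and vote over
    the tokens that followed. The paper wires this behavior into attention
    heads; this sketch only mirrors the computation."""
    context = tuple(tokens[-(n - 1):])
    votes = Counter(
        tokens[i + n - 1]                       # token that followed the match
        for i in range(len(tokens) - n + 1)
        if tuple(tokens[i:i + n - 1]) == context
    )
    return votes.most_common(1)[0][0] if votes else None

print(ngram_head_predict(list("abcabcab")))  # context ('b',) -> predicts 'c'
```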

Structured Code Representations Enable Data-Efficient Adaptation of Code Language Models

no code implementations · 19 Jan 2024 · Mayank Agarwal, Yikang Shen, Bailin Wang, Yoon Kim, Jie Chen

In this work, we explore data-efficient adaptation of pre-trained code models by further pre-training and fine-tuning them with program structures.
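
One way such program structure can be obtained is from the parse tree itself. A minimal sketch using Python's built-in `ast` module (the S-expression serialization below is an assumption for illustration, not necessarily the paper's format):

```python
import ast

def serialize_tree(node):
    """Linearize a Python program's AST as an S-expression: one simple way
    to expose program structure to a sequence model."""
    children = [serialize_tree(c) for c in ast.iter_child_nodes(node)]
    label = type(node).__name__
    return f"({label} {' '.join(children)})" if children else f"({label})"

print(serialize_tree(ast.parse("def add(a, b):\n    return a + b")))
```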

Gated Linear Attention Transformers with Hardware-Efficient Training

2 code implementations · 11 Dec 2023 · Songlin Yang, Bailin Wang, Yikang Shen, Rameswar Panda, Yoon Kim

When used as a replacement for the standard attention layer in Transformers, the resulting gated linear attention (GLA) Transformer is found to perform competitively against the LLaMA-architecture Transformer (Touvron et al., 2023) as well as recent linear-time-inference baselines such as RetNet (Sun et al., 2023a) and Mamba (Gu & Dao, 2023) on moderate-scale language modeling experiments.

Language Modelling
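
At inference time, gated linear attention admits a constant-memory recurrence. A minimal single-head sketch of that recurrence (the paper's contribution is a hardware-efficient chunkwise training algorithm, omitted here):

```python
import torch

def gla_recurrent(q, k, v, alpha):
    """Recurrent (inference-time) form of gated linear attention: a 2D
    state S is decayed by a data-dependent gate alpha_t at each step, so
    generation costs O(1) memory in sequence length.
    q, k, alpha: (T, d_k) with alpha in (0, 1); v: (T, d_v)."""
    T, d_k = q.shape
    S = torch.zeros(d_k, v.shape[1])
    outputs = []
    for t in range(T):
        S = alpha[t].unsqueeze(1) * S + torch.outer(k[t], v[t])  # gated state update
        outputs.append(S.T @ q[t])                               # read out with the query
    return torch.stack(outputs)

T, d_k, d_v = 8, 4, 4
out = gla_recurrent(torch.randn(T, d_k), torch.randn(T, d_k),
                    torch.randn(T, d_v), torch.sigmoid(torch.randn(T, d_k)))
print(out.shape)  # torch.Size([8, 4])
```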

Explain-then-Translate: An Analysis on Improving Program Translation with Self-generated Explanations

1 code implementation · 13 Nov 2023 · Zilu Tang, Mayank Agarwal, Alex Shypula, Bailin Wang, Derry Wijaya, Jie Chen, Yoon Kim

This work explores the use of self-generated natural language explanations as an intermediate step for code-to-code translation with language models.

Code Translation · Translation
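
A minimal sketch of the two-step pipeline, assuming a hypothetical `llm(prompt) -> str` completion function; the prompt wording here is illustrative, not the paper's:

```python
def explain_then_translate(source_code, src_lang, tgt_lang, llm):
    # Step 1: the model explains the source program in natural language.
    explanation = llm(
        f"Explain what the following {src_lang} program does:\n{source_code}"
    )
    # Step 2: the self-generated explanation serves as an intermediate
    # step when translating to the target language.
    return llm(
        f"{src_lang} program:\n{source_code}\n\n"
        f"Explanation:\n{explanation}\n\n"
        f"Translate the program to {tgt_lang}:"
    )
```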

Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement

1 code implementation · 12 Oct 2023 · Linlu Qiu, Liwei Jiang, Ximing Lu, Melanie Sclar, Valentina Pyatkin, Chandra Bhagavatula, Bailin Wang, Yoon Kim, Yejin Choi, Nouha Dziri, Xiang Ren

The ability to derive underlying principles from a handful of observations and then generalize to novel situations -- known as inductive reasoning -- is central to human intelligence.
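
The refinement loop studied in the paper can be sketched as propose-test-refine, assuming a hypothetical `propose(examples, feedback)` that asks an LLM for a candidate rule expressed as an executable program:

```python
def refine_hypothesis(examples, propose, max_rounds=5):
    """Iterative hypothesis refinement: propose a rule, test it against
    the observed (input, output) pairs, and feed counterexamples back."""
    feedback = None
    for _ in range(max_rounds):
        rule = propose(examples, feedback)          # LLM proposes a hypothesis
        failures = [(x, y) for x, y in examples if rule(x) != y]
        if not failures:
            return rule                             # consistent with all observations
        feedback = failures                         # refine using counterexamples
    return rule
```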

Lemur: Harmonizing Natural Language and Code for Language Agents

1 code implementation · 10 Oct 2023 · Yiheng Xu, Hongjin Su, Chen Xing, Boyu Mi, Qian Liu, Weijia Shi, Binyuan Hui, Fan Zhou, Yitao Liu, Tianbao Xie, Zhoujun Cheng, Siheng Zhao, Lingpeng Kong, Bailin Wang, Caiming Xiong, Tao Yu

We introduce Lemur and Lemur-Chat, openly accessible language models optimized for both natural language and coding capabilities to serve as the backbone of versatile language agents.

An Investigation of LLMs' Inefficacy in Understanding Converse Relations

1 code implementation · 8 Oct 2023 · Chengwen Qi, Bowen Li, Binyuan Hui, Bailin Wang, Jinyang Li, Jinwang Wu, Yuanjun Laili

Our ConvRE features two tasks, Re2Text and Text2Re, which are formulated as multi-choice question answering to evaluate LLMs' ability to determine the matching between relations and associated text.

Knowledge Graph Completion · Question Answering +1

GenSim: Generating Robotic Simulation Tasks via Large Language Models

1 code implementation · 2 Oct 2023 · Lirui Wang, Yiyang Ling, Zhecheng Yuan, Mohit Shridhar, Chen Bao, Yuzhe Qin, Bailin Wang, Huazhe Xu, Xiaolong Wang

Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data.

Code Generation

Improving Generalization in Language Model-Based Text-to-SQL Semantic Parsing: Two Simple Semantic Boundary-Based Techniques

1 code implementation · 27 May 2023 · Daking Rai, Bailin Wang, Yilun Zhou, Ziyu Yao

Compositional and domain generalization present significant challenges in semantic parsing, even for state-of-the-art semantic parsers based on pre-trained language models (LMs).

Domain Generalization · Language Modelling +2

Can LLM Already Serve as A Database Interface? A BIg Bench for Large-Scale Database Grounded Text-to-SQLs

no code implementations · NeurIPS 2023 · Jinyang Li, Binyuan Hui, Ge Qu, Jiaxi Yang, Binhua Li, Bowen Li, Bailin Wang, Bowen Qin, Rongyu Cao, Ruiying Geng, Nan Huo, Xuanhe Zhou, Chenhao Ma, Guoliang Li, Kevin C. C. Chang, Fei Huang, Reynold Cheng, Yongbin Li

Our emphasis on database values highlights the new challenges of dirty database contents, external knowledge grounding between NL questions and database contents, and SQL efficiency, particularly in the context of massive databases.

Semantic Parsing · SQL Parsing +1

Explaining Large Language Model-Based Neural Semantic Parsers (Student Abstract)

no code implementations · 25 Jan 2023 · Daking Rai, Yilun Zhou, Bailin Wang, Ziyu Yao

While large language models (LLMs) have demonstrated strong capability in structured prediction tasks such as semantic parsing, little research has explored the underlying mechanisms of their success.

Language Modelling · Large Language Model +2

Hierarchical Phrase-based Sequence-to-Sequence Learning

1 code implementation · 15 Nov 2022 · Bailin Wang, Ivan Titov, Jacob Andreas, Yoon Kim

We describe a neural transducer that maintains the flexibility of standard sequence-to-sequence (seq2seq) models while incorporating hierarchical phrases as a source of inductive bias during training and as explicit constraints during inference.

Inductive Bias · Machine Translation +2

Proton: Probing Schema Linking Information from Pre-trained Language Models for Text-to-SQL Parsing

2 code implementations · 28 Jun 2022 · Lihan Wang, Bowen Qin, Binyuan Hui, Bowen Li, Min Yang, Bailin Wang, Binhua Li, Fei Huang, Luo Si, Yongbin Li

The importance of building text-to-SQL parsers which can be applied to new databases has long been acknowledged, and a critical step to achieve this goal is schema linking, i.e., properly recognizing mentions of unseen columns or tables when generating SQLs.

SQL Parsing · Text-To-SQL
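
To make the schema-linking task concrete, here is a crude string-overlap baseline (not Proton's method, which probes a pre-trained LM and can recover links that surface matching misses, e.g. "old" vs. "age"):

```python
def link_schema(question, schema):
    """Link question tokens to schema items by simple string overlap."""
    tokens = set(question.lower().replace("?", " ").split())
    return [(table, col)
            for table, cols in schema.items() for col in cols
            if set(col.lower().split("_")) & tokens]

schema = {"singer": ["singer_id", "name", "age"], "concert": ["concert_id", "year"]}
print(link_schema("How old is each singer?", schema))  # [('singer', 'singer_id')]
# Note: "old" -> "age" is missed entirely, which is why probing a
# pre-trained LM for linking information helps.
```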

Structured Reordering for Modeling Latent Alignments in Sequence Transduction

1 code implementation · NeurIPS 2021 · Bailin Wang, Mirella Lapata, Ivan Titov

Despite success in many domains, neural models struggle in settings where train and test examples are drawn from different distributions.

Machine Translation · Semantic Parsing +2

Learning from Executions for Semantic Parsing

1 code implementation · NAACL 2021 · Bailin Wang, Mirella Lapata, Ivan Titov

Based on the observation that programs which correspond to NL utterances must always be executable, we propose to encourage a parser to generate executable programs for unlabeled utterances.

Semantic Parsing
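
The core idea can be sketched as execution-guided filtering over sampled candidates; `parser.sample` and `execute` are hypothetical stand-ins, and the paper derives a proper learning objective rather than the hard filter shown here:

```python
def executable_candidates(parser, execute, utterance, n_samples=32):
    """Sample candidate programs for an unlabeled utterance and keep only
    those that execute without error, to be used as a training signal."""
    kept = []
    for program in parser.sample(utterance, n=n_samples):
        try:
            execute(program)      # non-executable programs raise here
            kept.append(program)
        except Exception:
            continue
    return kept
```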

Learning to Synthesize Data for Semantic Parsing

1 code implementation · NAACL 2021 · Bailin Wang, Wenpeng Yin, Xi Victoria Lin, Caiming Xiong

Moreover, explicitly modeling compositions using a PCFG leads to better exploration of unseen programs, thus generating more diverse data.

Domain Generalization · Semantic Parsing +3
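
A toy illustration of PCFG-based synthesis: sampling expansions by rule probability composes program fragments into strings, including ones unseen in the labeled data (the grammar and weights below are invented for illustration, not the paper's):

```python
import random

# A toy PCFG over SQL-like programs: nonterminal -> [(expansion, probability)].
PCFG = {
    "QUERY": [(["select", "COL", "from", "TABLE"], 0.7),
              (["select", "count", "(", "COL", ")", "from", "TABLE"], 0.3)],
    "COL": [(["name"], 0.5), (["age"], 0.5)],
    "TABLE": [(["singer"], 0.5), (["concert"], 0.5)],
}

def sample(symbol):
    """Recursively expand a symbol, choosing rules by their probability."""
    if symbol not in PCFG:                  # terminal symbol
        return [symbol]
    rules, weights = zip(*PCFG[symbol])
    rhs = random.choices(rules, weights=weights)[0]
    tokens = []
    for s in rhs:
        tokens.extend(sample(s))
    return tokens

print(" ".join(sample("QUERY")))  # e.g. "select age from singer"
```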

Meta-Learning for Domain Generalization in Semantic Parsing

no code implementations · NAACL 2021 · Bailin Wang, Mirella Lapata, Ivan Titov

The importance of building semantic parsers which can be applied to new domains and generate programs unseen at training has long been acknowledged, and datasets testing out-of-domain performance are becoming increasingly available.

Domain Generalization · Meta-Learning +1

GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing

1 code implementation · ICLR 2021 · Tao Yu, Chien-Sheng Wu, Xi Victoria Lin, Bailin Wang, Yi Chern Tan, Xinyi Yang, Dragomir Radev, Richard Socher, Caiming Xiong

We present GraPPa, an effective pre-training approach for table semantic parsing that learns a compositional inductive bias in the joint representations of textual and tabular data.

Inductive Bias · Language Modelling +3

RAT-SQL: Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers

4 code implementations · ACL 2020 · Bailin Wang, Richard Shin, Xiaodong Liu, Oleksandr Polozov, Matthew Richardson

The generalization challenge lies in (a) encoding the database relations in an accessible way for the semantic parser, and (b) modeling alignment between database columns and their mentions in a given query.

Relation · Semantic Parsing +1
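
The paper's relation-aware self-attention addresses both points by injecting learned relation embeddings into attention. A minimal single-head sketch in the style of Shaw et al.'s relative attention, on which RAT-SQL builds (query/key/value projections omitted for brevity):

```python
import math
import torch

def relation_aware_attention(x, rel_k, rel_v):
    """Single-head relation-aware self-attention: a learned embedding for
    the relation between items i and j (column-in-table,
    question-word-matches-column, ...) is added to the keys and values.
    x: (n, d) item encodings; rel_k, rel_v: (n, n, d) relation embeddings."""
    n, d = x.shape
    # e_ij = x_i . x_j + x_i . rel_k[i, j], scaled as usual.
    scores = (x @ x.T + (x.unsqueeze(1) * rel_k).sum(-1)) / math.sqrt(d)
    attn = torch.softmax(scores, dim=-1)
    # z_i = sum_j a_ij * (x_j + rel_v[i, j])
    return attn @ x + (attn.unsqueeze(-1) * rel_v).sum(dim=1)

out = relation_aware_attention(torch.randn(5, 16),
                               torch.randn(5, 5, 16), torch.randn(5, 5, 16))
print(out.shape)  # torch.Size([5, 16])
```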

Learning Semantic Parsers from Denotations with Latent Structured Alignments and Abstract Programs

1 code implementation · IJCNLP 2019 · Bailin Wang, Ivan Titov, Mirella Lapata

Semantic parsing aims to map natural language utterances onto machine interpretable meaning representations, aka programs whose execution against a real-world environment produces a denotation.

Inductive Bias · Semantic Parsing
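
A tiny worked example of the utterance → program → denotation pipeline (table, utterance, and program below are all invented for illustration):

```python
# A toy environment: a small table of rows.
table = [{"city": "Boston", "pop": 0.7}, {"city": "Austin", "pop": 1.0}]

# Utterance: "Which city has the largest population?"
def program(rows):
    # A candidate program -- latent in this setting; only its denotation
    # ("Austin") is observed and supplies weak supervision for the parser.
    return max(rows, key=lambda r: r["pop"])["city"]

assert program(table) == "Austin"
```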

Combining Spans into Entities: A Neural Two-Stage Approach for Recognizing Discontiguous Entities

1 code implementation · IJCNLP 2019 · Bailin Wang, Wei Lu

In medical documents, it is possible that an entity of interest not only contains a discontiguous sequence of words but also overlaps with another entity.

Neural Segmental Hypergraphs for Overlapping Mention Recognition

1 code implementation · EMNLP 2018 · Bailin Wang, Wei Lu

In this work, we propose a novel segmental hypergraph representation to model overlapping entity mentions that are prevalent in many practical datasets.

Nested Mention Recognition · Nested Named Entity Recognition +1
