no code implementations • 8 Mar 2024 • Kunal Handa, Yarin Gal, Ellie Pavlick, Noah Goodman, Jacob Andreas, Alex Tamkin, Belinda Z. Li
We introduce OPEN (Optimal Preference Elicitation with Natural language), a framework that uses Bayesian optimal experimental design (BOED) to guide the choice of informative questions and an LM to extract features and translate abstract BOED queries into natural language questions.
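A minimal sketch of the BOED-style question selection the snippet describes, assuming a Monte Carlo belief over preference weights and a Bernoulli answer model; the feature names, response model, and template verbalization (standing in for the LM) are illustrative, not OPEN's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical belief over a user's preference weight for each feature,
# represented by posterior samples (a stand-in for the belief OPEN maintains).
features = ["price", "distance", "rating"]
belief = rng.normal(0.0, 1.0, size=(1000, len(features)))

def expected_info_gain(samples: np.ndarray, i: int) -> float:
    """EIG of a yes/no question about feature i under p(yes) = sigmoid(w_i):
    the mutual information between the answer and the weights."""
    p_yes = 1.0 / (1.0 + np.exp(-samples[:, i]))   # per-sample answer prob
    p_bar = p_yes.mean()                            # marginal answer prob
    def h(p):
        return -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    return h(p_bar) - h(p_yes).mean()               # I(answer; weights)

best = max(range(len(features)), key=lambda i: expected_info_gain(belief, i))
# In OPEN, an LM verbalizes the abstract query; a template stands in here.
print(f"Do you care about the {features[best]} of an option?")
```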
no code implementations • 28 Feb 2024 • Andi Peng, Ilia Sucholutsky, Belinda Z. Li, Theodore R. Sumers, Thomas L. Griffiths, Jacob Andreas, Julie A. Shah
We describe a framework for using natural language to design state abstractions for imitation learning.
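A toy sketch of language-designed state abstraction for imitation, with a keyword matcher standing in for the LM that maps an instruction to relevant state features; the robot features and matching rule are invented for illustration:

```python
import numpy as np

# Hypothetical state features for a pick-and-place robot.
FEATURES = ["cup_x", "cup_y", "lamp_x", "lamp_y", "gripper_open"]

def relevant_features(instruction: str) -> set[str]:
    """Stand-in for an LM call that maps language to relevant features."""
    return {f for f in FEATURES if any(w in f for w in instruction.lower().split())}

def abstract_state(state: np.ndarray, keep: set[str]) -> np.ndarray:
    """Zero out features the instruction deems irrelevant before imitation."""
    mask = np.array([f in keep for f in FEATURES], dtype=float)
    return state * mask

state = np.array([0.2, 0.7, 0.9, 0.1, 1.0])
print(abstract_state(state, relevant_features("bring me the cup")))
```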
no code implementations • 5 Feb 2024 • Andi Peng, Andreea Bobu, Belinda Z. Li, Theodore R. Sumers, Ilia Sucholutsky, Nishanth Kumar, Thomas L. Griffiths, Julie A. Shah
We observe that how humans behave reveals how they see the world.
1 code implementation • 17 Oct 2023 • Belinda Z. Li, Alex Tamkin, Noah Goodman, Jacob Andreas
Language models (LMs) can be directed to perform target tasks by using labeled examples or natural language prompts.
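The two directing strategies the snippet names, shown side by side as plain prompt strings (the review task and wording are illustrative):

```python
# Directing an LM at the same task (sentiment classification) two ways:
# a few-shot prompt built from labeled examples, and a zero-shot instruction.
few_shot = (
    "Review: The food was cold. Sentiment: negative\n"
    "Review: Loved every minute! Sentiment: positive\n"
    "Review: Service was slow. Sentiment:"
)
instruction = (
    "Classify the sentiment of this review as positive or negative:\n"
    "Service was slow."
)
# Either string is a task specification handed to the LM; the paper studies
# eliciting such specifications interactively rather than assuming them.
```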
no code implementations • 8 Jul 2023 • Belinda Z. Li, Jason Eisner, Adam Pauls, Sam Thomson
Voice dictation is an increasingly important text input modality.
1 code implementation • 3 Apr 2023 • Evan Hernandez, Belinda Z. Li, Jacob Andreas
Neural language models (LMs) represent facts about the world described by text.
no code implementations • 3 Feb 2023 • Belinda Z. Li, William Chen, Pratyusha Sharma, Jacob Andreas
Language models trained on large text corpora encode rich distributional information about real-world environments and action sequences.
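One way to surface that distributional information is to score candidate action sequences by their log-probability under an off-the-shelf LM; GPT-2 below is an arbitrary stand-in, not the model used in the paper:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_logprob(text: str) -> float:
    """Sum of token log-probs under the LM, usable as a prior over plans."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[:, :-1], dim=-1)
    return logp.gather(2, ids[:, 1:, None]).sum().item()

plans = ["open fridge, take milk, pour milk",
         "pour milk, open fridge, take milk"]
for p in plans:  # the LM should prefer the physically coherent ordering
    print(p, sequence_logprob(p))
```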
no code implementations • 20 Dec 2022 • Belinda Z. Li, Maxwell Nye, Jacob Andreas
Language models (LMs) often generate incoherent outputs: they refer to events and entity states that are incompatible with the state of the world described in their inputs.
2 code implementations • NAACL 2022 • Belinda Z. Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, Jacob Andreas
When a neural language model (LM) is adapted to perform a new task, what aspects of the task predict the eventual performance of the model?
1 code implementation • ACL 2021 • Belinda Z. Li, Maxwell Nye, Jacob Andreas
Does the effectiveness of neural language models derive entirely from accurate modeling of surface word co-occurrence statistics, or do these models represent and reason about the world they describe?
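A standard way to test for such world representations is to train a linear probe on hidden states; the sketch below uses synthetic activations with a planted linear signal in place of real LM states:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in "hidden states" for mentions of an entity, carrying a weak linear
# signal about its state (e.g. box open vs. closed); real probes use LM
# activations in place of these synthetic vectors.
labels = rng.integers(0, 2, size=500)
direction = rng.normal(size=64)
hidden = rng.normal(size=(500, 64)) + 0.7 * labels[:, None] * direction

probe = LogisticRegression(max_iter=1000).fit(hidden[:400], labels[:400])
# Above-chance held-out accuracy is the evidence that state is represented.
print("probe accuracy:", probe.score(hidden[400:], labels[400:]))
```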
no code implementations • EMNLP 2021 • Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, Madian Khabsa
Current NLP models are predominantly trained through a two-stage "pre-train then fine-tune" pipeline.
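The pipeline in miniature, assuming the Hugging Face transformers API: stage one is consumed as a released checkpoint, and stage two is a single illustrative fine-tuning step (model name and example are arbitrary):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stage 1 ("pre-train") happened offline; we just load the checkpoint.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Stage 2 ("fine-tune"): one gradient step on one labeled example.
batch = tok("great movie!", return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1])).loss
loss.backward()
torch.optim.AdamW(model.parameters(), lr=2e-5).step()
```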
1 code implementation • NAACL 2021 • Nayeon Lee, Belinda Z. Li, Sinong Wang, Pascale Fung, Hao Ma, Wen-tau Yih, Madian Khabsa
In this paper, we introduce UnifiedM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup.
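A generic multi-task sketch of a "single, unified setup": one shared encoder with a classification head per domain. The domains, encoder, and dimensions below are invented; UnifiedM2's actual architecture may differ:

```python
import torch
import torch.nn as nn

class MultiDomainClassifier(nn.Module):
    """Shared encoder with one classification head per misinformation domain."""

    def __init__(self, hidden: int, domains: dict[str, int]):
        super().__init__()
        self.encoder = nn.LSTM(128, hidden, batch_first=True)  # stand-in encoder
        self.heads = nn.ModuleDict(
            {d: nn.Linear(hidden, n) for d, n in domains.items()}
        )

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        _, (h, _) = self.encoder(x)       # shared representation
        return self.heads[domain](h[-1])  # domain-specific prediction

model = MultiDomainClassifier(64, {"fake_news": 2, "rumor": 3, "clickbait": 2})
logits = model(torch.randn(4, 10, 128), domain="rumor")  # shape (4, 3)
```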
no code implementations • 31 Dec 2020 • Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, Madian Khabsa
Our policy packs task-relevant knowledge into the parameters of a language model.
3 code implementations • EMNLP 2020 • Belinda Z. Li, Sewon Min, Srinivasan Iyer, Yashar Mehdad, Wen-tau Yih
We present ELQ, a fast end-to-end entity linking model for questions, which uses a biencoder to jointly perform mention detection and linking in one pass.
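A shape-level sketch of one-pass mention detection plus linking via dot products, with random tensors standing in for ELQ's BERT biencoder outputs; the span scorer and pooling here are simplified illustrations:

```python
import torch

# Hypothetical shapes: token embeddings from a question encoder and a table
# of precomputed entity embeddings.
tokens = torch.randn(12, 256)      # one question, 12 tokens
entities = torch.randn(5000, 256)  # entity embedding table

# Mention detection: score each candidate [start, end] span (length <= 3).
start_w, end_w = torch.randn(256), torch.randn(256)
spans = [(s, e) for s in range(12) for e in range(s, min(s + 3, 12))]
span_scores = torch.stack(
    [tokens[s] @ start_w + tokens[e] @ end_w for s, e in spans]
)

# Linking: pool the best span and score it against all entities with a dot
# product, so detection and disambiguation happen in one pass.
s, e = spans[span_scores.argmax()]
mention = tokens[s : e + 1].mean(dim=0)
best_entity = (entities @ mention).argmax()
```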
15 code implementations • 8 Jun 2020 • Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma
Large transformer models have shown extraordinary success in achieving state-of-the-art results in many natural language processing applications.
no code implementations • WS 2020 • Nayeon Lee, Belinda Z. Li, Sinong Wang, Wen-tau Yih, Hao Ma, Madian Khabsa
Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data.
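A LAMA-style cloze probe makes this concrete: query a pre-trained masked LM for a fact with no fine-tuning (the model choice is arbitrary):

```python
from transformers import pipeline

# Cloze-style probe of pre-training knowledge: no task-specific training.
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The capital of France is [MASK].", top_k=3):
    print(pred["token_str"], round(pred["score"], 3))
```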
1 code implementation • ACL 2020 • Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer
We improve upon pairwise annotation for active learning in coreference resolution by asking annotators to identify a mention's antecedent whenever a presented mention pair is deemed not coreferent.
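A sketch of that annotation protocol, with input() standing in for a real annotation interface; the recovery step on a "no" judgment is what distinguishes it from plain pairwise labeling:

```python
def annotate(mention: str, candidate: str, context: list[str]) -> str | None:
    """Pairwise judgment, plus an antecedent query on a negative answer."""
    if input(f"Are '{mention}' and '{candidate}' coreferent? [y/n] ") == "y":
        return candidate
    # On "no", recover extra signal: ask directly for the true antecedent
    # among earlier mentions (or none), instead of discarding the example.
    print("Earlier mentions:", ", ".join(f"{i}: {m}" for i, m in enumerate(context)))
    choice = input("Index of the true antecedent (blank if none): ")
    return context[int(choice)] if choice else None
```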