1 code implementation • 1 Apr 2024 • Kanishk Gandhi, Denise Lee, Gabriel Grand, Muxin Liu, Winson Cheng, Archit Sharma, Noah D. Goodman
In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string -- a stream of search (SoS).
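The core idea — serializing an entire search process, including mistakes and backtracking, into one linear string a language model can be trained on — can be illustrated with a minimal sketch. The toy state space (start at 1, apply +1 or *2 to reach a target) and the trace vocabulary are illustrative assumptions, not the tasks or format used in the paper.

```python
from typing import List

def dfs_trace(state: int, target: int, depth: int, trace: List[str]) -> bool:
    """Depth-first search that records every step, including dead ends,
    as a linear text trace -- the 'flattened string' idea."""
    trace.append(f"visit {state}")
    if state == target:
        trace.append("goal")
        return True
    if state > target or depth == 0:
        trace.append("backtrack")
        return False
    for op, nxt in (("+1", state + 1), ("*2", state * 2)):
        trace.append(f"try {op}")
        if dfs_trace(nxt, target, depth - 1, trace):
            return True
    trace.append("backtrack")
    return False

trace: List[str] = []
dfs_trace(1, 5, depth=4, trace=trace)
stream = " ; ".join(trace)  # one flat string a language model could learn from
print(stream)
```

Because failed branches and backtracks stay in the stream, a model trained on such strings sees the process of search, not just the final solution path.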
no code implementations • 29 Feb 2024 • Gabriel Grand, Valerio Pepe, Jacob Andreas, Joshua B. Tenenbaum
Questions combine our mastery of language with our remarkable facility for reasoning about uncertainty.
1 code implementation • 30 Oct 2023 • Gabriel Grand, Lionel Wong, Maddy Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas
While large language models (LLMs) now excel at code generation, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs.
1 code implementation • 22 Jun 2023 • Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum
Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language.
2 code implementations • 5 Jun 2023 • Alexander K. Lew, Tan Zhi-Xuan, Gabriel Grand, Vikash K. Mansinghka
Even after fine-tuning and reinforcement learning, large language models (LLMs) can be difficult, if not impossible, to control reliably with prompts alone.
1 code implementation • 1 May 2023 • Benjamin Lipkin, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum
These results inform our understanding of the inferential capacity of statistical language models and their use in pragmatic and semantic parsing applications.
1 code implementation • 29 Nov 2022 • Matthew Bowers, Theo X. Olausson, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum, Kevin Ellis, Armando Solar-Lezama
This paper introduces corpus-guided top-down synthesis as a mechanism for synthesizing library functions that capture common functionality from a corpus of programs in a domain-specific language (DSL).
2 code implementations • 5 Sep 2022 • Walid Ahmad, Elana Simon, Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar
Large pretrained models such as GPT-3 have had a tremendous impact on modern natural language processing by leveraging self-supervised learning to learn salient representations that can be readily fine-tuned for a wide variety of downstream tasks.
Ranked #2 on Molecular Property Prediction on the Clearance benchmark
1 code implementation • 11 May 2022 • Catherine Wong, William P. McCarthy, Gabriel Grand, Yoni Friedman, Joshua B. Tenenbaum, Jacob Andreas, Robert D. Hawkins, Judith E. Fan
Our understanding of the visual world goes beyond naming objects, encompassing our ability to parse objects into meaningful parts, attributes, and relations.
3 code implementations • 19 Oct 2020 • Seyone Chithrananda, Gabriel Grand, Bharath Ramsundar
Graph neural networks (GNNs) and chemical fingerprints are the predominant approaches to representing molecules for property prediction.
1 code implementation • NAACL 2019 • Gabriel Grand, Yonatan Belinkov
Visual question answering (VQA) models have been shown to over-rely on linguistic biases in VQA datasets, answering questions "blindly" without considering visual context.
no code implementations • 3 Jun 2018 • Gabriel Grand, Aron Szanto, Yoon Kim, Alexander Rush
Visual question answering (VQA) models respond to open-ended natural language questions about images.
no code implementations • 5 Feb 2018 • Gabriel Grand, Idan Asher Blank, Francisco Pereira, Evelina Fedorenko
Because related words appear in similar contexts, such spaces - called "word embeddings" - can be learned from patterns of lexical co-occurrences in natural language.