Search Results for author: Lionel Wong

Found 8 papers, 5 papers with code

Grounding Language about Belief in a Bayesian Theory-of-Mind

no code implementations • 16 Feb 2024 • Lance Ying, Tan Zhi-Xuan, Lionel Wong, Vikash Mansinghka, Joshua Tenenbaum

In this paper, we take a step towards an answer by grounding the semantics of belief statements in a Bayesian theory-of-mind. By modeling how humans jointly infer coherent sets of goals, beliefs, and plans that explain an agent's actions, then evaluating statements about the agent's beliefs against these inferences via epistemic logic, our framework provides a conceptual role semantics for belief, explaining the gradedness and compositionality of human belief attributions, as well as their intimate connection with goals and plans.

Attribute
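To make the approach concrete, here is a minimal, self-contained sketch of grading a belief statement against a Bayesian theory-of-mind posterior. It is not the paper's implementation; the toy gem-and-box scenario, hypothesis space, likelihood numbers, and the `degree_of_belief` helper are invented for illustration.

```python
# Illustrative sketch only: grade "the agent believes X" against a toy
# Bayesian theory-of-mind posterior (all numbers and names are invented).

from itertools import product

# Hypothetical scenario: the agent wants one of two gems and believes the key
# is in one of two boxes; we observe which box the agent walks toward.
GOALS = ["red_gem", "blue_gem"]
BELIEFS = ["key_in_box_1", "key_in_box_2"]

def likelihood(action, goal, belief):
    """P(action | goal, belief): a noisily rational agent heads for the box
    it believes holds the key (toy probabilities)."""
    preferred = "go_to_box_1" if belief == "key_in_box_1" else "go_to_box_2"
    return 0.9 if action == preferred else 0.1

def posterior(action):
    """Joint posterior over (goal, belief) by enumeration, uniform prior."""
    scores = {(g, b): likelihood(action, g, b) for g, b in product(GOALS, BELIEFS)}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

def degree_of_belief(statement, action):
    """Grade 'the agent believes X' as the posterior mass on hypotheses where
    X holds, yielding a graded (not binary) belief attribution."""
    return sum(p for (goal, belief), p in posterior(action).items() if statement(goal, belief))

# "The agent believes the key is in box 1", after watching it walk to box 1.
print(degree_of_belief(lambda goal, belief: belief == "key_in_box_1", "go_to_box_1"))  # -> 0.9
```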

Learning adaptive planning representations with natural language guidance

no code implementations • 13 Dec 2023 • Lionel Wong, Jiayuan Mao, Pratyusha Sharma, Zachary S. Siegel, Jiahai Feng, Noa Korneev, Joshua B. Tenenbaum, Jacob Andreas

Effective planning in the real world requires not only world knowledge, but the ability to leverage that knowledge to build the right representation of the task at hand.

Decision Making • World Knowledge

LILO: Learning Interpretable Libraries by Compressing and Documenting Code

1 code implementation • 30 Oct 2023 • Gabriel Grand, Lionel Wong, Maddy Bowers, Theo X. Olausson, Muxin Liu, Joshua B. Tenenbaum, Jacob Andreas

While large language models (LLMs) now excel at code generation, a key aspect of software development is the art of refactoring: consolidating code into libraries of reusable and readable programs.

Code Generation • Program Synthesis
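As a rough illustration of the refactoring idea, here is a sketch (not LILO's actual algorithm) that factors the most common repeated subexpression in a toy corpus out into a named library function. The corpus, the scoring rule, and the `f0` name are invented for illustration; a documentation pass would then name and describe the extracted function.

```python
# Illustrative sketch only: "compress" a toy corpus by extracting the most
# valuable repeated subexpression as a reusable library function.

from collections import Counter

# Hypothetical corpus: programs represented as nested tuples (operator, args...).
CORPUS = [
    ("scale", ("add", "x", ("mul", 2, "y")), 3),
    ("neg", ("add", "x", ("mul", 2, "y"))),
    ("add", "x", ("mul", 2, "y")),
]

def subexpressions(expr):
    """Yield every compound subexpression of a program, including the program itself."""
    if isinstance(expr, tuple):
        yield expr
        for child in expr:
            yield from subexpressions(child)

def best_abstraction(programs):
    """Pick the repeated subexpression whose extraction saves the most
    (frequency * printed size), a crude stand-in for a compression objective."""
    counts = Counter(s for p in programs for s in subexpressions(p))
    repeated = {s: c for s, c in counts.items() if c > 1}
    return max(repeated, key=lambda s: repeated[s] * len(str(s)), default=None)

def rewrite(expr, target, name):
    """Replace every occurrence of `target` with a call to the new library function."""
    if expr == target:
        return (name,)
    if isinstance(expr, tuple):
        return tuple(rewrite(child, target, name) for child in expr)
    return expr

abstraction = best_abstraction(CORPUS)
library = {"f0": abstraction}            # a documentation pass would name and describe f0
compressed = [rewrite(p, abstraction, "f0") for p in CORPUS]
print(library)
print(compressed)
```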

From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought

1 code implementation • 22 Jun 2023 • Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum

Our architecture integrates two computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language.

Probabilistic Programming • Relational Reasoning
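A small sketch of the two-part architecture described above, not the paper's code: the LLM translation step is stubbed with a hypothetical lookup table, the generative world model and its numbers are invented, and inference is done by simple rejection sampling.

```python
# Illustrative sketch only: "meaning construction" maps utterances to
# expressions over a world model; "thinking" is Bayesian inference in it.

import random

def translate(utterance):
    """Stand-in for the LLM translator: natural language -> a condition
    (a predicate over sampled worlds) in the world model's vocabulary."""
    hypothetical_parses = {
        "Alice is taller than Bob": lambda w: w["alice_height"] > w["bob_height"],
        "Bob is over six feet tall": lambda w: w["bob_height"] > 72,
    }
    return hypothetical_parses[utterance]

def sample_world():
    """Generative world model (prior): heights in inches, toy numbers."""
    return {"alice_height": random.gauss(65, 3), "bob_height": random.gauss(69, 3)}

def infer(query, conditions, n=20_000):
    """Rejection sampling: estimate P(query | conditions) under the prior."""
    kept = []
    for _ in range(n):
        world = sample_world()
        if all(condition(world) for condition in conditions):
            kept.append(world)
    return sum(query(world) for world in kept) / len(kept)

# Condition on one utterance, then query the probability of another.
observed = [translate("Alice is taller than Bob")]
print(infer(translate("Bob is over six feet tall"), observed))
```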

Evaluating statistical language models as pragmatic reasoners

1 code implementation • 1 May 2023 • Benjamin Lipkin, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum

These results shed light on the inferential capacity of statistical language models and inform their use in pragmatic and semantic parsing applications.

Negation • Semantic Parsing

Top-Down Synthesis for Library Learning

1 code implementation • 29 Nov 2022 • Matthew Bowers, Theo X. Olausson, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum, Kevin Ellis, Armando Solar-Lezama

This paper introduces corpus-guided top-down synthesis as a mechanism for synthesizing library functions that capture common functionality from a corpus of programs in a domain specific language (DSL).
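A toy sketch of the corpus-guided, top-down idea (not the paper's algorithm): abstraction candidates are produced by replacing subterms with holes, then scored by how many corpus programs they match and how much concrete structure they retain. The mini graphics DSL and the scoring rule are invented for illustration.

```python
# Illustrative sketch only: top-down abstraction candidates (subterms replaced
# by '?' holes) scored against a toy corpus.

from itertools import product

# Hypothetical corpus of graphics programs in a tiny DSL.
CORPUS = [
    ("repeat", ("rotate", 30, "square"), 12),
    ("repeat", ("rotate", 45, "triangle"), 8),
    ("repeat", ("rotate", 90, "line"), 4),
]

def candidates(expr):
    """Top-down: yield every way of replacing subterms of `expr` with '?' holes."""
    yield "?"
    if isinstance(expr, tuple):
        op, args = expr[0], expr[1:]
        for abstracted_args in product(*(candidates(a) for a in args)):
            yield (op,) + abstracted_args
    else:
        yield expr

def matches(template, program):
    """A hole matches anything; everything else must match structurally."""
    if template == "?":
        return True
    if isinstance(template, tuple) and isinstance(program, tuple) and len(template) == len(program):
        return all(matches(t, p) for t, p in zip(template, program))
    return template == program

def size(template):
    """Amount of concrete (non-hole) structure a template keeps."""
    if isinstance(template, tuple):
        return sum(size(t) for t in template)
    return 0 if template == "?" else 1

def best_abstraction(programs):
    """Crude stand-in for a compression objective: corpus matches * structure kept."""
    seen = {c for p in programs for c in candidates(p)}
    return max(seen, key=lambda t: sum(matches(t, p) for p in programs) * size(t))

print(best_abstraction(CORPUS))  # -> ('repeat', ('rotate', '?', '?'), '?')
```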
