no code implementations • 19 Feb 2024 • Hao Tang, Darren Key, Kevin Ellis
We give a model-based agent that builds a Python program representing its knowledge of the world based on its interactions with the environment.
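At its simplest, a world-model-as-code agent can be sketched as maintaining candidate Python transition functions and discarding any that observation falsifies. This is a toy version-space sketch under assumed names (`run_agent`, `env_step`); the paper's agent instead writes and revises the program itself:

```python
def run_agent(env_step, candidate_models, episodes=20):
    """Keep only the candidate world models (Python transition functions)
    consistent with everything observed so far."""
    models = list(candidate_models)
    state = 0
    history = []
    for _ in range(episodes):
        action = 1                        # a fixed probe action, for simplicity
        next_state = env_step(state, action)
        history.append((state, action, next_state))
        # discard any model whose predictions disagree with the history
        models = [m for m in models if all(m(s, a) == ns for s, a, ns in history)]
        state = next_state
    return models

# hypothetical environment: a counter that increments by the action
env = lambda s, a: s + a
surviving = run_agent(env, [lambda s, a: s + a, lambda s, a: s * 2, lambda s, a: s])
```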
no code implementations • 8 Feb 2024 • Wasu Top Piriyakulkij, Kevin Ellis
We build a computational model of how humans actively infer hidden rules by doing experiments.
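One generic way to formalize active experimentation (a sketch, not the paper's actual model) is to choose the experiment with the highest expected information gain over candidate rules; all names below are hypothetical:

```python
import math

def best_question(hypotheses, prior, questions):
    """Pick the question whose answer is expected to reduce uncertainty
    (Shannon entropy) over the hypotheses the most."""
    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)
    base = entropy(prior)
    best, best_gain = None, -1.0
    for name, answer_of in questions.items():
        gain = base
        # partition hypotheses by the answer each one predicts
        for a in {answer_of(h) for h in hypotheses}:
            mass = {h: prior[h] for h in hypotheses if answer_of(h) == a}
            z = sum(mass.values())
            gain -= z * entropy({h: p / z for h, p in mass.items()})
        if gain > best_gain:
            best, best_gain = name, gain
    return best

# three hypothetical hidden rules; asking "f(3)?" distinguishes all of them
hyps = ["double", "square", "identity"]
prior = {h: 1 / 3 for h in hyps}
questions = {
    "f(2)?": lambda h: {"double": 4, "square": 4, "identity": 2}[h],
    "f(3)?": lambda h: {"double": 6, "square": 9, "identity": 3}[h],
}
```

Here "f(2)?" cannot separate doubling from squaring, so the expected-gain criterion prefers "f(3)?".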
no code implementations • 19 Dec 2023 • Top Piriyakulkij, Volodymyr Kuleshov, Kevin Ellis
To enable this ability for instruction-tuned large language models (LLMs), one may prompt them to ask users questions to infer their preferences, transforming the language models into more robust, interactive systems.
no code implementations • 7 Dec 2023 • Yichao Liang, Kevin Ellis, João Henriques
Drawing inspiration from rapid motor adaptation (RMA) in locomotion and in-hand rotation, we use depth perception to develop agents tailored for rapid motor adaptation in a variety of manipulation tasks.

1 code implementation • 29 Nov 2022 • Matthew Bowers, Theo X. Olausson, Lionel Wong, Gabriel Grand, Joshua B. Tenenbaum, Kevin Ellis, Armando Solar-Lezama
This paper introduces corpus-guided top-down synthesis as a mechanism for synthesizing library functions that capture common functionality from a corpus of programs in a domain specific language (DSL).
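The full corpus-guided top-down algorithm is beyond a snippet, but the core compression idea — find a subexpression reused across the corpus and name it as a library function — can be sketched on programs represented as nested tuples. This is a simplified sketch (the abstraction takes no parameters); all names are hypothetical:

```python
from collections import Counter

def subtrees(expr):
    """Yield every compound subexpression of a program (a nested tuple)."""
    if isinstance(expr, tuple):
        yield expr
        for child in expr:
            yield from subtrees(child)

def best_abstraction(corpus):
    """Return the compound subexpression reused most often across the corpus."""
    counts = Counter(t for prog in corpus for t in subtrees(prog))
    return max(counts, key=lambda t: (counts[t], len(str(t))))

def rewrite(expr, target, name):
    """Replace occurrences of `target` with a call to the new library function."""
    if expr == target:
        return (name,)
    if isinstance(expr, tuple):
        return tuple(rewrite(c, target, name) for c in expr)
    return expr

corpus = [("add", ("mul", "x", "x"), "1"),
          ("sub", ("mul", "x", "x"), "y")]
shared = best_abstraction(corpus)          # the squaring subexpression
compressed = [rewrite(p, shared, "f0") for p in corpus]
```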
no code implementations • 29 Sep 2022 • Darren Key, Wen-Ding Li, Kevin Ellis
We develop an approach to estimate the probability that a program sampled from a large language model is correct.
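One simple proxy in this spirit (a sketch, not the paper's estimator) is self-consistency: run several sampled candidates on shared inputs and score each program by how often its outputs agree with the other samples:

```python
from collections import Counter

def agreement_scores(programs, inputs):
    """Score each candidate by the fraction of (program, input) votes that
    agree with its outputs -- high agreement is weak evidence of correctness."""
    runs = [[p(x) for x in inputs] for p in programs]
    votes = [Counter(run[i] for run in runs) for i in range(len(inputs))]
    total = len(programs) * len(inputs)
    return [sum(votes[i][out] for i, out in enumerate(run)) / total
            for run in runs]

# hypothetical candidates sampled for the task "double the input"
candidates = [lambda x: 2 * x, lambda x: x + x, lambda x: x * x]
scores = agreement_scores(candidates, [1, 2, 3])
```

The two semantically equivalent doubling programs reinforce each other and outscore the squaring program.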
no code implementations • 13 Jun 2022 • Hao Tang, Kevin Ellis
Toward combining inductive reasoning with perception abilities, we develop techniques for neurosymbolic program synthesis where perceptual input is first parsed by neural nets into a low-dimensional interpretable representation, which is then processed by a synthesized program.
no code implementations • 5 Apr 2022 • Saujas Vaduguru, Kevin Ellis, Yewen Pu
Surprisingly, we find that the synthesizer assuming a factored approximation performs better than a synthesizer assuming an exact joint distribution when evaluated on natural human inputs.
1 code implementation • ICLR 2022 • Kensen Shi, Hanjun Dai, Kevin Ellis, Charles Sutton
Many approaches to program synthesis perform a search within an enormous space of programs to find one that satisfies a given specification.
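A minimal instance of such a search is bottom-up enumeration over a toy DSL, growing a table of semantically distinct expressions until one matches the input/output specification (illustrative only; the paper learns to guide this kind of search):

```python
import itertools

def bottom_up_search(inputs, outputs, max_size=3):
    """Enumerate DSL expressions in increasing size, deduplicating by their
    observed values, until one matches the I/O specification."""
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b,
           "sub": lambda a, b: a - b}
    # map each tuple of observed values to the smallest expression producing it
    values = {tuple(inputs): "x", (1,) * len(inputs): "1"}
    for _ in range(max_size):
        new = {}
        for (va, ea), (vb, eb) in itertools.product(values.items(), repeat=2):
            for name, f in ops.items():
                vals = tuple(f(a, b) for a, b in zip(va, vb))
                if vals not in values and vals not in new:
                    new[vals] = f"{name}({ea}, {eb})"
        values.update(new)
        if tuple(outputs) in values:
            return values[tuple(outputs)]
    return None

solution = bottom_up_search([1, 2, 3], [2, 4, 6])   # an expression doubling x
```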
1 code implementation • 24 Oct 2021 • Nathanaël Fijalkow, Guillaume Lagarde, Théo Matricon, Kevin Ellis, Pierre Ohlmann, Akarsh Potta
We investigate how to augment probabilistic and neural program synthesis methods with new search algorithms, proposing a framework called distribution-based search.
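The framework covers algorithms far beyond this, but its simplest member is i.i.d. sampling from the program distribution, sketched here with a toy probabilistic grammar (all names and weights are made up for illustration):

```python
import random

# a toy probabilistic grammar over arithmetic expressions; each rule has a weight
GRAMMAR = {
    "E": [(0.4, ["x"]), (0.2, ["1"]),
          (0.2, ["(", "E", "+", "E", ")"]),
          (0.2, ["(", "E", "*", "E", ")"])],
}

def sample(symbol="E", rng=random):
    """Draw one program from the grammar, recursing on nonterminals."""
    weights, rules = zip(*GRAMMAR[symbol])
    chosen = rng.choices(rules, weights=weights)[0]
    return "".join(sample(s, rng) if s in GRAMMAR else s for s in chosen)

def sample_and_check(spec, budget=1000, seed=0):
    """Distribution-based search in its simplest form: sample programs i.i.d.
    from the distribution and keep the first one satisfying the spec."""
    rng = random.Random(seed)
    for _ in range(budget):
        prog = sample(rng=rng)
        if spec(prog):
            return prog
    return None

# find an expression equal to 2*x on a few test points
found = sample_and_check(
    lambda p: all(eval(p, {"x": v}) == 2 * v for v in (1, 2, 3)))
```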
no code implementations • ICLR 2022 • Tuan Anh Le, Katherine M. Collins, Luke Hewitt, Kevin Ellis, N. Siddharth, Samuel J. Gershman, Joshua B. Tenenbaum
We build on a recent approach, Memoised Wake-Sleep (MWS), which alleviates part of the problem by memoising discrete variables, and extend it to allow for a principled and effective way to handle continuous variables by learning a separate recognition model used for importance-sampling based approximate inference and marginalization.
no code implementations • 18 Jun 2021 • Catherine Wong, Kevin Ellis, Joshua B. Tenenbaum, Jacob Andreas
Inductive program synthesis, or inferring programs from examples of desired behavior, offers a general paradigm for building interpretable, robust, and generalizable machine learning systems.
no code implementations • NeurIPS 2020 • Lucas Y. Tian, Kevin Ellis, Marta Kryven, Joshua B. Tenenbaum
Humans flexibly solve new problems that differ qualitatively from those they were trained on.
no code implementations • NeurIPS 2020 • Yewen Pu, Kevin Ellis, Marta Kryven, Josh Tenenbaum, Armando Solar-Lezama
Given a specification, we score a candidate program both on its consistency with the specification and on whether a rational speaker would choose this particular specification to communicate that program.
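The pragmatic idea can be sketched with a one-level rational-speaker model (a simplification of the paper's setup): a speaker who chooses uniformly among the specifications a program satisfies, and a listener who scores a program by that speaker probability:

```python
def pragmatic_score(program, spec, all_specs):
    """Score proportional to S(spec | program): the probability a rational
    speaker who knows `program` would pick `spec` (uniformly among the
    specs the program satisfies) to convey it."""
    def satisfies(p, s):            # a spec is a list of (input, output) pairs
        return all(p(x) == y for x, y in s)
    if not satisfies(program, spec):
        return 0.0
    satisfied = [s for s in all_specs if satisfies(program, s)]
    return 1.0 / len(satisfied)

specs = [[(1, 2)], [(1, 2), (2, 4)], [(1, 1)]]
double, identity = (lambda x: 2 * x), (lambda x: x)
# the doubling program satisfies two specs, so each gets probability 1/2
score = pragmatic_score(double, [(1, 2), (2, 4)], specs)
```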
3 code implementations • 15 Jun 2020 • Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum
It builds expertise by creating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages.
no code implementations • NeurIPS 2019 • Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, Armando Solar-Lezama
We present a neural program synthesis approach integrating components which write, execute, and assess code to navigate the search space of possible programs.
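A stripped-down version of the write-execute-assess loop, greedy and over a toy edit space (the paper's components are learned; this only illustrates the control flow):

```python
def execution_guided_search(start, target, max_steps=5):
    """Greedy write-execute-assess loop: propose an operation (write),
    run it (execute), and keep whichever result lands closest to the
    target (assess)."""
    ops = {"+1": lambda v: v + 1, "*2": lambda v: v * 2, "-3": lambda v: v - 3}
    value, trace = start, []
    for _ in range(max_steps):
        if value == target:
            break
        name, result = min(((n, f(value)) for n, f in ops.items()),
                           key=lambda p: abs(p[1] - target))
        value, trace = result, trace + [name]
    return trace, value

trace, value = execution_guided_search(3, 14)
```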
no code implementations • ICLR 2019 • Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, Jiajun Wu
Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts.
no code implementations • NeurIPS 2018 • Kevin Ellis, Lucas Morales, Mathias Sablé-Meyer, Armando Solar-Lezama, Josh Tenenbaum
Successful approaches to program induction require a hand-engineered domain-specific language (DSL), constraining the space of allowed programs and imparting prior knowledge of the domain.
1 code implementation • ICLR 2018 • Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, Joshua B. Tenenbaum
These drawing primitives resemble a trace of the primitive commands issued by a graphics program.
no code implementations • NeurIPS 2016 • Kevin Ellis, Armando Solar-Lezama, Josh Tenenbaum
Towards learning programs from data, we introduce the problem of sampling programs from posterior distributions conditioned on that data.
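In miniature, conditioning a description-length prior on noisy data and sampling from the resulting posterior looks like this toy enumeration (not the paper's solver-based approach; hypothesis space, prior, and noise model are all invented for illustration):

```python
import math, random

def posterior_sample(data, rng=None):
    """Sample a program from p(program | data), proportional to
    p(program) * p(data | program), over a tiny hypothesis space with a
    description-length prior and a noise-tolerant likelihood (each datum
    is explained with probability 0.9, otherwise 0.1)."""
    hypotheses = {"x+1": lambda x: x + 1, "x*2": lambda x: x * 2,
                  "x*x": lambda x: x * x, "x+2": lambda x: x + 2}
    def log_prior(src):             # shorter programs are a priori more likely
        return -len(src)
    def log_like(f):
        return sum(math.log(0.9 if f(x) == y else 0.1) for x, y in data)
    logps = {s: log_prior(s) + log_like(f) for s, f in hypotheses.items()}
    m = max(logps.values())
    weights = {s: math.exp(lp - m) for s, lp in logps.items()}
    rng = rng or random.Random(0)
    return rng.choices(list(weights), weights=list(weights.values()))[0]

data = [(1, 2), (2, 4), (3, 6)]     # noise-free observations of doubling
samples = [posterior_sample(data, random.Random(i)) for i in range(200)]
```

Because "x*2" explains every datum, its posterior mass dominates, and it appears in the vast majority of the 200 draws.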
no code implementations • NeurIPS 2015 • Kevin Ellis, Armando Solar-Lezama, Josh Tenenbaum
We introduce an unsupervised learning algorithm that combines probabilistic modeling with solver-based techniques for program synthesis. We apply our techniques to both a visual learning domain and a language learning problem, showing that our algorithm can learn many visual concepts from only a few examples and that it can recover some English inflectional morphology. Taken together, these results give both a new approach to unsupervised learning of symbolic compositional structures, and a technique for applying program synthesis tools to noisy data.