no code implementations • 2 Feb 2024 • Tuan Anh Le, Pavel Sountsov, Matthew D. Hoffman, Ben Lee, Brian Patton, Rif A. Saurous
How do we infer a 3D scene from a single image in the presence of corruptions like rain, snow or fog?
no code implementations • NeurIPS 2023 • Du Phan, Matthew D. Hoffman, David Dohan, Sholto Douglas, Tuan Anh Le, Aaron Parisi, Pavel Sountsov, Charles Sutton, Sharad Vikram, Rif A. Saurous
Large language models (LLMs) solve problems more accurately and interpretably when instructed to work out the answer step by step using a "chain-of-thought" (CoT) prompt.
1 code implementation • 21 Aug 2023 • Kunal Jha, Tuan Anh Le, Chuanyang Jin, Yen-Ling Kuo, Joshua B. Tenenbaum, Tianmin Shu
Multi-agent interactions, such as communication, teaching, and bluffing, often rely on higher-order social inference, i.e., understanding how others infer oneself.
no code implementations • 27 Oct 2022 • Matthew D. Hoffman, Tuan Anh Le, Pavel Sountsov, Christopher Suter, Ben Lee, Vikash K. Mansinghka, Rif A. Saurous
The problem of inferring object shape from a single 2D image is underconstrained.
no code implementations • 3 Jun 2022 • Yichao Liang, Joshua B. Tenenbaum, Tuan Anh Le, N. Siddharth
We then adopt a subset of the Omniglot challenge tasks and evaluate DooD's ability to generate new exemplars (both unconditionally and conditionally) and to perform one-shot classification, showing that it matches the state of the art.
no code implementations • ICLR 2022 • Tuan Anh Le, Katherine M. Collins, Luke Hewitt, Kevin Ellis, N. Siddharth, Samuel J. Gershman, Joshua B. Tenenbaum
We build on Memoised Wake-Sleep (MWS), a recent approach that alleviates part of the problem by memoising discrete variables, and extend it to handle continuous variables in a principled and effective way by learning a separate recognition model for importance-sampling-based approximate inference and marginalization.
no code implementations • 16 Apr 2021 • Matthias Hofer, Tuan Anh Le, Roger Levy, Josh Tenenbaum
Humans have the ability to rapidly understand rich combinatorial concepts from limited data.
no code implementations • 6 Jul 2020 • Luke B. Hewitt, Tuan Anh Le, Joshua B. Tenenbaum
We study a class of neuro-symbolic generative models in which neural networks are used both for inference and as priors over symbolic, data-generating programs.
no code implementations • 30 Jun 2020 • Michael Teng, Tuan Anh Le, Adam Scibior, Frank Wood
We introduce a novel objective for training deep generative time-series models with discrete latent variables for which supervision is only sparsely available.
1 code implementation • ICML 2020 • Hao Wu, Heiko Zimmermann, Eli Sennesh, Tuan Anh Le, Jan-Willem van de Meent
We develop amortized population Gibbs (APG) samplers, a class of scalable methods that frames structured variational inference as adaptive importance sampling.
1 code implementation • NeurIPS 2019 • Vaden Masrani, Tuan Anh Le, Frank Wood
We introduce the thermodynamic variational objective (TVO) for learning in both continuous and discrete deep generative models.
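As a sketch of the idea (a toy one-dimensional conjugate-Gaussian model of my own choosing, not the paper's implementation): when the tempered expectations are available in closed form, the TVO's left-Riemann discretization of the thermodynamic integral can be computed directly and compared against the true log evidence:

```python
import numpy as np

# Toy conjugate model: p(z) = N(0,1), p(x|z) = N(z,1), so the tempered
# distribution pi_beta ∝ p(z) p(x|z)^beta is Gaussian and E_{pi_beta}[log w]
# is available in closed form (here log w = log p(x|z)).
def expected_log_weight(beta, x):
    return -0.5 * (x**2 / (1 + beta) ** 2 + 1 / (1 + beta) + np.log(2 * np.pi))

def tvo_lower_bound(x, K=50):
    # Left Riemann sum over the thermodynamic integral on [0, 1]:
    # sum_k (1/K) * E_{pi_{beta_k}}[log w] with beta_k = k/K.
    betas = np.arange(K) / K
    return np.mean(expected_log_weight(betas, x))

x = 1.5
exact = -0.5 * (x**2 / 2 + np.log(4 * np.pi))  # log p(x) = log N(x; 0, 2)
print(tvo_lower_bound(x), exact)  # the bound approaches `exact` from below as K grows
```

Because the integrand is increasing in beta here, the left Riemann sum is a valid lower bound that tightens as the partition is refined.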
no code implementations • ICLR 2019 • Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood
Discrete latent-variable models, while applicable in a variety of settings, can often be difficult to learn.
no code implementations • 12 Mar 2019 • Michael Teng, Tuan Anh Le, Adam Scibior, Frank Wood
We apply recent advances in deep generative modeling to the task of imitation learning from biological agents.
1 code implementation • ICML 2018 • Maximilian Igl, Luisa Zintgraf, Tuan Anh Le, Frank Wood, Shimon Whiteson
Many real-world sequential decision making problems are partially observable by nature, and the environment model is typically unknown.
1 code implementation • ICLR 2019 • Tuan Anh Le, Adam R. Kosiorek, N. Siddharth, Yee Whye Teh, Frank Wood
Stochastic control-flow models (SCFMs) are a class of generative models that involve branching on choices from discrete random variables.
3 code implementations • ICML 2018 • Tom Rainforth, Adam R. Kosiorek, Tuan Anh Le, Chris J. Maddison, Maximilian Igl, Frank Wood, Yee Whye Teh
We provide theoretical and empirical evidence that using tighter evidence lower bounds (ELBOs) can be detrimental to the process of learning an inference network by reducing the signal-to-noise ratio of the gradient estimator.
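The bound in question is the importance-weighted ELBO; a minimal NumPy sketch (a toy Gaussian model with the prior as proposal, chosen for illustration rather than taken from the paper) shows how the bound itself tightens as the number of particles K grows, which is exactly the regime the paper analyzes:

```python
import numpy as np

rng = np.random.default_rng(0)

def iwae_bound(log_weights):
    # Importance-weighted bound: log of the average importance weight,
    # computed stably with the log-sum-exp trick (last axis = particles).
    m = log_weights.max(axis=-1, keepdims=True)
    return m.squeeze(-1) + np.log(np.exp(log_weights - m).mean(axis=-1))

# Toy model: z ~ N(0,1), x|z ~ N(z,1); with the prior as proposal the
# importance weight reduces to the likelihood p(x|z).
def log_weight(x, z):
    return -0.5 * ((x - z) ** 2 + np.log(2 * np.pi))

x = 1.5
bounds = {}
for K in (1, 10, 100):
    z = rng.standard_normal((2000, K))        # 2000 independent K-particle bounds
    bounds[K] = iwae_bound(log_weight(x, z)).mean()
    print(K, bounds[K])  # tightens toward log p(x) = log N(x; 0, 2) as K grows
```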
no code implementations • 21 Dec 2017 • Mario Lezcano Casado, Atilim Gunes Baydin, David Martinez Rubio, Tuan Anh Le, Frank Wood, Lukas Heinrich, Gilles Louppe, Kyle Cranmer, Karen Ng, Wahid Bhimji, Prabhat
We consider the problem of Bayesian inference in the family of probabilistic models implicitly defined by stochastic generative models of data.
2 code implementations • NeurIPS 2016 • Tom Rainforth, Tuan Anh Le, Jan-Willem van de Meent, Michael A. Osborne, Frank Wood
We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables.
1 code implementation • ICLR 2018 • Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, Frank Wood
We build on auto-encoding sequential Monte Carlo (AESMC): a method for model and proposal learning based on maximizing a lower bound on the log marginal likelihood in a broad family of structured probabilistic models.
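For context on the quantity being bounded, a minimal bootstrap particle filter (a hypothetical linear-Gaussian toy model, not one of the paper's experiments) that returns the SMC log-evidence estimate might look like:

```python
import numpy as np

rng = np.random.default_rng(2)

def smc_log_evidence(xs, num_particles=2048, trans_sd=1.0, obs_sd=1.0):
    # Bootstrap particle filter for z_1 ~ N(0,1), z_t = z_{t-1} + N(0, trans_sd^2),
    # x_t | z_t ~ N(z_t, obs_sd^2). Returns the SMC estimate of log p(x_{1:T}),
    # the log marginal likelihood whose lower bound AESMC maximizes.
    z = rng.standard_normal(num_particles)
    log_Z = 0.0
    for x in xs:
        logw = -0.5 * ((x - z) ** 2 / obs_sd**2 + np.log(2 * np.pi * obs_sd**2))
        m = logw.max()
        w = np.exp(logw - m)
        log_Z += m + np.log(w.mean())  # accumulate log of the average weight
        idx = rng.choice(num_particles, size=num_particles, p=w / w.sum())
        z = z[idx] + trans_sd * rng.standard_normal(num_particles)  # resample + propagate
    return log_Z

xs = [0.3, -0.5, 1.2, 0.8, -0.1]
log_Z = smc_log_evidence(xs)
print(log_Z)
```

In AESMC the proposal here (the model's own transition, i.e., a bootstrap proposal) is replaced by a learned neural proposal, and the estimate above becomes the training objective.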
no code implementations • 2 Mar 2017 • Tuan Anh Le, Atilim Gunes Baydin, Robert Zinkov, Frank Wood
We draw a formal connection between using synthetic training data to optimize neural network parameters and approximate, Bayesian, model-based reasoning.
no code implementations • WS 2016 • Tuan Anh Le, David Moeljadi, Yasuhide Miura, Tomoko Ohkuma
This paper describes our attempt to build a sentiment analysis system for Indonesian tweets.
4 code implementations • 31 Oct 2016 • Tuan Anh Le, Atilim Gunes Baydin, Frank Wood
We introduce a method for using deep neural networks to amortize the cost of inference in models from the family induced by universal probabilistic programming languages, establishing a framework that combines the strengths of probabilistic programming and deep learning methods.
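The amortization idea can be sketched in a few lines (a toy conjugate-Gaussian model of my own, standing in for the paper's neural recognition networks): draw (z, x) pairs from the generative model itself and fit a recognition model by regression, so that inference on new data is a single cheap forward pass:

```python
import numpy as np

rng = np.random.default_rng(1)

# Generative model: z ~ N(0,1), x | z ~ N(z, sigma^2).
sigma = 0.5
n = 100_000
z = rng.standard_normal(n)
x = z + sigma * rng.standard_normal(n)

# Amortized inference: fit a linear "recognition model" mean(z|x) ≈ a*x + b
# by regressing z on x over samples from the joint -- the essence of training
# an inference network on synthetic data drawn from the model.
A = np.stack([x, np.ones_like(x)], axis=1)
(a, b), *_ = np.linalg.lstsq(A, z, rcond=None)

# For this conjugate model the analytic posterior mean is x / (1 + sigma^2),
# so the regression should recover a ≈ 0.8, b ≈ 0.
print(a, b)
```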
no code implementations • 14 Dec 2015 • Yura N. Perov, Tuan Anh Le, Frank Wood
Most Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (SMC) algorithms in existing probabilistic programming systems suboptimally use only model priors as proposal distributions.