no code implementations • 20 May 2021 • Roy Frostig, Matthew J. Johnson, Dougal Maclaurin, Adam Paszke, Alexey Radul
We decompose reverse-mode automatic differentiation into (forward-mode) linearization followed by transposition.
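This decomposition is directly visible in JAX's API, which the authors develop: `jax.linearize` performs forward-mode linearization, and `jax.linear_transpose` transposes the resulting linear map to yield a VJP. A minimal sketch (the scalar function `f` is an arbitrary illustration, not from the paper):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) * x

x = 2.0

# Step 1: forward-mode linearization gives f(x) and the JVP,
# a linear map from input tangents to output tangents.
y, f_jvp = jax.linearize(f, x)

# Step 2: transpose that linear map to obtain the VJP,
# which maps output cotangents back to input cotangents.
f_vjp = jax.linear_transpose(f_jvp, x)

(grad_x,) = f_vjp(1.0)  # pull back the cotangent 1.0

# Reverse mode (jax.grad) composes exactly these two steps.
assert jnp.allclose(grad_x, jax.grad(f)(x))
```

The benefit of the factoring is that only the much simpler transposition rule set, applied to an already-linear program, is needed to get reverse mode from forward mode.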
2 code implementations • NeurIPS 2018 • Matthew D. Hoffman, Matthew J. Johnson, Dustin Tran
Deriving conditional and marginal distributions using conjugacy relationships can be time-consuming and error-prone.
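As a concrete instance of the kind of hand derivation the paper aims to automate, consider the Beta-Binomial pair: the posterior update and the marginal likelihood both have closed forms. A minimal sketch (function names are my own, not from the paper):

```python
import math

def beta_binomial_update(a, b, k, n):
    """Conjugate update: Beta(a, b) prior + k successes in n
    Binomial trials -> Beta(a + k, b + n - k) posterior."""
    return a + k, b + (n - k)

def log_marginal_likelihood(a, b, k, n):
    """log p(k | n) under the Beta-Binomial, via Beta-function
    identities (log-gamma form for numerical stability)."""
    log_binom = (math.lgamma(n + 1) - math.lgamma(k + 1)
                 - math.lgamma(n - k + 1))
    log_beta_post = (math.lgamma(a + k) + math.lgamma(b + n - k)
                     - math.lgamma(a + b + n))
    log_beta_prior = (math.lgamma(a) + math.lgamma(b)
                      - math.lgamma(a + b))
    return log_binom + log_beta_post - log_beta_prior

post = beta_binomial_update(2.0, 2.0, k=7, n=10)
print(post)  # (9.0, 5.0)
```

Even this simple case requires juggling several gamma-function identities; the paper's point is that such algebra can be done symbolically by a program rather than by hand.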
no code implementations • 16 Oct 2018 • Sharad Vikram, Matthew D. Hoffman, Matthew J. Johnson
In variational autoencoders, the prior on the latent codes $z$ is often treated as an afterthought, but the prior shapes the kind of latent representation that the model learns.
1 code implementation • ICLR 2019 • Marvin Zhang, Sharad Vikram, Laura Smith, Pieter Abbeel, Matthew J. Johnson, Sergey Levine
Model-based reinforcement learning (RL) has proven to be a data-efficient approach for learning control tasks but is difficult to utilize in domains with complex observations such as images.
no code implementations • 9 Feb 2018 • Ryan P. Adams, Jeffrey Pennington, Matthew J. Johnson, Jamie Smith, Yaniv Ovadia, Brian Patton, James Saunderson
However, naive eigenvalue estimation is computationally expensive even when the matrix can be represented; in many of these situations the matrix is so large as to only be available implicitly via products with vectors.
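Matrix-free access means any spectral estimate must be built from products `A @ v` alone. The paper targets full spectral densities with more sophisticated stochastic estimators, but the simplest illustration of the implicit-access setting is power iteration for the dominant eigenvalue (a minimal sketch, not the paper's method):

```python
import numpy as np

def power_iteration(matvec, dim, num_iters=200, seed=0):
    """Estimate the largest-magnitude eigenvalue of an implicit
    matrix, given only a function computing A @ v; the matrix A
    is never materialized."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        w = matvec(v)
        v = w / np.linalg.norm(w)
    return v @ matvec(v)  # Rayleigh quotient at the converged vector

# A toy explicit matrix, wrapped as an implicit matvec.
A = np.diag([3.0, 1.0, 0.5])
print(power_iteration(lambda v: A @ v, 3))  # ≈ 3.0
```

The same matvec-only interface underlies Lanczos- and Chebyshev-based estimators, which recover much more of the spectrum per product with the matrix.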
no code implementations • 17 Apr 2017 • Ardavan Saeedi, Matthew D. Hoffman, Stephen J. DiVerdi, Asma Ghandeharioun, Matthew J. Johnson, Ryan P. Adams
Professional-grade software applications are powerful but complicated: expert users can achieve impressive results, but novices often struggle to complete even basic tasks.
1 code implementation • 26 Oct 2016 • Scott W. Linderman, Andrew C. Miller, Ryan P. Adams, David M. Blei, Liam Paninski, Matthew J. Johnson
Many natural systems, such as neurons firing in the brain or basketball teams traversing a court, give rise to time series data with complex, nonlinear dynamics.
3 code implementations • NeurIPS 2016 • Matthew J. Johnson, David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, Ryan P. Adams
We propose a general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths.
1 code implementation • 18 Jun 2015 • Scott W. Linderman, Matthew J. Johnson, Ryan P. Adams
Many practical modeling problems involve discrete data that are best represented as draws from multinomial or categorical distributions.
no code implementations • 31 Dec 2014 • Jonathan H. Huggins, Ardavan Saeedi, Matthew J. Johnson
In this note we provide detailed derivations of two versions of small-variance asymptotics for hierarchical Dirichlet process (HDP) mixture models and the HDP hidden Markov model (HDP-HMM, a.k.a.
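The core mechanism of small-variance asymptotics, in the simplest (non-hierarchical) Gaussian mixture case, is that soft posterior assignments harden as the observation variance shrinks. A sketch of the limit (standard background, not the note's HDP-specific derivation):

```latex
% Responsibility of component k for a point x in an isotropic
% Gaussian mixture with variance \sigma^2:
r_k(x) \;\propto\; \pi_k \exp\!\left(-\frac{\|x - \mu_k\|^2}{2\sigma^2}\right).

% As \sigma^2 \to 0, the exponential term dominates and
r_k(x) \;\to\; \mathbb{1}\!\left[\,k = \arg\min_j \|x - \mu_j\|^2\,\right],

% so the E-step reduces to the hard nearest-centroid assignment
% of k-means; for Dirichlet process mixtures the same limit adds
% a fixed penalty \lambda for opening a new cluster.
```

The note carries this style of limit through the extra layers of the HDP, yielding hard-assignment analogues of the HDP mixture and HDP-HMM inference algorithms.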
no code implementations • 27 Nov 2014 • Scott W. Linderman, Matthew J. Johnson, Matthew A. Wilson, Zhe Chen
Rodent hippocampal population codes represent important spatial information about the environment during navigation.