Search Results for author: Ryan P. Adams

Found 73 papers, 34 papers with code

Generative Marginalization Models

1 code implementation19 Oct 2023 Sulin Liu, Peter J. Ramadge, Ryan P. Adams

We introduce marginalization models (MaMs), a new family of generative models for high-dimensional discrete data.

Representing and Learning Functions Invariant Under Crystallographic Groups

no code implementations8 Jun 2023 Ryan P. Adams, Peter Orbanz

The linear representation generalizes the Fourier basis to crystallographically invariant basis functions.

Gaussian Processes

Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity

no code implementations31 Jan 2023 Deniz Oktay, Mehran Mirramezani, Eder Medina, Ryan P. Adams

In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it.

Meta-PDE: Learning to Solve PDEs Quickly Without a Mesh

no code implementations3 Nov 2022 Tian Qin, Alex Beatson, Deniz Oktay, Nick McGreivy, Ryan P. Adams

Partial differential equations (PDEs) are often computationally challenging to solve, and in many settings many related PDEs must be solved either at every timestep or for a variety of candidate boundary conditions, parameters, or geometric domains.

Meta-Learning

Multi-fidelity Monte Carlo: a pseudo-marginal approach

no code implementations4 Oct 2022 Diana Cai, Ryan P. Adams

A key challenge in applying MCMC to scientific domains is computation: the target density of interest is often a function of expensive computations, such as a high-fidelity physical simulation, an intractable integral, or a slowly-converging iterative algorithm.

Uncertainty Quantification
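
A minimal sketch of the pseudo-marginal idea in the title: a Metropolis-Hastings chain in which an unbiased, non-negative estimate of the target density stands in for an exact evaluation. The `log_density_estimate` callable and the Gaussian random-walk proposal below are illustrative assumptions, not the paper's multi-fidelity construction.

```python
import numpy as np

def pseudo_marginal_mh(log_density_estimate, x0, n_steps, step_size=0.5, rng=None):
    """Metropolis-Hastings with a stochastic (unbiased, non-negative)
    density estimate in place of the exact target density."""
    rng = rng or np.random.default_rng()
    x, log_p = x0, log_density_estimate(x0)
    samples = []
    for _ in range(n_steps):
        x_prop = x + step_size * rng.standard_normal(np.shape(x))
        log_p_prop = log_density_estimate(x_prop)  # fresh estimate at the proposal
        # Recycling log_p for the current state (rather than re-estimating it)
        # is what makes the chain target the exact posterior.
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = x_prop, log_p_prop
        samples.append(x)
    return np.array(samples)
```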

ProBF: Learning Probabilistic Safety Certificates with Barrier Functions

1 code implementation22 Dec 2021 Athindran Ramesh Kumar, Sulin Liu, Jaime F. Fisac, Ryan P. Adams, Peter J. Ramadge

In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors due to unmodeled residual dynamics.

Slice Sampling Reparameterization Gradients

no code implementations NeurIPS 2021 David Zoltowski, Diana Cai, Ryan P. Adams

Slice sampling is a Markov chain Monte Carlo algorithm for simulating samples from probability distributions; it only requires a density function that can be evaluated point-wise up to a normalization constant, making it applicable to a variety of inference problems and unnormalized models.
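
The algorithm itself is short; here is a minimal univariate sketch with stepping-out and shrinkage (Neal, 2003), assuming only a point-wise unnormalized `log_density`:

```python
import numpy as np

def slice_sample(log_density, x0, n_samples, w=1.0, rng=None):
    """Univariate slice sampler: alternately draw a height under the density
    and then a point from the horizontal slice at that height."""
    rng = rng or np.random.default_rng()
    x, samples = x0, []
    for _ in range(n_samples):
        # 1. Auxiliary height, uniform under the (unnormalized) density at x.
        log_y = log_density(x) + np.log(rng.uniform())
        # 2. Randomly position an interval of width w around x, then step out.
        left = x - w * rng.uniform()
        right = left + w
        while log_density(left) > log_y:
            left -= w
        while log_density(right) > log_y:
            right += w
        # 3. Sample uniformly from the interval, shrinking it on rejection.
        while True:
            x_new = rng.uniform(left, right)
            if log_density(x_new) > log_y:
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        x = x_new
        samples.append(x)
    return np.array(samples)
```

Note that no gradients or normalizing constants are required, only density evaluations.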

Vitruvion: A Generative Model of Parametric CAD Sketches

no code implementations ICLR 2022 Ari Seff, Wenda Zhou, Nick Richardson, Ryan P. Adams

Parametric computer-aided design (CAD) tools are the predominant way that engineers specify physical structures, from bicycle pedals to airplanes to printed circuit boards.

Amortized Synthesis of Constrained Configurations Using a Differentiable Surrogate

1 code implementation NeurIPS 2021 Xingyuan Sun, Tianju Xue, Szymon Rusinkiewicz, Ryan P. Adams

We compare our approach to direct optimization of the design using the learned surrogate, and to supervised learning of the synthesis problem.

Physical Simulations

Active multi-fidelity Bayesian online changepoint detection

1 code implementation26 Mar 2021 Gregory W. Gundersen, Diana Cai, Chuteng Zhou, Barbara E. Engelhardt, Ryan P. Adams

We propose a multi-fidelity approach that makes cost-sensitive decisions about which data fidelity to collect based on maximizing information gain with respect to changepoints.

Edge-computing Time Series +1

Task-Agnostic Amortized Inference of Gaussian Process Hyperparameters

1 code implementation NeurIPS 2020 Sulin Liu, Xingyuan Sun, Peter J. Ramadge, Ryan P. Adams

One of the appeals of the GP framework is that the marginal likelihood of the kernel hyperparameters is often available in closed form, enabling optimization and sampling procedures to fit these hyperparameters to data.

Bayesian Optimization Gaussian Processes +2
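
For reference, the closed-form quantity referred to above: assuming a zero-mean GP with kernel matrix $K_\theta$ and Gaussian noise variance $\sigma^2$ on $n$ observations $\mathbf{y}$, the log marginal likelihood is

$$\log p(\mathbf{y} \mid X, \theta) = -\tfrac{1}{2}\,\mathbf{y}^\top (K_\theta + \sigma^2 I)^{-1}\mathbf{y} - \tfrac{1}{2}\log\left|K_\theta + \sigma^2 I\right| - \tfrac{n}{2}\log 2\pi,$$

the quantity that standard optimization and sampling procedures repeatedly evaluate, at $O(n^3)$ cost per evaluation.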

Randomized Automatic Differentiation

1 code implementation ICLR 2021 Deniz Oktay, Nick McGreivy, Joshua Aduol, Alex Beatson, Ryan P. Adams

The successes of deep learning, variational inference, and many other fields have been aided by specialized implementations of reverse-mode automatic differentiation (AD) to compute gradients of mega-dimensional objectives.

Stochastic Optimization Variational Inference

SketchGraphs: A Large-Scale Dataset for Modeling Relational Geometry in Computer-Aided Design

1 code implementation16 Jul 2020 Ari Seff, Yaniv Ovadia, Wenda Zhou, Ryan P. Adams

Parametric computer-aided design (CAD) is the dominant paradigm in mechanical engineering for physical design.

Program Synthesis

SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models

no code implementations ICLR 2020 Yucen Luo, Alex Beatson, Mohammad Norouzi, Jun Zhu, David Duvenaud, Ryan P. Adams, Ricky T. Q. Chen

Standard variational lower bounds used to train latent variable models produce biased estimates of most quantities of interest.

On Warm-Starting Neural Network Training

1 code implementation NeurIPS 2020 Jordan T. Ash, Ryan P. Adams

We would like each of these models in the sequence to be performant and take advantage of all the data that are available up to that point.

Experimental Design

On The Difficulty of Warm-Starting Neural Network Training

no code implementations25 Sep 2019 Jordan T. Ash, Ryan P. Adams

We would like each of these models in the sequence to be performant and take advantage of all the data that are available up to that point.

Experimental Design

Discrete Object Generation with Reversible Inductive Construction

1 code implementation NeurIPS 2019 Ari Seff, Wenda Zhou, Farhan Damani, Abigail Doyle, Ryan P. Adams

The success of generative modeling in continuous domains has led to a surge of interest in generating discrete data such as molecules, source code, and graphs.

Denoising Object +1

A Theoretical Connection Between Statistical Physics and Reinforcement Learning

no code implementations24 Jun 2019 Jad Rahme, Ryan P. Adams

The central object in the statistical physics abstraction is the idea of a partition function $\mathcal{Z}$, and here we construct a partition function from the ensemble of possible trajectories that an agent might take in a Markov decision process.

Decision Making reinforcement-learning +1
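
Schematically (an illustrative form; see the paper for the exact construction): if a trajectory $\tau$ through the Markov decision process accrues total reward $R(\tau)$, the ensemble defines

$$\mathcal{Z}(\beta) = \sum_{\tau} e^{\beta R(\tau)},$$

so that $\frac{1}{\beta}\log\mathcal{Z}(\beta)$ interpolates between averaging over trajectories and maximizing over them as the inverse temperature $\beta$ grows.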

SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers

no code implementations NeurIPS 2019 Igor Fedorov, Ryan P. Adams, Matthew Mattina, Paul N. Whatmough

The vast majority of processors in the world are actually microcontroller units (MCUs), which find widespread use performing simple control tasks in applications ranging from automobiles to medical devices and office equipment.

BIG-bench Machine Learning Neural Architecture Search

Efficient Optimization of Loops and Limits with Randomized Telescoping Sums

1 code implementation16 May 2019 Alex Beatson, Ryan P. Adams

We consider optimization problems in which the objective requires an inner loop with many steps or is the limit of a sequence of increasingly costly approximations.

Meta-Learning Variational Inference
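
A minimal sketch of a randomized telescoping estimator (the single-sample, Russian-roulette-style variant; the geometric truncation distribution and the `approximations` callable are illustrative assumptions, and the paper analyzes several weighting schemes):

```python
import numpy as np

def randomized_telescope(approximations, p_stop=0.5, rng=None):
    """Unbiased single-sample estimate of lim_n L_n, where approximations(n)
    returns the n-th increasingly costly approximation L_n. Writes the limit
    as L_0 + sum_n (L_n - L_{n-1}) and importance-weights a randomly
    truncated prefix of that telescoping sum."""
    rng = rng or np.random.default_rng()
    n_trunc = rng.geometric(p_stop)  # N in {1, 2, ...}
    estimate = prev = approximations(0)
    for n in range(1, n_trunc + 1):
        cur = approximations(n)
        # P(N >= n) = (1 - p_stop)^(n - 1) for this geometric distribution.
        estimate += (cur - prev) / (1.0 - p_stop) ** (n - 1)
        prev = cur
    return estimate
```

Unbiasedness follows because each increment $L_n - L_{n-1}$ is included with probability $P(N \ge n)$ and reweighted by $1 / P(N \ge n)$.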

A Bayesian Nonparametric View on Count-Min Sketch

no code implementations NeurIPS 2018 Diana Cai, Michael Mitzenmacher, Ryan P. Adams

The count-min sketch is a time- and memory-efficient randomized data structure that provides a point estimate of the number of times an item has appeared in a data stream.
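
The structure is compact enough to sketch in full; the width, depth, and hash choice here are illustrative:

```python
import hashlib

class CountMinSketch:
    """Approximate counts for a data stream in O(width * depth) memory.
    Estimates never undercount; they overcount only through hash collisions."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item):
        for row in range(self.depth):
            h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
            yield row, int.from_bytes(h.digest(), "big") % self.width

    def add(self, item, count=1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def estimate(self, item):
        # The minimum across rows is the cell least inflated by collisions.
        return min(self.table[row][col] for row, col in self._cells(item))
```

Usage: `cms = CountMinSketch(); cms.add("query"); cms.estimate("query")`. The paper's contribution is a Bayesian nonparametric interpretation of the estimates this table supports.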

Rapid Prediction of Electron-Ionization Mass Spectrometry using Neural Networks

no code implementations21 Nov 2018 Jennifer N. Wei, David Belanger, Ryan P. Adams, D. Sculley

When confronted with a substance of unknown identity, researchers often perform mass spectrometry on the sample and compare the observed spectrum to a library of previously-collected spectra to identify the molecule.

BIG-bench Machine Learning

Motivating the Rules of the Game for Adversarial Example Research

no code implementations18 Jul 2018 Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David Andersen, George E. Dahl

Advances in machine learning have led to broad deployment of systems with impressive performance on important problems.

Approximate Inference for Constructing Astronomical Catalogs from Images

1 code implementation28 Feb 2018 Jeffrey Regier, Andrew C. Miller, David Schlegel, Ryan P. Adams, Jon D. McAuliffe, Prabhat

We present a new, fully generative model for constructing astronomical catalogs from optical telescope image sets.

Variational Inference

Estimating the Spectral Density of Large Implicit Matrices

no code implementations9 Feb 2018 Ryan P. Adams, Jeffrey Pennington, Matthew J. Johnson, Jamie Smith, Yaniv Ovadia, Brian Patton, James Saunderson

However, naive eigenvalue estimation is computationally expensive even when the matrix can be represented; in many of these situations the matrix is so large as to only be available implicitly via products with vectors.
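
The access pattern in question is matrix-vector products only. As a minimal illustration of that access pattern (a building block, not the paper's full spectral-density method), Hutchinson's estimator for the trace of an implicit matrix:

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_probes=100, rng=None):
    """Estimate tr(A) when A is available only via matvec(v) = A @ v,
    using Rademacher probes: E[z^T A z] = tr(A) when E[z z^T] = I."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ matvec(z)
    return total / n_probes
```

Applying the same probing idea to polynomials of the matrix yields smoothed estimates of the full eigenvalue distribution.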

PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference

1 code implementation NeurIPS 2017 Jonathan H. Huggins, Ryan P. Adams, Tamara Broderick

We provide theoretical guarantees on the quality of point (MAP) estimates, the approximate posterior, and posterior mean and uncertainty estimates.

regression

Multimodal Prediction and Personalization of Photo Edits with Deep Generative Models

no code implementations17 Apr 2017 Ardavan Saeedi, Matthew D. Hoffman, Stephen J. DiVerdi, Asma Ghandeharioun, Matthew J. Johnson, Ryan P. Adams

Professional-grade software applications are powerful but complicated: expert users can achieve impressive results, but novices often struggle to complete even basic tasks.

Variational Boosting: Iteratively Refining Posterior Approximations

1 code implementation ICML 2017 Andrew C. Miller, Nicholas Foti, Ryan P. Adams

We propose a black-box variational inference method to approximate intractable distributions with an increasingly rich approximating class.

Variational Inference

Bayesian latent structure discovery from multi-neuron recordings

2 code implementations NeurIPS 2016 Scott W. Linderman, Ryan P. Adams, Jonathan W. Pillow

Neural circuits contain heterogeneous groups of neurons that differ in type, location, connectivity, and basic response properties.

Bayesian Inference Clustering +1

Recurrent switching linear dynamical systems

1 code implementation26 Oct 2016 Scott W. Linderman, Andrew C. Miller, Ryan P. Adams, David M. Blei, Liam Paninski, Matthew J. Johnson

Many natural systems, such as neurons firing in the brain or basketball teams traversing a court, give rise to time series data with complex, nonlinear dynamics.

Bayesian Inference Time Series +1

Clustering with a Reject Option: Interactive Clustering as Bayesian Prior Elicitation

no code implementations19 Jun 2016 Akash Srivastava, James Zou, Ryan P. Adams, Charles Sutton

A good clustering can help a data analyst to explore and understand a data set, but what constitutes a good clustering may depend on domain-specific and application-specific criteria.

Clustering

Composing graphical models with neural networks for structured representations and fast inference

3 code implementations NeurIPS 2016 Matthew J. Johnson, David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, Ryan P. Adams

We propose a general modeling and inference framework that composes probabilistic graphical models with deep learning methods and combines their respective strengths.

Variational Inference

Patterns of Scalable Bayesian Inference

no code implementations16 Feb 2016 Elaine Angelino, Matthew James Johnson, Ryan P. Adams

Bayesian methods are an excellent fit for this demand, but scaling Bayesian inference is a challenge.

Bayesian Inference

A Gaussian Process Model of Quasar Spectral Energy Distributions

no code implementations NeurIPS 2015 Andrew Miller, Albert Wu, Jeff Regier, Jon McAuliffe, Dustin Lang, Prabhat, David Schlegel, Ryan P. Adams

We propose a method for combining two sources of astronomical data, spectroscopy and photometry, that carry information about sources of light (e.g., stars, galaxies, and quasars) at extremely different spectral resolutions.

Dependent Multinomial Models Made Easy: Stick-Breaking with the Polya-gamma Augmentation

no code implementations NeurIPS 2015 Scott Linderman, Matthew Johnson, Ryan P. Adams

For example, nucleotides in a DNA sequence, children's names in a given state and year, and text documents are all commonly modeled with multinomial distributions.

Bayesian Inference

A General Framework for Constrained Bayesian Optimization using Information-based Search

1 code implementation30 Nov 2015 José Miguel Hernández-Lobato, Michael A. Gelbart, Ryan P. Adams, Matthew W. Hoffman, Zoubin Ghahramani

Of particular interest to us is to efficiently solve problems with decoupled constraints, in which subsets of the objective and constraint functions may be evaluated independently.

Bayesian Optimization

Predictive Entropy Search for Multi-objective Bayesian Optimization

no code implementations17 Nov 2015 Daniel Hernández-Lobato, José Miguel Hernández-Lobato, Amar Shah, Ryan P. Adams

The results show that PESMO produces better recommendations with a smaller number of evaluations of the objectives, and that a decoupled evaluation can lead to improvements in performance, particularly when the number of objectives is large.

Bayesian Optimization

Sandwiching the marginal likelihood using bidirectional Monte Carlo

no code implementations8 Nov 2015 Roger B. Grosse, Zoubin Ghahramani, Ryan P. Adams

Using the ground truth log-ML estimates obtained from our method, we quantitatively evaluate a wide variety of existing ML estimators on several latent variable models: clustering, a low rank approximation, and a binary attributes model.

Clustering

Scalable Bayesian Inference for Excitatory Point Process Networks

1 code implementation12 Jul 2015 Scott W. Linderman, Ryan P. Adams

We build on previous work that has taken a Bayesian approach to this problem, specifying prior distributions over the latent network structure and a likelihood of observed activity given this network.

Bayesian Inference Variational Inference

Dependent Multinomial Models Made Easy: Stick Breaking with the Pólya-Gamma Augmentation

1 code implementation18 Jun 2015 Scott W. Linderman, Matthew J. Johnson, Ryan P. Adams

Many practical modeling problems involve discrete data that are best represented as draws from multinomial or categorical distributions.

Bayesian Inference Position
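
The construction sketched in the abstract: a $K$-dimensional probability vector is reparameterized by $K-1$ real stick-breaking variables $\psi_k$ through the logistic function $\sigma$,

$$\pi_k = \sigma(\psi_k)\prod_{j<k}\bigl(1-\sigma(\psi_j)\bigr),\quad k = 1,\dots,K-1,\qquad \pi_K = \prod_{j<K}\bigl(1-\sigma(\psi_j)\bigr),$$

so that each $\psi_k$ enters the likelihood through a logistic-Bernoulli factor, which the Pólya-gamma augmentation renders conditionally Gaussian and hence conjugate to Gaussian priors such as GPs and linear dynamical systems.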

Spectral Representations for Convolutional Neural Networks

no code implementations NeurIPS 2015 Oren Rippel, Jasper Snoek, Ryan P. Adams

In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs).

Dimensionality Reduction

Early Stopping is Nonparametric Variational Inference

1 code implementation6 Apr 2015 Dougal Maclaurin, David Duvenaud, Ryan P. Adams

By tracking the change in entropy over this sequence of transformations during optimization, we form a scalable, unbiased estimate of the variational lower bound on the log marginal likelihood.

Variational Inference
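
The bound being tracked is the standard variational lower bound: with $q_t$ the implicit distribution over parameters after $t$ optimization steps (starting from a distribution over initializations),

$$\log p(\mathcal{D}) \ge \mathbb{E}_{q_t}\bigl[\log p(\theta, \mathcal{D})\bigr] + H[q_t],$$

and the change-in-entropy bookkeeping described above supplies an estimate of the otherwise intractable entropy term $H[q_t]$.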

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

3 code implementations18 Feb 2015 José Miguel Hernández-Lobato, Ryan P. Adams

In principle, the Bayesian approach to learning neural networks does not have these problems.

Gradient-based Hyperparameter Optimization through Reversible Learning

2 code implementations11 Feb 2015 Dougal Maclaurin, David Duvenaud, Ryan P. Adams

Tuning hyperparameters of learning algorithms is hard because gradients are usually unavailable.

Hyperparameter Optimization

Accelerating MCMC via Parallel Predictive Prefetching

no code implementations28 Mar 2014 Elaine Angelino, Eddie Kohler, Amos Waterland, Margo Seltzer, Ryan P. Adams

We present a general framework for accelerating a large class of widely used Markov chain Monte Carlo (MCMC) algorithms.

Bayesian Inference

Firefly Monte Carlo: Exact MCMC with Subsets of Data

no code implementations22 Mar 2014 Dougal Maclaurin, Ryan P. Adams

Markov chain Monte Carlo (MCMC) is a popular and successful general-purpose tool for Bayesian inference.

Bayesian Inference

Bayesian Optimization with Unknown Constraints

1 code implementation22 Mar 2014 Michael A. Gelbart, Jasper Snoek, Ryan P. Adams

Recent work on Bayesian optimization has shown its effectiveness in global optimization of difficult black-box objective functions.

Bayesian Optimization

Avoiding pathologies in very deep networks

2 code implementations24 Feb 2014 David Duvenaud, Oren Rippel, Ryan P. Adams, Zoubin Ghahramani

Choosing appropriate architectures and regularization strategies for deep networks is crucial to good predictive performance.

Gaussian Processes

Learning the Parameters of Determinantal Point Process Kernels

no code implementations20 Feb 2014 Raja Hafiz Affandi, Emily B. Fox, Ryan P. Adams, Ben Taskar

Determinantal point processes (DPPs) are well-suited for modeling repulsion and have proven useful in many applications where diversity is desired.

Point Processes

Input Warping for Bayesian Optimization of Non-stationary Functions

1 code implementation5 Feb 2014 Jasper Snoek, Kevin Swersky, Richard S. Zemel, Ryan P. Adams

Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions.

Bayesian Optimization Gaussian Processes

Learning Ordered Representations with Nested Dropout

1 code implementation5 Feb 2014 Oren Rippel, Michael A. Gelbart, Ryan P. Adams

To learn these representations we introduce nested dropout, a procedure for stochastically removing coherent nested sets of hidden units in a neural network.

Retrieval
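
A minimal sketch of the masking procedure described above (drawing the prefix length from a geometric distribution follows the paper; the rate parameter is an illustrative choice):

```python
import numpy as np

def nested_dropout_mask(n_units, p_geom=0.05, rng=None):
    """Sample a nested dropout mask: keep exactly the first b units and
    zero the rest, with b drawn from a truncated geometric distribution.
    Because the kept set is always a prefix, earlier units must carry the
    most information, imposing an ordering on the representation."""
    rng = rng or np.random.default_rng()
    b = min(rng.geometric(p_geom), n_units)  # prefix length to keep
    mask = np.zeros(n_units)
    mask[:b] = 1.0
    return mask

# Applied to a hidden layer h during training: h * nested_dropout_mask(h.shape[-1])
```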

Discovering Latent Network Structure in Point Process Data

no code implementations4 Feb 2014 Scott W. Linderman, Ryan P. Adams

Networks play a central role in modern data analysis, enabling us to reason about systems by studying the relationships between their parts.

Point Processes

Message Passing Inference with Chemical Reaction Networks

no code implementations NeurIPS 2013 Nils E. Napp, Ryan P. Adams

We show algebraically that the steady-state concentrations of these species correspond to the marginal distributions of the random variables in the graph, and we validate the results in simulations.

A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

no code implementations NeurIPS 2013 Jasper Snoek, Richard Zemel, Ryan P. Adams

Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials.

Hippocampus Point Processes +1

Multi-Task Bayesian Optimization

1 code implementation NeurIPS 2013 Kevin Swersky, Jasper Snoek, Ryan P. Adams

We demonstrate the utility of this new acquisition function by utilizing a small dataset in order to explore hyperparameter settings for a large dataset.

Bayesian Optimization Gaussian Processes +1

Contrastive Learning Using Spectral Methods

no code implementations NeurIPS 2013 James Y. Zou, Daniel J. Hsu, David C. Parkes, Ryan P. Adams

In many natural settings, the analysis goal is not to characterize a single data set in isolation, but rather to understand the difference between one set of observations and another.

Contrastive Learning

ClusterCluster: Parallel Markov Chain Monte Carlo for Dirichlet Process Mixtures

no code implementations8 Apr 2013 Dan Lovell, Jonathan Malmaud, Ryan P. Adams, Vikash K. Mansinghka

Applied to mixture modeling, our approach enables the Dirichlet process to simultaneously learn clusters that describe the data and superclusters that define the granularity of parallelization.

Density Estimation Time Series +1

Priors for Diversity in Generative Latent Variable Models

no code implementations NeurIPS 2012 James T. Kwok, Ryan P. Adams

We show how to perform MAP inference with DPP priors in latent Dirichlet allocation and in mixture models, leading to better intuition for the latent variable representation and quantitatively improved unsupervised feature extraction, without compromising the generative aspects of the model.

Cardinality Restricted Boltzmann Machines

no code implementations NeurIPS 2012 Kevin Swersky, Ilya Sutskever, Daniel Tarlow, Richard S. Zemel, Ruslan R. Salakhutdinov, Ryan P. Adams

The Restricted Boltzmann Machine (RBM) is a popular density model that is also good for extracting features.

Parallel MCMC with Generalized Elliptical Slice Sampling

no code implementations28 Oct 2012 Robert Nishihara, Iain Murray, Ryan P. Adams

Probabilistic models are conceptually powerful tools for finding structure in data, but their practical effectiveness is often limited by our ability to perform inference in them.

Practical Bayesian Optimization of Machine Learning Algorithms

4 code implementations NeurIPS 2012 Jasper Snoek, Hugo Larochelle, Ryan P. Adams

In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP).

Bayesian Optimization BIG-bench Machine Learning +1
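
A minimal sketch of the loop the abstract describes: model the objective with a GP and choose the next evaluation by expected improvement. The fixed RBF kernel, random candidate pool, and direct matrix inversion are illustrative simplifications; the paper's system additionally integrates out GP hyperparameters with MCMC and handles cost-aware and parallel variants.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, length_scale=0.2):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def bayes_opt(objective, bounds, n_init=3, n_iter=20, noise=1e-6, rng=None):
    """Minimize `objective` over a box by GP regression + expected improvement."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_init, len(lo)))
    y = np.array([objective(x) for x in X])
    for _ in range(n_iter):
        K_inv = np.linalg.inv(rbf_kernel(X, X) + noise * np.eye(len(X)))
        cand = rng.uniform(lo, hi, size=(512, len(lo)))  # random candidate pool
        Ks = rbf_kernel(cand, X)
        mu = Ks @ K_inv @ y
        # Predictive variance under a unit-variance RBF prior.
        var = np.clip(1.0 - np.sum((Ks @ K_inv) * Ks, axis=1), 1e-12, None)
        best = y.min()
        z = (best - mu) / np.sqrt(var)
        ei = (best - mu) * norm.cdf(z) + np.sqrt(var) * norm.pdf(z)
        x_next = cand[np.argmax(ei)]
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next))
    return X[np.argmin(y)], y.min()
```

For example, `bayes_opt(lambda x: (x[0] - 0.3) ** 2, (np.array([0.0]), np.array([1.0])))` minimizes a quadratic on [0, 1].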

Slice sampling covariance hyperparameters of latent Gaussian models

no code implementations NeurIPS 2010 Iain Murray, Ryan P. Adams

The Gaussian process (GP) is a popular way to specify dependencies between random variables in a probabilistic model.

The Gaussian Process Density Sampler

no code implementations NeurIPS 2008 Iain Murray, David MacKay, Ryan P. Adams

Samples drawn from the GPDS are consistent with exact, independent samples from a fixed density function that is a transformation of a function drawn from a Gaussian process prior.

Density Estimation
