1 code implementation • 19 Oct 2023 • Sulin Liu, Peter J. Ramadge, Ryan P. Adams
We introduce marginalization models (MaMs), a new family of generative models for high-dimensional discrete data.
1 code implementation • 3 Mar 2022 • Sulin Liu, Qing Feng, David Eriksson, Benjamin Letham, Eytan Bakshy
Bayesian optimization (BO) is a powerful approach to sample-efficient optimization of black-box objective functions.
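To make the sample-efficiency idea concrete, here is a minimal BO loop in plain numpy — not the method of this paper, just a generic sketch: a GP surrogate with a fixed RBF kernel, an expected-improvement acquisition maximized over a candidate grid, and an illustrative 1D objective. The kernel lengthscale, noise level, and objective are all assumptions for illustration.

```python
import numpy as np
from math import erf

def gp_posterior(X_train, y_train, X_test, lengthscale=0.5, noise=1e-4):
    # Posterior mean/std of a zero-mean GP with an RBF kernel (minimal surrogate).
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale ** 2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = k(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v * v).sum(axis=0), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best_y):
    # EI for minimization: E[max(best_y - f(x), 0)] under the GP posterior.
    z = (best_y - mu) / sigma
    Phi = 0.5 * (1 + np.vectorize(erf)(z / np.sqrt(2)))
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (best_y - mu) * Phi + sigma * phi

def objective(x):
    # Stand-in "expensive" black-box objective (hypothetical).
    return (x - 2.0) ** 2 + 0.05 * np.sin(5 * x)

X = np.array([0.0, 1.0, 4.0])            # initial design points
y = objective(X)
candidates = np.linspace(0.0, 5.0, 200)
for _ in range(10):                       # BO loop: fit surrogate, pick argmax EI, query
    mu, sigma = gp_posterior(X, y, candidates)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
best_x = X[np.argmin(y)]                  # incumbent after 13 total evaluations
```

The point of the sketch is the sample budget: 13 total objective evaluations suffice to localize the minimum, because each query is chosen where the surrogate predicts the largest expected improvement rather than at random.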
1 code implementation • 22 Dec 2021 • Athindran Ramesh Kumar, Sulin Liu, Jaime F. Fisac, Ryan P. Adams, Peter J. Ramadge
In practice, we have inaccurate knowledge of the system dynamics, which can lead to unsafe behaviors due to unmodeled residual dynamics.
1 code implementation • NeurIPS 2020 • Sulin Liu, Xingyuan Sun, Peter J. Ramadge, Ryan P. Adams
One of the appeals of the GP framework is that the marginal likelihood of the kernel hyperparameters is often available in closed form, enabling optimization and sampling procedures to fit these hyperparameters to data.
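The closed-form marginal likelihood mentioned here is easy to demonstrate. Below is a minimal numpy sketch (not this paper's contribution): for an RBF kernel with a fixed noise level, the log marginal likelihood log p(y | X, θ) = −½ yᵀK⁻¹y − ½ log|K| − (n/2) log 2π is computed via a Cholesky factorization, and the lengthscale is fit by maximizing it over a grid. The data, noise variance, and grid are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, lengthscale, noise=1e-2):
    # Squared-exponential kernel matrix plus observation noise on the diagonal.
    d2 = (X[:, None] - X[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale ** 2) + noise * np.eye(len(X))

def log_marginal_likelihood(X, y, lengthscale):
    # log p(y | X, lengthscale) = -1/2 y^T K^{-1} y - 1/2 log|K| - n/2 log(2*pi),
    # evaluated stably via the Cholesky factor L of K.
    K = rbf_kernel(X, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()          # 1/2 log|K| = sum log L_ii
            - 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = np.linspace(0, 5, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)   # toy regression data

# Fit the lengthscale by maximizing the closed-form marginal likelihood.
grid = np.linspace(0.1, 3.0, 60)
best = max(grid, key=lambda ls: log_marginal_likelihood(X, y, ls))
```

In practice one would maximize this quantity with a gradient-based optimizer (its gradient is also available in closed form), but the grid search makes the "optimize the marginal likelihood to fit hyperparameters" workflow explicit.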
no code implementations • 27 Feb 2020 • Hossein Valavi, Sulin Liu, Peter J. Ramadge
We show that, in contrast to the general situation, the minimum eigenvalue of strict saddles in $\mathcal{M}_{0}$ is uniformly bounded below zero.
no code implementations • 13 Dec 2016 • Sulin Liu, Sinno Jialin Pan, Qirong Ho
Due to the heavy communication cost of transmitting data, as well as data privacy and security concerns, it is infeasible to send the data of different tasks to a master machine to perform multi-task learning.