no code implementations • 20 Mar 2024 • Yifan Chen, Mark Goldstein, Mengjian Hua, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden
We propose a framework for probabilistic forecasting of dynamical systems based on generative modeling.
1 code implementation • 16 Jan 2024 • Nanye Ma, Mark Goldstein, Michael S. Albergo, Nicholas M. Boffi, Eric Vanden-Eijnden, Saining Xie
We present Scalable Interpolant Transformers (SiT), a family of generative models built on the backbone of Diffusion Transformers (DiT).
no code implementations • 5 Oct 2023 • Michael S. Albergo, Mark Goldstein, Nicholas M. Boffi, Rajesh Ranganath, Eric Vanden-Eijnden
In this work, using the framework of stochastic interpolants, we formalize how to couple the base and the target densities, whereby samples from the base are computed conditionally given samples from the target in a way that is different from (but does not preclude) incorporating information about class labels or continuous embeddings.
no code implementations • 22 Mar 2023 • Yuxuan Hu, Albert Lui, Mark Goldstein, Mukund Sudarshan, Andrea Tinsay, Cindy Tsui, Samuel Maidman, John Medamana, Neil Jethani, Aahlad Puli, Vuthy Nguy, Yindalon Aphinyanaphongs, Nicholas Kiefer, Nathaniel Smilowitz, James Horowitz, Tania Ahuja, Glenn I Fishman, Judith Hochman, Stuart Katz, Samuel Bernard, Rajesh Ranganath
We developed a deep learning-based risk stratification tool, called CShock, for patients admitted to the cardiac ICU with acute decompensated heart failure and/or myocardial infarction, to predict the onset of cardiogenic shock.
no code implementations • 14 Feb 2023 • Raghav Singhal, Mark Goldstein, Rajesh Ranganath
For example, extending the inference process with auxiliary variables leads to improved sample quality.
1 code implementation • 23 Aug 2022 • Xintian Han, Mark Goldstein, Rajesh Ranganath
Survival MDN applies an invertible positive function to the output of Mixture Density Networks (MDNs).
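The construction above can be sketched concretely: an MDN models a distribution on the real line, and composing with an invertible positive function (softplus is used here as an illustrative choice; the function names and mixture parameters below are hypothetical, not from the paper) yields a distribution over positive survival times. The survival function is then S(t) = 1 − F_mix(g⁻¹(t)).

```python
import numpy as np
from scipy.stats import norm

def mixture_cdf(x, weights, means, stds):
    # CDF of a Gaussian mixture -- the distribution an MDN parameterizes.
    return float(np.sum(weights * norm.cdf(x, loc=means, scale=stds)))

def softplus_inverse(t):
    # Inverse of softplus(x) = log(1 + exp(x)); maps positive times back to R.
    return np.log(np.expm1(t))

def survival_fn(t, weights, means, stds):
    # S(t) = P(T > t) = 1 - F_mix(g^{-1}(t)), with g = softplus as the
    # invertible positive function applied to the MDN's output.
    return 1.0 - mixture_cdf(softplus_inverse(t), weights, means, stds)
```

Because g is strictly increasing and positive, the resulting survival function is a valid monotone-decreasing function on t > 0, and densities remain tractable via the change of variables.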
1 code implementation • 1 Dec 2021 • Mark Goldstein, Jörn-Henrik Jacobsen, Olina Chau, Adriel Saporta, Aahlad Puli, Rajesh Ranganath, Andrew C. Miller
Enforcing such independencies requires nuisances to be observed during training.
1 code implementation • NeurIPS 2021 • Xintian Han, Mark Goldstein, Aahlad Puli, Thomas Wies, Adler J Perotte, Rajesh Ranganath
When the loss is proper, we show that the games always have the true failure and censoring distributions as a stationary point.
no code implementations • 14 Jul 2021 • Lily H. Zhang, Mark Goldstein, Rajesh Ranganath
Deep generative models (DGMs) seem a natural fit for detecting out-of-distribution (OOD) inputs, but such models have been shown to assign higher probabilities or densities to OOD images than images from the training distribution.
1 code implementation • NeurIPS 2020 • Mark Goldstein, Xintian Han, Aahlad Puli, Adler J. Perotte, Rajesh Ranganath
A survival model's calibration can be measured using, for instance, distributional calibration (D-CALIBRATION) [Haider et al., 2020], which computes the squared difference between the observed and predicted number of events within different time intervals.
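A minimal sketch of the D-calibration statistic described above, for the uncensored case: under perfect calibration, the model CDF evaluated at each subject's observed event time is Uniform(0, 1), so each of the equal-probability bins should contain an equal share of subjects. The function name and binning choice below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def d_calibration(event_quantiles, n_bins=10):
    # event_quantiles[i] = F_i(t_i): the model's predicted CDF evaluated at
    # subject i's observed event time. Under perfect calibration these values
    # are Uniform(0, 1), so each bin should hold 1/n_bins of the subjects.
    counts, _ = np.histogram(event_quantiles, bins=n_bins, range=(0.0, 1.0))
    observed = counts / len(event_quantiles)     # observed fraction per bin
    expected = 1.0 / n_bins                      # predicted fraction per bin
    # Sum of squared differences between observed and predicted event counts.
    return float(np.sum((observed - expected) ** 2))
```

A well-calibrated model drives this statistic toward zero, while a model whose predicted quantiles pile up in a few bins scores higher.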