no code implementations • 6 Dec 2023 • Joel Stremmel, Ardavan Saeedi, Hamid Hassanzadeh, Sanjit Batra, Jeffrey Hertzberg, Jaime Murillo, Eran Halperin
Our method uses the idea of a classification model explainer to generate questions and answers about medical concepts corresponding to medical codes.
no code implementations • 16 Nov 2023 • Anna Wong, Shu Ge, Nassim Oufattole, Adam Dejl, Megan Su, Ardavan Saeedi, Li-wei H. Lehman
In this work, we use knowledge distillation via constrained variational inference: a powerful "teacher" neural network with high predictive power is distilled into a "student" latent variable model that learns interpretable hidden state representations while retaining high predictive performance for sepsis outcome prediction.
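The paper's distillation is done through constrained variational inference; as a hedged, much simpler sketch of the general teacher-student idea, the snippet below implements only the generic soft-label distillation loss (hard-label cross-entropy plus a temperature-scaled KL term to the teacher). All names and the `alpha`/`T` weighting are illustrative assumptions, not the paper's objective.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Generic distillation loss (illustrative, not the paper's CVI objective):
    (1 - alpha) * hard-label CE + alpha * T^2 * KL(teacher || student)."""
    p_t = softmax(teacher_logits / T)          # softened teacher predictions
    p_s = softmax(student_logits / T)          # softened student predictions
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1).mean()
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

When student and teacher logits coincide, the KL term vanishes and only the hard-label cross-entropy remains.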
1 code implementation • 14 Nov 2022 • Adam Dejl, Harsh Deep, Jonathan Fei, Ardavan Saeedi, Li-wei H. Lehman
Models developed using our framework benefit from the full range of RSPN capabilities, including the ability to model the full distribution of the data, to seamlessly handle latent variables, missing values, and categorical data, and to efficiently perform marginal and conditional inference.
no code implementations • ICLR 2020 • Igor Lovchinsky, Alon Daks, Israel Malkin, Pouya Samangouei, Ardavan Saeedi, Yang Liu, Swami Sankaranarayanan, Tomer Gafner, Ben Sternlieb, Patrick Maher, Nathan Silberman
In most machine learning tasks, unambiguous ground-truth labels can easily be acquired.
1 code implementation • CVPR 2019 • Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel C. Alexander, Nathan Silberman
We provide a theoretical argument for why the regularization is essential to our approach, in both the single-annotator and multiple-annotator cases.
no code implementations • ECCV 2018 • Pouya Samangouei, Ardavan Saeedi, Liam Nakagawa, Nathan Silberman
We introduce a new method for interpreting computer vision models: visually perceptible, decision-boundary crossing transformations.
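The paper's transformations are visually perceptible and GAN-generated; as a hedged toy sketch of the underlying "cross the decision boundary" idea only, the snippet below computes the minimal perturbation that carries an input across a linear classifier's boundary. The classifier `w`, `b` and the `overshoot` factor are hypothetical, and this is not the paper's method.

```python
import numpy as np

# Toy linear classifier f(x) = sign(w @ x + b). The smallest perturbation
# that reaches the decision boundary is the orthogonal projection onto the
# separating hyperplane; overshooting it slightly flips the prediction.
w = np.array([2.0, -1.0])
b = 0.5

def decision(x):
    return np.sign(w @ x + b)

def boundary_crossing(x, overshoot=1.001):
    delta = -(w @ x + b) / (w @ w) * w  # minimal step onto the boundary
    return x + overshoot * delta

x = np.array([1.0, 1.0])        # w @ x + b = 1.5, so class +1
x_cross = boundary_crossing(x)  # lands just on the other side
```

For deep models the same idea requires generative machinery to keep the crossing input on the data manifold, which is what makes the transformations visually perceptible.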
no code implementations • 17 Apr 2017 • Ardavan Saeedi, Matthew D. Hoffman, Stephen J. DiVerdi, Asma Ghandeharioun, Matthew J. Johnson, Ryan P. Adams
Professional-grade software applications are powerful but complicated: expert users can achieve impressive results, but novices often struggle to complete even basic tasks.
1 code implementation • 8 Jun 2016 • Tejas D. Kulkarni, Ardavan Saeedi, Simanta Gautam, Samuel J. Gershman
The successor map represents the expected future state occupancy from any given state and the reward predictor maps states to scalar rewards.
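This factorization means state values decompose as a product of the successor map and the reward weights. A minimal sketch, assuming a tiny hypothetical 3-state chain under a fixed policy, where the successor map has the closed form M = (I - γP)⁻¹:

```python
import numpy as np

gamma = 0.9
# Hypothetical fixed-policy transition matrix: 0 -> 1 -> 2 -> 2 (absorbing)
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])

# Successor map M[s, s'] = expected discounted future occupancy of s' from s
M = np.linalg.inv(np.eye(3) - gamma * P)

# Reward predictor: reward only in state 2
w = np.array([0.0, 0.0, 1.0])

# Values factor through the successor representation: V = M @ w
V = M @ w
```

Because reward and dynamics are factored, changing `w` (e.g., moving the reward) revalues every state without relearning `M`.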
4 code implementations • NeurIPS 2016 • Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, Joshua B. Tenenbaum
Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms.
1 code implementation • ACL 2016 • Kayhan Batmanghelich, Ardavan Saeedi, Karthik Narasimhan, Sam Gershman
In this paper, we propose to use the von Mises-Fisher distribution to model the density of words over a unit sphere.
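As a hedged illustration of the density itself (not the paper's topic model), the vMF log-density in three dimensions has a closed-form normalizer, κ / (4π sinh κ), which makes it easy to write down directly; the function name below is an assumption:

```python
import numpy as np

def vmf_logpdf_3d(x, mu, kappa):
    """Log-density of a 3-D von Mises-Fisher distribution on the unit sphere:
    f(x; mu, kappa) = C(kappa) * exp(kappa * mu . x),
    where for p = 3 the normalizer is C(kappa) = kappa / (4*pi*sinh(kappa))."""
    log_c = np.log(kappa) - np.log(4 * np.pi * np.sinh(kappa))
    return log_c + kappa * np.dot(mu, x)
```

The concentration κ plays the role of an inverse variance: as κ → 0 the density approaches the uniform density 1/(4π) on the sphere, and large κ concentrates mass around the mean direction μ.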
no code implementations • 20 Feb 2016 • Ardavan Saeedi, Matthew Hoffman, Matthew Johnson, Ryan Adams
We propose the segmented iHMM (siHMM), a hierarchical infinite hidden Markov model (iHMM) that supports a simple, efficient inference scheme.
no code implementations • 31 May 2015 • Ardavan Saeedi, Vlad Firoiu, Vikash Mansinghka
Models of complex systems are often formalized as sequential software simulators: computationally intensive programs that iteratively build up probable system configurations given parameters and initial conditions.
no code implementations • 1 Mar 2015 • Jonathan H. Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash K. Mansinghka
We derive the small-variance asymptotics for parametric and nonparametric MJPs for both directly observed and hidden state models.
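As a hedged illustration of the small-variance idea in its best-known setting (the spherical Gaussian mixture, not the MJP models treated in the paper): scaling all variances by $\sigma^2$ and letting $\sigma^2 \to 0$ collapses a probabilistic objective to a combinatorial one.

```latex
% Complete-data negative log-likelihood of a spherical Gaussian mixture:
-\log p(x, z \mid \mu) = \sum_n \frac{\|x_n - \mu_{z_n}\|^2}{2\sigma^2} + \text{const}.
% Multiplying by 2\sigma^2 and taking \sigma^2 \to 0, the MAP assignments
% become hard nearest-centroid assignments, leaving the k-means objective:
\lim_{\sigma^2 \to 0} \; 2\sigma^2 \left(-\log p(x, z \mid \mu)\right)
  \;=\; \sum_n \min_k \|x_n - \mu_k\|^2 .
```

The paper carries out the analogous limit for Markov jump processes, in both the directly observed and hidden-state settings.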
no code implementations • 31 Dec 2014 • Jonathan H. Huggins, Ardavan Saeedi, Matthew J. Johnson
In this note we provide detailed derivations of two versions of small-variance asymptotics for hierarchical Dirichlet process (HDP) mixture models and the HDP hidden Markov model (HDP-HMM, a.k.a. the infinite HMM).
no code implementations • 24 Feb 2014 • Ardavan Saeedi, Tejas D. Kulkarni, Vikash Mansinghka, Samuel Gershman
Like Monte Carlo, DPVI can handle multiple modes, and yields exact results in a well-defined limit.