Search Results for author: Roy H. Perlis

Found 3 papers, 0 papers with code

Preferential Mixture-of-Experts: Interpretable Models that Rely on Human Expertise as much as Possible

no code implementations • 13 Jan 2021 • Melanie F. Pradier, Javier Zazo, Sonali Parbhoo, Roy H. Perlis, Maurizio Zazzi, Finale Doshi-Velez

We propose Preferential MoE, a novel human-ML mixture-of-experts model that augments human expertise in decision making with a data-based classifier only when necessary for predictive performance.

Decision Making • Management
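The entry above describes a model that prefers a human expert's rule and falls back to a data-driven classifier only when needed. A minimal illustrative sketch of that gating idea is below; the rule, classifier, and threshold are all hypothetical stand-ins, not the authors' implementation:

```python
import numpy as np

def human_rule(x):
    """Hypothetical expert rule: fires only when the first feature is extreme;
    otherwise abstains (returns None) and defers to the ML expert."""
    if x[0] > 1.0:
        return 1
    if x[0] < -1.0:
        return 0
    return None

def ml_classifier(x, w):
    """Simple logistic classifier standing in for the data-based expert."""
    return int(1.0 / (1.0 + np.exp(-w @ x)) > 0.5)

def preferential_predict(x, w):
    """Use the human rule whenever it applies; call the ML expert only when
    the rule abstains, mirroring the 'only when necessary' gating idea."""
    rule_out = human_rule(x)
    return rule_out if rule_out is not None else ml_classifier(x, w)
```

In the paper this trade-off is learned rather than hard-coded, but the sketch shows the structural point: the ML expert is consulted only on inputs the human rule does not cover.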

Prediction-Constrained Topic Models for Antidepressant Recommendation

no code implementations • 1 Dec 2017 • Michael C. Hughes, Gabriel Hope, Leah Weiner, Thomas H. McCoy, Roy H. Perlis, Erik B. Sudderth, Finale Doshi-Velez

Supervisory signals can help topic models discover low-dimensional data representations that are more interpretable for clinical tasks.

Topic Models

Prediction-Constrained Training for Semi-Supervised Mixture and Topic Models

no code implementations • 23 Jul 2017 • Michael C. Hughes, Leah Weiner, Gabriel Hope, Thomas H. McCoy Jr., Roy H. Perlis, Erik B. Sudderth, Finale Doshi-Velez

Supervisory signals have the potential to make low-dimensional data representations, like those learned by mixture and topic models, more interpretable and useful.

Sentiment Analysis • Topic Models
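Both prediction-constrained papers above train a low-dimensional representation so that it both explains the data and supports a downstream prediction. A minimal sketch of one common way to write such an objective — an unsupervised reconstruction term plus a penalty-weighted supervised term — is below; the squared-error reconstruction, logistic loss, and penalty weight `lam` are illustrative assumptions, not the papers' exact formulation:

```python
import numpy as np

def reconstruction_loss(X, W, H):
    """Unsupervised term: squared error of a low-rank reconstruction X ≈ H @ W."""
    return float(np.mean((X - H @ W) ** 2))

def prediction_loss(H, y, beta):
    """Supervised term: logistic loss of predicting binary labels y in {0, 1}
    from the low-dimensional representation H."""
    s = 2 * y - 1  # map labels to {-1, +1}
    return float(np.mean(np.log1p(np.exp(-s * (H @ beta)))))

def pc_objective(X, y, W, H, beta, lam):
    """Penalty-form prediction-constrained objective: fit the data, but keep
    the representation predictive; lam trades the two goals off."""
    return reconstruction_loss(X, W, H) + lam * prediction_loss(H, y, beta)
```

With `lam = 0` this reduces to purely unsupervised training; increasing `lam` tightens the prediction constraint, which is the supervisory signal the abstracts refer to.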
