Search Results for author: Mark E. Nunnally

Found 3 papers, 2 papers with code

LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs

1 code implementation · 2 Aug 2023 · Benjamin J. Lengerich, Sebastian Bordt, Harsha Nori, Mark E. Nunnally, Yin Aphinyanaphongs, Manolis Kellis, Rich Caruana

We show that large language models (LLMs) are remarkably good at working with interpretable models that decompose complex outcomes into univariate graph-represented components.

Additive models
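The abstract above refers to interpretable models that decompose a complex outcome into a sum of univariate component functions, i.e. additive models. As a hedged illustration (not the paper's actual method, which works with glass-box GAMs), here is a minimal NumPy sketch of fitting such a decomposition by backfitting over binned per-feature contributions; `fit_additive_model`, the bin count, and the quantile binning are all illustrative choices:

```python
import numpy as np

def fit_additive_model(X, y, n_bins=8, n_rounds=20):
    """Illustrative backfitting: learn one binned univariate function per
    feature so that y is approximated by intercept + sum_j f_j(x_j)."""
    n, d = X.shape
    intercept = y.mean()
    # quantile-based bin edges and bin index of each sample, per feature
    edges = [np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
             for j in range(d)]
    bins = [np.digitize(X[:, j], edges[j]) for j in range(d)]
    contrib = [np.zeros(n_bins) for _ in range(d)]
    pred = np.full(n, intercept)
    for _ in range(n_rounds):
        for j in range(d):
            # residual with feature j's current contribution removed
            resid = y - (pred - contrib[j][bins[j]])
            for b in range(n_bins):
                mask = bins[j] == b
                if mask.any():
                    contrib[j][b] = resid[mask].mean()
            contrib[j] -= contrib[j].mean()  # center; intercept holds the mean
            pred = intercept + sum(contrib[k][bins[k]] for k in range(d))
    return intercept, edges, contrib
```

Each learned `contrib[j]` is a step function of one feature, which is what makes the model's components easy to plot and inspect, the property the paper's LLM analysis relies on.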

Estimating Discontinuous Time-Varying Risk Factors and Treatment Benefits for COVID-19 with Interpretable ML

no code implementations · 15 Nov 2022 · Benjamin Lengerich, Mark E. Nunnally, Yin Aphinyanaphongs, Rich Caruana

Treatment protocols, disease understanding, and viral characteristics changed over the course of the COVID-19 pandemic; as a result, the risks associated with patient comorbidities and biomarkers also changed.

Additive models

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values

2 code implementations · 30 Jun 2022 · Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana

Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed.

Additive models · BIG-bench Machine Learning · +1
