Search Results for author: Sarah Tan

Found 14 papers, 6 papers with code

Error Discovery by Clustering Influence Embeddings

no code implementations • NeurIPS 2023 • Fulton Wang, Julius Adebayo, Sarah Tan, Diego Garcia-Olano, Narine Kokhlikyan

We present a method for identifying groups of test examples -- slices -- on which a model under-performs, a task now known as slice discovery.

Clustering
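
A minimal sketch of the general slice-discovery recipe described above, assuming the per-example influence embeddings have already been computed (constructing those embeddings is the paper's actual contribution); the function and variable names here are illustrative, not the paper's API:

```python
# Cluster precomputed influence embeddings and rank clusters by error rate.
import numpy as np
from sklearn.cluster import KMeans

def discover_slices(embeddings, correct, n_slices=10, seed=0):
    """embeddings: (n, d) influence embeddings (assumed precomputed);
    correct: (n,) booleans, True where the model predicted correctly."""
    labels = KMeans(n_clusters=n_slices, random_state=seed).fit_predict(embeddings)
    slices = []
    for k in range(n_slices):
        mask = labels == k
        slices.append((k, 1.0 - correct[mask].mean(), int(mask.sum())))
    # Worst-performing slices first: (slice id, error rate, size).
    return sorted(slices, key=lambda s: s[1], reverse=True)

# Example with random stand-in data:
rng = np.random.default_rng(0)
ranked = discover_slices(rng.normal(size=(500, 16)), rng.random(500) > 0.2)
print(ranked[:3])
```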

Practical Policy Optimization with Personalized Experimentation

no code implementations • 30 Mar 2023 • Mia Garrard, Hanson Wang, Ben Letham, Shaun Singh, Abbas Kazerouni, Sarah Tan, Zehui Wang, Yin Huang, Yichun Hu, Chad Zhou, Norm Zhou, Eytan Bakshy

Many organizations measure treatment effects via an experimentation platform to evaluate the causal effect of product variations prior to full-scale deployment.

Interpretable Personalized Experimentation

no code implementations • 5 Nov 2021 • Han Wu, Sarah Tan, Weiwei Li, Mia Garrard, Adam Obeng, Drew Dimmery, Shaun Singh, Hanson Wang, Daniel Jiang, Eytan Bakshy

Black-box heterogeneous treatment effect (HTE) models are increasingly being used to create personalized policies that assign individuals to their optimal treatments.
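
To illustrate how a black-box HTE model turns into a personalized policy, here is a minimal T-learner sketch: fit one outcome model per treatment arm and assign each individual to the arm with the higher predicted outcome. This is a generic construction on synthetic data, not the paper's method:

```python
# T-learner: one outcome model per arm; the induced policy treats an
# individual whenever the estimated treatment effect is positive.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
t = rng.integers(0, 2, size=1000)                      # randomized assignment
y = X[:, 0] * t + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)

m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])

tau_hat = m1.predict(X) - m0.predict(X)                # estimated HTE
policy = (tau_hat > 0).astype(int)                     # personalized policy
print("fraction assigned to treatment:", policy.mean())
```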

How Interpretable and Trustworthy are GAMs?

2 code implementations • 11 Jun 2020 • Chun-Hao Chang, Sarah Tan, Ben Lengerich, Anna Goldenberg, Rich Caruana

Generalized additive models (GAMs) have become a leading model class for interpretable machine learning.

Additive models • Inductive Bias • +1
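
For context, a GAM predicts through a sum of per-feature shape functions, g(E[y]) = f_1(x_1) + ... + f_d(x_d), so each feature's contribution can be inspected in isolation. A minimal sketch using InterpretML's Explainable Boosting Machine, a tree-based GAM of the kind the paper evaluates (assumes the `interpret` package is installed):

```python
# Fit a tree-based GAM and pull out its global, per-feature explanation.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

# Each term of the fitted model is one shape function f_i(x_i); the global
# explanation exposes them for plotting (e.g. with interpret's show()).
global_expl = ebm.explain_global()
print(ebm.predict_proba(X[:1]))
```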

Purifying Interaction Effects with the Functional ANOVA: An Efficient Algorithm for Recovering Identifiable Additive Models

1 code implementation • 12 Nov 2019 • Benjamin Lengerich, Sarah Tan, Chun-Hao Chang, Giles Hooker, Rich Caruana

Models which estimate main effects of individual variables alongside interaction effects have an identifiability challenge: effects can be freely moved between main effects and interaction effects without changing the model prediction.

Additive models
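
The identifiability problem is easy to demonstrate numerically: subtracting an arbitrary function of x1 from the main effect and adding it to the interaction changes the decomposition but not a single prediction. This is exactly the freedom the paper's functional-ANOVA purification removes by requiring interactions to be mean-zero in each variable. A small self-contained check:

```python
# Two different main-effect/interaction decompositions, identical predictions.
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.normal(size=1000), rng.normal(size=1000)

f1 = lambda a: a ** 2          # main effect of x1
f12 = lambda a, b: a * b       # interaction of x1 and x2
g = lambda a: 3.0 * a          # arbitrary function of x1 alone

pred_a = f1(x1) + f12(x1, x2)
pred_b = (f1(x1) - g(x1)) + (f12(x1, x2) + g(x1))  # g moved into the interaction

print(np.allclose(pred_a, pred_b))  # True: predictions alone cannot identify g
```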

"Why Should You Trust My Explanation?" Understanding Uncertainty in LIME Explanations

no code implementations • 29 Apr 2019 • Yu-jia Zhang, Kuangyan Song, Yiming Sun, Sarah Tan, Madeleine Udell

Methods for interpreting black-box machine learning models increase the transparency of model outcomes and, in turn, generate insight into the reliability and fairness of the underlying algorithms.

Fairness • General Classification • +2
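
One simple way to observe the uncertainty the paper studies is to re-run LIME on the same instance under different random seeds and measure the spread of the resulting feature weights; the model, data, and seed count below are stand-ins:

```python
# Re-run LIME with different seeds and report per-feature weight variability.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

runs = []
for seed in range(20):
    explainer = LimeTabularExplainer(X, random_state=seed)
    exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
    runs.append(dict(exp.as_list()))

for name in runs[0]:
    vals = [r[name] for r in runs]
    print(f"{name}: mean={np.mean(vals):+.3f} std={np.std(vals):.3f}")
```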

Axiomatic Interpretability for Multiclass Additive Models

1 code implementation • 22 Oct 2018 • Xuezhou Zhang, Sarah Tan, Paul Koch, Yin Lou, Urszula Chajewska, Rich Caruana

In the first part of this paper, we generalize a state-of-the-art GAM learning algorithm based on boosted trees to the multiclass setting, and show that this multiclass algorithm outperforms existing GAM learning algorithms and sometimes matches the performance of full complexity models such as gradient boosted trees.

Additive models • Binary Classification • +1
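
Concretely, a multiclass additive model keeps one set of shape functions per class and turns the per-class additive scores into probabilities with a softmax: score_c(x) = sum_i f_{c,i}(x_i), p(y=c|x) = softmax(score(x))_c. A minimal numpy sketch with hand-picked shape functions (the paper learns them with boosted trees):

```python
# Multiclass additive model: per-class sums of shape functions, then softmax.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Two features, three classes: shape_fns[c][i] scores feature i for class c.
shape_fns = [
    [np.sin, np.cos],             # class 0
    [np.abs, np.tanh],            # class 1
    [np.square, lambda v: -v],    # class 2
]

def predict_proba(X):
    scores = np.stack(
        [sum(f(X[:, i]) for i, f in enumerate(fns)) for fns in shape_fns],
        axis=1,
    )
    return softmax(scores)

X = np.random.default_rng(0).normal(size=(4, 2))
print(predict_proba(X))  # each row sums to 1
```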

Investigating Human + Machine Complementarity for Recidivism Predictions

no code implementations • 28 Aug 2018 • Sarah Tan, Julius Adebayo, Kori Inkpen, Ece Kamar

Dressel and Farid (2018) asked Mechanical Turk workers to evaluate a subset of defendants in the ProPublica COMPAS data for risk of recidivism, and concluded that COMPAS predictions were no more accurate or fair than predictions made by humans.

Decision Making • Fairness

Considerations When Learning Additive Explanations for Black-Box Models

1 code implementation • ICLR 2019 • Sarah Tan, Giles Hooker, Paul Koch, Albert Gordo, Rich Caruana

In this paper, we study global additive explanations for non-additive models, focusing on four explanation methods: partial dependence, Shapley explanations adapted to a global setting, distilled additive explanations, and gradient-based explanations.

Additive models
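
Of the four methods, partial dependence is the simplest to reproduce: fix one feature at each grid value, average the model's predictions over the data, and read the curve as that feature's global effect. A minimal scikit-learn sketch (assumes scikit-learn 1.3+, where the returned keys are `grid_values` and `average`):

```python
# One-dimensional partial dependence curve for a single feature.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import partial_dependence

X, y = load_diabetes(return_X_y=True)
black_box = RandomForestRegressor(random_state=0).fit(X, y)

pd = partial_dependence(black_box, X, features=[2], grid_resolution=20)
print(pd["grid_values"][0][:5])  # grid points for feature 2
print(pd["average"][0][:5])      # predictions averaged over the data
```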

A Double Parametric Bootstrap Test for Topic Models

no code implementations • 19 Nov 2017 • Skyler Seto, Sarah Tan, Giles Hooker, Martin T. Wells

Non-negative matrix factorization (NMF) is a technique for finding latent representations of data.

Topic Models
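
Concretely, NMF factors a nonnegative data matrix V into nonnegative factors W and H with V ≈ WH; in topic modeling, W holds per-document topic loadings and H per-topic term weights. A minimal scikit-learn sketch on a stand-in document-term matrix:

```python
# Factor a nonnegative matrix into topic loadings and topic-term weights.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = rng.random((100, 50))                 # stand-in document-term matrix

model = NMF(n_components=5, init="nndsvd", random_state=0, max_iter=500)
W = model.fit_transform(V)                # documents x topics
H = model.components_                     # topics x terms
print(np.linalg.norm(V - W @ H))          # reconstruction error
```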

Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation

1 code implementation • 17 Oct 2017 • Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou

We compare the student model trained with distillation to a second un-distilled transparent model trained on ground-truth outcomes, and use differences between the two models to gain insight into the black-box model.
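
A minimal sketch of that setup, with shallow trees standing in for the transparent models (the paper uses additive models) and a random forest standing in for the audited black box:

```python
# Distill-and-compare: mimic the black box with a transparent student,
# fit a second transparent model on true outcomes, then compare the two.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor

X, y = load_breast_cancer(return_X_y=True)

black_box = RandomForestClassifier(random_state=0).fit(X, y)
scores = black_box.predict_proba(X)[:, 1]                    # risk scores

student = DecisionTreeRegressor(max_depth=3).fit(X, scores)  # distilled model
baseline = DecisionTreeRegressor(max_depth=3).fit(X, y)      # ground-truth model

# Large gaps flag regions where the black box deviates from observed outcomes.
gap = student.predict(X) - baseline.predict(X)
print(gap.min(), gap.max())
```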
