Search Results for author: Isaac Lage

Found 8 papers, 2 papers with code

(When) Are Contrastive Explanations of Reinforcement Learning Helpful?

no code implementations • 14 Nov 2022 • Sanjana Narayanan, Isaac Lage, Finale Doshi-Velez

We find that complete explanations are generally more effective when they are the same size or smaller than a contrastive explanation of the same policy, and no worse when they are larger.

Reinforcement Learning (RL)

Promises and Pitfalls of Black-Box Concept Learning Models

1 code implementation • 24 Jun 2021 • Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan

Machine learning models that incorporate concept learning as an intermediate step in their decision making process can match the performance of black-box predictive models while retaining the ability to explain outcomes in human understandable terms.

Decision Making
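The concept-learning idea this abstract describes can be illustrated with a toy sketch: the model predicts a small set of human-interpretable concepts from the raw inputs, then makes its final decision from those concepts alone, so each prediction can be explained in concept terms. Everything below (the data, the weight matrices, the binary concepts) is a hypothetical stand-in, not the paper's actual model.

```python
import numpy as np

# Toy concept-bottleneck pipeline: inputs X -> predicted concepts -> label.
rng = np.random.default_rng(0)

X = rng.normal(size=(100, 4))                   # 100 samples, 4 raw features
W_concept = rng.normal(size=(4, 2))             # stand-in for a trained X -> concept map
concepts = (X @ W_concept > 0).astype(float)    # two predicted binary concepts
w_label = np.array([1.0, -1.0])                 # stand-in for a trained concept -> label map
y_hat = (concepts @ w_label > 0).astype(int)    # final prediction

# Because the label depends only on the concepts, each prediction can be
# narrated in human terms ("predicted 1 because concept A is on and B is off").
print(y_hat.shape)
```

The bottleneck is what buys interpretability: if the label head saw the raw features directly, the concept layer could be bypassed, which is one of the pitfalls the paper examines.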

When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making

no code implementations • 12 Nov 2020 • Sean McGrath, Parth Mehta, Alexandra Zytek, Isaac Lage, Himabindu Lakkaraju

As machine learning (ML) models are increasingly being employed to assist human decision makers, it becomes critical to provide these decision makers with relevant inputs which can help them decide if and how to incorporate model predictions into their decision making.

Decision Making

Exploring Computational User Models for Agent Policy Summarization

1 code implementation • 30 May 2019 • Isaac Lage, Daphna Lifschitz, Finale Doshi-Velez, Ofra Amir

We introduce an imitation learning-based approach to policy summarization; we demonstrate through computational simulations that a mismatch between the model used to extract a summary and the model used to reconstruct the policy results in worse reconstruction quality; and we demonstrate through a human-subject study that people use different models to reconstruct policies in different contexts, and that matching the summary extraction model to these can improve performance.

Imitation Learning
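The summarization-and-reconstruction loop from this abstract can be sketched in a few lines: a summary is a handful of (state, action) demonstrations from the agent's policy, and an imitation model reconstructs the full policy from them. The toy policy, the random summary selection, and the 1-nearest-neighbor imitator below are all illustrative assumptions, not the paper's actual models.

```python
import numpy as np

# Sketch of policy summarization + reconstruction:
# pick a few demonstrations, then imitate the policy from them.
rng = np.random.default_rng(1)

states = rng.normal(size=(50, 3))                 # toy continuous state space
policy_actions = (states[:, 0] > 0).astype(int)   # the agent's (toy) policy

summary_idx = rng.choice(50, size=5, replace=False)   # summary: 5 demonstrations
summary_states = states[summary_idx]
summary_actions = policy_actions[summary_idx]

def imitate(s):
    # 1-NN reconstruction model: copy the action of the nearest summary state.
    nearest = np.argmin(np.linalg.norm(summary_states - s, axis=1))
    return summary_actions[nearest]

reconstructed = np.array([imitate(s) for s in states])
agreement = (reconstructed == policy_actions).mean()
print(agreement)
```

The paper's key point maps onto this sketch directly: if the model used to *select* the summary differs from the model a person uses to *reconstruct* the policy (here, 1-NN), the agreement score degrades.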

An Evaluation of the Human-Interpretability of Explanation

no code implementations • 31 Jan 2019 • Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, Finale Doshi-Velez

Recent years have seen a boom in interest in machine learning systems that can provide a human-understandable rationale for their predictions or decisions.

BIG-bench Machine Learning
