Search Results for author: Liz Sonenberg

Found 11 papers, 1 paper with code

Towards the New XAI: A Hypothesis-Driven Approach to Decision Support Using Evidence

no code implementations • 2 Feb 2024 • Thao Le, Tim Miller, Liz Sonenberg, Ronal Singh

Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches.

Decision Making

Explaining Model Confidence Using Counterfactuals

no code implementations • 10 Mar 2023 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg

In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction.

Counterfactual Explanation

Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence

no code implementations • 6 Jun 2022 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg

In this paper, we show that counterfactual explanations of confidence scores help users better understand and better trust an AI model's prediction in human-subject studies.

Counterfactual Explanation

Efficient Multi-agent Epistemic Planning: Teaching Planners About Nested Belief

no code implementations • 6 Oct 2021 • Christian Muise, Vaishak Belle, Paolo Felli, Sheila McIlraith, Tim Miller, Adrian R. Pearce, Liz Sonenberg

Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents.

Directive Explanations for Actionable Explainability in Machine Learning Applications

no code implementations • 3 Feb 2021 • Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso, Frank Vetere

This paper investigates the prospects of using directive explanations to assist people in achieving recourse of machine learning decisions.

BIG-bench Machine Learning, Counterfactual

Distal Explanations for Model-free Explainable Reinforcement Learning

no code implementations • 28 Jan 2020 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere

In this paper we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for 'why' and 'why not' questions.

Reinforcement Learning (RL)

Explainable Reinforcement Learning Through a Causal Lens

2 code implementations • 27 May 2019 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere

In this paper, we use causal models to derive causal explanations of behaviour of reinforcement learning agents.

Counterfactual, Reinforcement Learning +3

A Grounded Interaction Protocol for Explainable Artificial Intelligence

no code implementations • 5 Mar 2019 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere

Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate the internal decisions, behaviours and actions to the interacting humans.

Explainable Artificial Intelligence (XAI)

Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences

no code implementations • 2 Dec 2017 • Tim Miller, Piers Howe, Liz Sonenberg

As a result, programmers design software for themselves rather than for their target audience, a phenomenon referred to as the 'inmates running the asylum'.

Philosophy

Social planning for social HRI

no code implementations • 21 Feb 2016 • Liz Sonenberg, Tim Miller, Adrian Pearce, Paolo Felli, Christian Muise, Frank Dignum

Making a computational agent 'social' has implications for how it perceives itself and the environment in which it is situated, including the ability to recognise the behaviours of others.
