no code implementations • 2 Feb 2024 • Thao Le, Tim Miller, Liz Sonenberg, Ronal Singh
Prior research on AI-assisted human decision-making has explored several different explainable AI (XAI) approaches.
no code implementations • 10 Mar 2023 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show that counterfactual explanations of confidence scores help study participants to better understand and better trust a machine learning model's prediction.
no code implementations • 6 Jun 2022 • Thao Le, Tim Miller, Ronal Singh, Liz Sonenberg
In this paper, we show that counterfactual explanations of confidence scores help users better understand and better trust an AI model's prediction in human-subject studies.
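The idea of a counterfactual explanation of a confidence score can be sketched as follows. This is a minimal illustration, not the method from the paper: the linear model, its hand-set weights, and the feature names (`income`, `debt`) are all assumptions chosen for the example.

```python
import math

# Hypothetical linear classifier: weights are hand-set for illustration,
# not learned, and the feature names are assumptions, not from the paper.
WEIGHTS = {"income": 1.5, "debt": -2.0}
BIAS = 0.0

def confidence(x):
    """Model's confidence that input x belongs to the positive class."""
    z = sum(WEIGHTS[f] * v for f, v in x.items()) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def counterfactual(x, feature, target=0.5, step=0.05, max_steps=1000):
    """Perturb one feature until confidence drops to `target`.

    The resulting delta is the counterfactual explanation of the
    confidence score: "the model would no longer be confident in this
    prediction if <feature> changed by <delta>"."""
    x_cf = dict(x)
    direction = -1 if WEIGHTS[feature] > 0 else 1  # move against the weight
    for _ in range(max_steps):
        if confidence(x_cf) <= target:
            break
        x_cf[feature] += direction * step
    return x_cf, x_cf[feature] - x[feature]

applicant = {"income": 2.0, "debt": 0.5}  # classified with high confidence
cf, delta = counterfactual(applicant, "income")
print(f"confidence {confidence(applicant):.2f} -> {confidence(cf):.2f} "
      f"if income changes by {delta:+.2f}")
```

The explanation shown to a user would be phrased in terms of the delta, e.g. "the model's confidence would fall to 50% if income were lower by this amount".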
no code implementations • 6 Oct 2021 • Christian Muise, Vaishak Belle, Paolo Felli, Sheila Mcilraith, Tim Miller, Adrian R. Pearce, Liz Sonenberg
Many AI applications involve the interaction of multiple autonomous agents, requiring those agents to reason about their own beliefs, as well as those of other agents.
no code implementations • 3 Feb 2021 • Ronal Singh, Paul Dourish, Piers Howe, Tim Miller, Liz Sonenberg, Eduardo Velloso, Frank Vetere
This paper investigates the prospects of using directive explanations to assist people in achieving recourse against machine learning decisions.
no code implementations • 28 Jan 2020 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere
In this paper, we introduce and evaluate a distal explanation model for model-free reinforcement learning agents that can generate explanations for 'why' and 'why not' questions.
2 code implementations • 27 May 2019 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere
In this paper, we use causal models to derive causal explanations of behaviour of reinforcement learning agents.
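The use of a causal model to answer 'why' and 'why not' questions about an agent's behaviour can be sketched roughly as follows. This is a hand-built toy, not the paper's action-influence model: the causal graph, state variables, and actions are invented for illustration.

```python
# Hypothetical causal graph over an agent's state variables and actions:
# each action maps to the state conditions that cause it. All names here
# are assumptions for the sketch, not from the paper.
causal_graph = {
    "retreat": ["low_health", "enemy_near"],
    "attack": ["enemy_near", "has_weapon"],
}

def why(action, state):
    """Explain an action by the causes in the graph that hold in `state`."""
    causes = [c for c in causal_graph[action] if state.get(c)]
    return f"{action} because " + " and ".join(causes)

def why_not(action, state):
    """Explain a foregone action by the causes that do NOT hold."""
    missing = [c for c in causal_graph[action] if not state.get(c)]
    return f"not {action} because not " + " and not ".join(missing)

state = {"low_health": True, "enemy_near": True, "has_weapon": False}
print(why("retreat", state))   # cites low_health and enemy_near as causes
print(why_not("attack", state))  # cites the missing has_weapon condition
```

A 'why' answer traces the conditions that caused the chosen action; a 'why not' answer contrasts against the conditions that blocked the alternative.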
no code implementations • 5 Mar 2019 • Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere
Explainable Artificial Intelligence (XAI) systems need to include an explanation model to communicate their internal decisions, behaviours and actions to the humans interacting with them.
Explainable Artificial Intelligence (XAI)
no code implementations • 21 Jun 2018 • Prashan Madumal, Tim Miller, Frank Vetere, Liz Sonenberg
We carry out further analysis to identify the relationships between components, and the sequences and cycles that occur in a dialog.
Explainable Artificial Intelligence (XAI)
no code implementations • 2 Dec 2017 • Tim Miller, Piers Howe, Liz Sonenberg
As a result, programmers design software for themselves rather than for their target audience, a phenomenon Cooper calls the 'inmates running the asylum'.
no code implementations • 21 Feb 2016 • Liz Sonenberg, Tim Miller, Adrian Pearce, Paolo Felli, Christian Muise, Frank Dignum
Making a computational agent 'social' has implications for how it perceives itself and the environment in which it is situated, including the ability to recognise the behaviours of others.