no code implementations • 15 Oct 2020 • Charvi Rastogi, Yunfeng Zhang, Dennis Wei, Kush R. Varshney, Amit Dhurandhar, Richard Tomsett
We then conduct a second user experiment, which shows that our time allocation strategy with explanation can effectively de-anchor the human and improve collaborative performance when the AI model has low confidence and is incorrect.
no code implementations • 31 Mar 2020 • Liam Hiley, Alun Preece, Yulia Hicks, Supriyo Chakraborty, Prudhvi Gurram, Richard Tomsett
Our results show that the selective relevance method not only provides insight into the role played by motion in the model's decision -- in effect, revealing and quantifying the model's spatial bias -- but also simplifies the resulting explanations for human consumption.
no code implementations • 29 Nov 2019 • Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece
Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their "fidelity").
no code implementations • 3 Sep 2019 • David Mott, Richard Tomsett
The Lucid methods described by Olah et al. (2018) provide a way to inspect the inner workings of neural networks trained on image classification tasks using feature visualization.
no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.
no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.