no code implementations • 1 Feb 2024 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece
We hypothesise that this occurs when concept annotations are inaccurate or when it is unclear how input features should relate to concepts.
1 code implementation • 7 Feb 2023 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece
Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification.
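The two-stage structure described above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the linear concept and label maps and all weights below are toy placeholders.

```python
# Minimal sketch of a Concept Bottleneck Model (CBM): raw inputs are first
# mapped to a vector of human-defined concepts, and the final label is
# predicted from that concept vector alone. Toy linear maps; the weights
# are illustrative, not taken from the paper.

def predict_concepts(x, concept_weights):
    # Each concept score is a weighted sum of the input features.
    return [sum(w * xi for w, xi in zip(row, x)) for row in concept_weights]

def predict_label(concepts, label_weights):
    # The classifier sees only the concept vector, never the raw input.
    scores = [sum(w * c for w, c in zip(row, concepts)) for row in label_weights]
    return scores.index(max(scores))

# Toy example: 3 input features -> 2 concepts -> 2 classes.
x = [1.0, 0.0, 2.0]
concept_weights = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
label_weights = [[1.0, -1.0], [-1.0, 1.0]]
concepts = predict_concepts(x, concept_weights)
label = predict_label(concepts, label_weights)
```

Because the label depends only on the concept vector, an inaccurate concept annotation propagates directly into the final prediction, which is the failure mode the hypothesis above points at.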
1 code implementation • 1 Feb 2022 • Dimosthenis Antypas, Alun Preece, Jose Camacho-Collados
Social media has become extremely influential when it comes to policy making in modern societies, especially in the western world, where platforms such as Twitter allow users to follow politicians, thus making citizens more involved in political discussion.
no code implementations • 10 Dec 2021 • Mihai Boicu, Erik Blasch, Alun Preece
Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Washington, DC, USA, November 4-6, 2021
no code implementations • 17 Nov 2021 • Aleksandra Edwards, Asahi Ushio, Jose Camacho-Collados, Hélène de Ribaupierre, Alun Preece
Data augmentation techniques are widely used for enhancing the performance of machine learning models by tackling class imbalance issues and data sparsity.
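One of the simplest augmentation strategies for the class-imbalance problem mentioned above is random oversampling of minority-class examples. The sketch below is generic and illustrative, not the method from the paper; the dataset and labels are toy placeholders.

```python
import random

# Random oversampling: duplicate minority-class examples (sampled with
# replacement) until every class matches the size of the largest class.

def oversample(examples, labels, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for x, y in zip(examples, labels):
        by_class.setdefault(y, []).append(x)
    target = max(len(v) for v in by_class.values())
    out = []
    for y, xs in sorted(by_class.items()):
        extra = [rng.choice(xs) for _ in range(target - len(xs))]
        out.extend((x, y) for x in xs + extra)
    return out

# Toy imbalanced dataset: three examples of class 0, one of class 1.
data = oversample(["a", "b", "c", "d"], [0, 0, 0, 1])
```

Text-specific techniques (e.g. synonym replacement or generation with language models) follow the same pattern: produce additional minority-class examples rather than discarding majority-class ones.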
no code implementations • 15 Oct 2021 • Marc Roig Vilamala, Tianwei Xing, Harrison Taylor, Luis Garcia, Mani Srivastava, Lance Kaplan, Alun Preece, Angelika Kimmig, Federico Cerutti
We also demonstrate that our approach is capable of training even with a dataset that has a moderate proportion of noisy data.
1 code implementation • 6 Aug 2021 • David Tuxworth, Dimosthenis Antypas, Luis Espinosa-Anke, Jose Camacho-Collados, Alun Preece, David Rogers
In particular, the analysis is centered on Twitter and disinformation for three European languages: English, French and Spanish.
no code implementations • ACL 2021 • Dimosthenis Antypas, Jose Camacho-Collados, Alun Preece, David Rogers
Social media is often used by individuals and organisations as a platform to spread misinformation.
no code implementations • 13 May 2021 • Iain Barclay, Alun Preece, Ian Taylor, Swapna K. Radha, Jarek Nabrzyski
Adopting shared data resources requires scientists to place trust in the originators of the data.
no code implementations • 5 Mar 2021 • Iain Barclay, Harrison Taylor, Alun Preece, Ian Taylor, Dinesh Verma, Geeth de Mel
Increased adoption of artificial intelligence (AI) systems into scientific workflows will result in increasing technical debt as the distance grows between the data scientists and engineers who develop AI system components and the scientists, researchers and other users of those systems.
no code implementations • COLING 2020 • Aleksandra Edwards, Jose Camacho-Collados, H{\'e}l{\`e}ne De Ribaupierre, Alun Preece
Pre-trained language models provide the foundations for state-of-the-art performance across a wide range of natural language processing tasks, including text classification.
no code implementations • 9 Nov 2020 • Frank Stein, Alun Preece
Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Washington, DC, USA, November 13-14, 2020
no code implementations • 27 Oct 2020 • Katie Barrett-Powell, Jack Furby, Liam Hiley, Marc Roig Vilamala, Harrison Taylor, Federico Cerutti, Alun Preece, Tianwei Xing, Luis Garcia, Mani Srivastava, Dave Braines
We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of symbolic and subsymbolic AI/ML approaches for event processing.
no code implementations • 27 Oct 2020 • Aleksandra Edwards, David Rogers, Jose Camacho-Collados, Hélène de Ribaupierre, Alun Preece
The task of text and sentence classification is associated with the need for large amounts of labelled training data.
no code implementations • 7 Sep 2020 • Marc Roig Vilamala, Harrison Taylor, Tianwei Xing, Luis Garcia, Mani Srivastava, Lance Kaplan, Alun Preece, Angelika Kimmig, Federico Cerutti
We demonstrate this by comparing our approach against a pure neural network approach on a dataset based on Urban Sounds 8K.
no code implementations • 31 Mar 2020 • Liam Hiley, Alun Preece, Yulia Hicks, Supriyo Chakraborty, Prudhvi Gurram, Richard Tomsett
Our results show that the selective relevance method can not only provide insight on the role played by motion in the model's decision -- in effect, revealing and quantifying the model's spatial bias -- but the method also simplifies the resulting explanations for human consumption.
no code implementations • 30 Mar 2020 • Sam Vente, Angelika Kimmig, Alun Preece, Federico Cerutti
In particular, we show our method significantly reduces the number of messages when an agreement is not possible.
no code implementations • 30 Mar 2020 • Sam Vente, Angelika Kimmig, Alun Preece, Federico Cerutti
Automated negotiation can be an efficient method for resolving conflict and redistributing resources in a coalition setting.
no code implementations • 29 Nov 2019 • Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece
Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their "fidelity").
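A common way to probe fidelity is a deletion-style test: occlude the inputs a saliency map claims are most relevant and watch how quickly the model's score drops. The sketch below is an illustrative stand-in, not the paper's evaluation protocol; the "model" and its weights are toy placeholders.

```python
# Deletion-style fidelity check: remove features in order of decreasing
# claimed saliency and record the model score after each removal. A
# faithful saliency map should produce a steep early drop in the curve.

def model_score(x):
    # Toy model whose score depends only on features 0 and 2.
    return 2.0 * x[0] + 1.0 * x[2]

def deletion_curve(x, saliency, score_fn):
    # Occlude features one at a time, most-salient first.
    order = sorted(range(len(x)), key=lambda i: -saliency[i])
    x = list(x)
    curve = [score_fn(x)]
    for i in order:
        x[i] = 0.0  # occlude the feature
        curve.append(score_fn(x))
    return curve

x = [1.0, 1.0, 1.0]
faithful_saliency = [2.0, 0.0, 1.0]  # matches the toy model's true weights
curve = deletion_curve(x, faithful_saliency, model_score)
```

Here the faithful map drops the score from 3.0 to 0.0 in two steps; a map that ranked the irrelevant feature first would yield a flatter, slower-decaying curve.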
no code implementations • 4 Nov 2019 • Frank Stein, Alun Preece
Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Arlington, Virginia, USA, November 7-8, 2019
no code implementations • 16 Oct 2019 • Alun Preece, Dave Braines, Federico Cerutti, Tien Pham
Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distributed among multiple partners.
no code implementations • 16 Sep 2019 • Alun Preece
Proceedings of the BMVC 2019 Workshop on Interpretable and Explainable Machine Vision, Cardiff, UK, September 12, 2019.
no code implementations • 7 Sep 2019 • Liam Hiley, Alun Preece, Yulia Hicks
This paper seeks to highlight the need for explainability methods designed with video deep learning models, and by association spatio-temporal inputs, in mind: it first illustrates the cutting edge of video deep learning, and then notes the scarcity of research into explanations for these methods.
4 code implementations • 5 Aug 2019 • Liam Hiley, Alun Preece, Yulia Hicks, David Marshall, Harrison Taylor
However, by exploiting a simple technique that removes motion information, we show that it is not effective as-is for representing relevance in non-image tasks.
no code implementations • 8 Jul 2019 • Iain Barclay, Alun Preece, Ian Taylor, Dinesh Verma
Increased adoption and deployment of machine learning (ML) models into business, healthcare and other organisational processes will result in a growing disconnect between the engineers and researchers who developed the models and the models' users and other stakeholders, such as regulators or auditors.
no code implementations • 5 Jun 2019 • Laura D'Arcy, Padraig Corcoran, Alun Preece
We present a method to generate directed acyclic graphs (DAGs) using deep reinforcement learning, specifically deep Q-learning.
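The core of sequential DAG generation is an action space of edge additions with cycle-creating actions masked out. The sketch below is a hedged simplification: a random policy stands in for the paper's deep Q-network, and the acyclicity mask is the part actually illustrated.

```python
import random

# Sequential DAG construction: the state is the current edge set, actions
# add a directed edge, and any edge that would create a cycle is masked.
# A random policy stands in here for a learned (deep) Q-function.

def creates_cycle(edges, u, v):
    # Adding u->v creates a cycle iff v already reaches u.
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(w for (p, w) in edges if p == node)
    return False

def generate_dag(n_nodes, n_edges, seed=0):
    rng = random.Random(seed)
    edges = set()
    candidates = [(u, v) for u in range(n_nodes)
                  for v in range(n_nodes) if u != v]
    while len(edges) < n_edges and candidates:
        u, v = rng.choice(candidates)   # policy: pick a candidate action
        candidates.remove((u, v))
        if not creates_cycle(edges, u, v):  # mask illegal actions
            edges.add((u, v))
    return edges

dag = generate_dag(5, 6)
```

In the deep Q-learning setting, the random `rng.choice` would be replaced by an argmax over Q-values for the legal actions, with a reward shaped by the desired DAG properties.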
no code implementations • 14 Oct 2018 • Frank Stein, Alun Preece, Mihai Boicu
Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Arlington, Virginia, USA, October 18-20, 2018
no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.
no code implementations • 28 Sep 2018 • Alun Preece, Rob Ashelford, Harry Armstrong, Dave Braines
Evaluation has always been a key challenge in the development of artificial intelligence (AI) based software, due to the technical complexity of the software artifact and, often, its embedding in complex sociotechnical processes.
no code implementations • 25 Sep 2018 • Iain Barclay, Alun Preece, Ian Taylor
Organisations are increasingly open to scrutiny, and need to be able to prove that they operate in a fair and ethical way.
no code implementations • 20 Sep 2018 • Lance Kaplan, Federico Cerutti, Murat Sensoy, Alun Preece, Paul Sullivan
This paper argues the need for research to realize uncertainty-aware artificial intelligence and machine learning (AI&ML) systems for decision support by describing a number of motivating scenarios.
no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.