Search Results for author: Alun Preece

Found 32 papers, 4 papers with code

Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?

no code implementations • 1 Feb 2024 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece

We hypothesise that this occurs when concept annotations are inaccurate or when it is unclear how input features should relate to concepts.

Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation

1 code implementation • 7 Feb 2023 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece

Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification.
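The snippet above describes the two-stage CBM data flow. As a minimal illustrative sketch only (random weights and invented layer sizes, not the paper's model), the pipeline can be written as:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 4 input features, 3 human-defined concepts, 2 classes.
n_in, n_concepts, n_classes = 4, 3, 2

# Stage 1: raw input -> concept predictions (the interpretable "bottleneck").
W_concept = rng.normal(size=(n_concepts, n_in))
# Stage 2: concept vector -> final class scores.
W_label = rng.normal(size=(n_classes, n_concepts))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def concept_bottleneck(x):
    # Concepts are predicted first; the label head sees ONLY the concept vector.
    concepts = sigmoid(W_concept @ x)  # one value in [0, 1] per concept
    logits = W_label @ concepts        # final classification from concepts alone
    return concepts, logits

x = rng.normal(size=n_in)
concepts, logits = concept_bottleneck(x)
print(concepts.shape, logits.shape)  # (3,) (2,)
```

The key structural point the paper relies on is that the classification depends on the input only through the concept vector, which is what makes the concepts a candidate locus for explanation.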

Negativity Spreads Faster: A Large-Scale Multilingual Twitter Analysis on the Role of Sentiment in Political Communication

1 code implementation • 1 Feb 2022 • Dimosthenis Antypas, Alun Preece, Jose Camacho-Collados

Social media has become extremely influential in policy making in modern societies, especially in the Western world, where platforms such as Twitter allow users to follow politicians, making citizens more involved in political discussion.

Sentiment Analysis

AAAI FSS-21: Artificial Intelligence in Government and Public Sector Proceedings

no code implementations • 10 Dec 2021 • Mihai Boicu, Erik Blasch, Alun Preece

Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Washington, DC, USA, November 4-6, 2021

Guiding Generative Language Models for Data Augmentation in Few-Shot Text Classification

no code implementations • 17 Nov 2021 • Aleksandra Edwards, Asahi Ushio, Jose Camacho-Collados, Hélène de Ribaupierre, Alun Preece

Data augmentation techniques are widely used for enhancing the performance of machine learning models by tackling class imbalance issues and data sparsity.

Active Learning • Data Augmentation +2

Using DeepProbLog to perform Complex Event Processing on an Audio Stream

no code implementations • 15 Oct 2021 • Marc Roig Vilamala, Tianwei Xing, Harrison Taylor, Luis Garcia, Mani Srivastava, Lance Kaplan, Alun Preece, Angelika Kimmig, Federico Cerutti

We also demonstrate that our approach is capable of training even with a dataset that has a moderate proportion of noisy data.

Deriving Disinformation Insights from Geolocalized Twitter Callouts

1 code implementation • 6 Aug 2021 • David Tuxworth, Dimosthenis Antypas, Luis Espinosa-Anke, Jose Camacho-Collados, Alun Preece, David Rogers

In particular, the analysis is centered on Twitter and disinformation for three European languages: English, French and Spanish.

Language Modelling • Specificity +1

A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions

no code implementations • 5 Mar 2021 • Iain Barclay, Harrison Taylor, Alun Preece, Ian Taylor, Dinesh Verma, Geeth de Mel

Increased adoption of artificial intelligence (AI) systems into scientific workflows will result in increasing technical debt as the distance grows between the data scientists and engineers who develop AI system components and the scientists, researchers and other users of those systems.

Go Simple and Pre-Train on Domain-Specific Corpora: On the Role of Training Data for Text Classification

no code implementations • COLING 2020 • Aleksandra Edwards, Jose Camacho-Collados, Hélène de Ribaupierre, Alun Preece

Pre-trained language models provide the foundations for state-of-the-art performance across a wide range of natural language processing tasks, including text classification.

Language Modelling • text-classification +2

AAAI FSS-20: Artificial Intelligence in Government and Public Sector Proceedings

no code implementations • 9 Nov 2020 • Frank Stein, Alun Preece

Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Washington, DC, USA, November 13-14, 2020

An Experimentation Platform for Explainable Coalition Situational Understanding

no code implementations • 27 Oct 2020 • Katie Barrett-Powell, Jack Furby, Liam Hiley, Marc Roig Vilamala, Harrison Taylor, Federico Cerutti, Alun Preece, Tianwei Xing, Luis Garcia, Mani Srivastava, Dave Braines

We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of symbolic and subsymbolic AI/ML approaches for event processing.

BIG-bench Machine Learning • Explainable Artificial Intelligence

A Hybrid Neuro-Symbolic Approach for Complex Event Processing

no code implementations • 7 Sep 2020 • Marc Roig Vilamala, Harrison Taylor, Tianwei Xing, Luis Garcia, Mani Srivastava, Lance Kaplan, Alun Preece, Angelika Kimmig, Federico Cerutti

We demonstrate this by comparing our approach against a pure neural network approach on a dataset based on Urban Sounds 8K.


Explaining Motion Relevance for Activity Recognition in Video Deep Learning Models

no code implementations • 31 Mar 2020 • Liam Hiley, Alun Preece, Yulia Hicks, Supriyo Chakraborty, Prudhvi Gurram, Richard Tomsett

Our results show that the selective relevance method not only provides insight into the role played by motion in the model's decision (in effect, revealing and quantifying the model's spatial bias) but also simplifies the resulting explanations for human consumption.

Activity Recognition

Increasing negotiation performance at the edge of the network

no code implementations • 30 Mar 2020 • Sam Vente, Angelika Kimmig, Alun Preece, Federico Cerutti

In particular, we show our method significantly reduces the number of messages when an agreement is not possible.

The current state of automated negotiation theory: a literature review

no code implementations • 30 Mar 2020 • Sam Vente, Angelika Kimmig, Alun Preece, Federico Cerutti

Automated negotiation can be an efficient method for resolving conflict and redistributing resources in a coalition setting.

Sanity Checks for Saliency Metrics

no code implementations • 29 Nov 2019 • Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece

Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their "fidelity").

AAAI FSS-19: Artificial Intelligence in Government and Public Sector Proceedings

no code implementations • 4 Nov 2019 • Frank Stein, Alun Preece

Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Arlington, Virginia, USA, November 7-8, 2019

Explainable AI for Intelligence Augmentation in Multi-Domain Operations

no code implementations • 16 Oct 2019 • Alun Preece, Dave Braines, Federico Cerutti, Tien Pham

Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distributed among multiple partners.

Decision Making

BMVC 2019: Workshop on Interpretable and Explainable Machine Vision

no code implementations • 16 Sep 2019 • Alun Preece

Proceedings of the BMVC 2019 Workshop on Interpretable and Explainable Machine Vision, Cardiff, UK, September 12, 2019.

Explainable Deep Learning for Video Recognition Tasks: A Framework & Recommendations

no code implementations • 7 Sep 2019 • Liam Hiley, Alun Preece, Yulia Hicks

This paper highlights the need for explainability methods designed with video deep learning models, and by extension spatio-temporal input, in mind: it first illustrates the cutting edge of video deep learning, then notes the scarcity of research into explanations for these methods.

Video Recognition

Discriminating Spatial and Temporal Relevance in Deep Taylor Decompositions for Explainable Activity Recognition

4 code implementations • 5 Aug 2019 • Liam Hiley, Alun Preece, Yulia Hicks, David Marshall, Harrison Taylor

However, using a simple technique that removes motion information, we show that the method is not effective as-is for representing relevance in non-image tasks.

Action Recognition

Quantifying Transparency of Machine Learning Systems through Analysis of Contributions

no code implementations • 8 Jul 2019 • Iain Barclay, Alun Preece, Ian Taylor, Dinesh Verma

Increased adoption and deployment of machine learning (ML) models into business, healthcare and other organisational processes will result in a growing disconnect between the engineers and researchers who developed the models and the models' users and other stakeholders, such as regulators or auditors.

BIG-bench Machine Learning

Deep Q-Learning for Directed Acyclic Graph Generation

no code implementations • 5 Jun 2019 • Laura D'Arcy, Padraig Corcoran, Alun Preece

We present a method to generate directed acyclic graphs (DAGs) using deep reinforcement learning, specifically deep Q-learning.

Graph Generation • Q-Learning +2
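The entry above names the technique only. As a toy illustration of the general idea (not the paper's method), the sketch below uses *tabular* Q-learning where the paper uses a deep Q-network: an agent adds directed edges one at a time, and actions that would introduce a cycle are penalised and rejected, so the constructed graph stays acyclic. The graph size, reward scheme and hyperparameters here are all invented for the example.

```python
import random

random.seed(0)

NODES = range(3)
ACTIONS = [(i, j) for i in NODES for j in NODES if i != j]  # candidate directed edges

def creates_cycle(edges, new_edge):
    """Adding (u, v) creates a cycle iff u is already reachable from v."""
    u, v = new_edge
    stack, seen = [v], set()
    while stack:
        node = stack.pop()
        if node == u:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(b for (a, b) in edges if a == node)
    return False

def step(state, action):
    # Invalid moves (duplicate edge or new cycle) are penalised; graph unchanged.
    if action in state or creates_cycle(state, action):
        return state, -1.0
    return state | {action}, 1.0  # toy reward: +1 per edge successfully added

Q = {}  # tabular stand-in for the paper's deep Q-network
alpha, gamma, eps = 0.5, 0.9, 0.2

for episode in range(500):
    state = frozenset()
    for t in range(4):
        if random.random() < eps:
            action = random.choice(ACTIONS)                              # explore
        else:
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))  # exploit
        nxt, reward = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        q = Q.get((state, action), 0.0)
        Q[(state, action)] = q + alpha * (reward + gamma * best_next - q)
        state = nxt

# Greedy rollout: the learned policy incrementally builds a valid DAG.
state = frozenset()
for t in range(4):
    action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
    state, _ = step(state, action)
print(sorted(state))
```

Because cycle-creating actions never change the state, every graph reachable in this environment is a DAG by construction, which is the same structural guarantee a DAG-generation agent needs to enforce.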

AAAI FSS-18: Artificial Intelligence in Government and Public Sector Proceedings

no code implementations • 14 Oct 2018 • Frank Stein, Alun Preece, Mihai Boicu

Proceedings of the AAAI Fall Symposium on Artificial Intelligence in Government and Public Sector, Arlington, Virginia, USA, October 18-20, 2018

Stakeholders in Explainable AI

no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty

There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.

Hows and Whys of Artificial Intelligence for Public Sector Decisions: Explanation and Evaluation

no code implementations • 28 Sep 2018 • Alun Preece, Rob Ashelford, Harry Armstrong, Dave Braines

Evaluation has always been a key challenge in the development of artificial intelligence (AI) based software, due to the technical complexity of the software artifact and, often, its embedding in complex sociotechnical processes.

Defining the Collective Intelligence Supply Chain

no code implementations • 25 Sep 2018 • Iain Barclay, Alun Preece, Ian Taylor

Organisations are increasingly open to scrutiny, and need to be able to prove that they operate in a fair and ethical way.

Fairness

Uncertainty Aware AI ML: Why and How

no code implementations • 20 Sep 2018 • Lance Kaplan, Federico Cerutti, Murat Sensoy, Alun Preece, Paul Sullivan

This paper argues the need for research to realize uncertainty-aware artificial intelligence and machine learning (AI&ML) systems for decision support by describing a number of motivating scenarios.

BIG-bench Machine Learning

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty

Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.

BIG-bench Machine Learning • Interpretable Machine Learning +1
