Search Results for author: Dave Braines

Found 11 papers, 1 paper with code

Can we Constrain Concept Bottleneck Models to Learn Semantically Meaningful Input Features?

no code implementations • 1 Feb 2024 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece

We hypothesise that this occurs when concept annotations are inaccurate or when it is unclear how input features should relate to concepts.

Towards a Deeper Understanding of Concept Bottleneck Models Through End-to-End Explanation

1 code implementation • 7 Feb 2023 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece

Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification.
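
As an illustration of this two-stage design, here is a minimal sketch of a CBM in PyTorch; it is a generic illustration with assumed layer sizes and names, not the implementation released with this paper.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Sketch of a CBM: inputs -> concept vector -> class prediction."""
    def __init__(self, n_features, n_concepts, n_classes):
        super().__init__()
        # Stage 1: map raw inputs to predictions of human-defined concepts.
        self.input_to_concepts = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        # Stage 2: predict the final class from the concept vector alone.
        self.concepts_to_label = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.input_to_concepts(x)
        concepts = torch.sigmoid(concept_logits)  # concept activations in [0, 1]
        class_logits = self.concepts_to_label(concepts)
        return concepts, class_logits

# Both outputs can be supervised: concepts against concept annotations,
# class_logits against the task label.
model = ConceptBottleneckModel(n_features=32, n_concepts=10, n_classes=3)
concepts, logits = model(torch.randn(4, 32))
```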

An Experimentation Platform for Explainable Coalition Situational Understanding

no code implementations • 27 Oct 2020 • Katie Barrett-Powell, Jack Furby, Liam Hiley, Marc Roig Vilamala, Harrison Taylor, Federico Cerutti, Alun Preece, Tianwei Xing, Luis Garcia, Mani Srivastava, Dave Braines

We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of symbolic and subsymbolic AI/ML approaches for event processing.

BIG-bench Machine Learning, Explainable artificial intelligence

Towards human-agent knowledge fusion (HAKF) in support of distributed coalition teams

no code implementations • 23 Oct 2020 • Dave Braines, Federico Cerutti, Marc Roig Vilamala, Mani Srivastava, Lance Kaplan, Alun Preece, Gavin Pearson

Future coalition operations can be substantially augmented through agile teaming between human and machine agents, but in a coalition context these agents may be unfamiliar to the human users and expected to operate in a broad set of scenarios rather than being narrowly defined for particular purposes.

Explainable AI for Intelligence Augmentation in Multi-Domain Operations

no code implementations • 16 Oct 2019 • Alun Preece, Dave Braines, Federico Cerutti, Tien Pham

Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distributed among multiple partners.

Decision Making

Learning Features of Network Structures Using Graphlets

no code implementations • 13 Dec 2018 • Kun Tu, Jian Li, Don Towsley, Dave Braines, Liam Turner

In this paper, we explore the role of graphlets in network classification for both static and temporal networks.

General Classification, Learning Network Representations, +1
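
To illustrate the idea of graphlet counts as structural features for network classification, here is a minimal, hedged sketch using networkx; the choice of 3-node graphlets (open wedges and triangles), the function name, and the toy graphs are assumptions for illustration only, not the authors' method or code.

```python
from itertools import combinations
import networkx as nx

def graphlet3_features(G):
    """Count 3-node graphlets: open wedges (2 edges) and triangles (3 edges)."""
    wedges, triangles = 0, 0
    for trio in combinations(G.nodes(), 3):
        e = G.subgraph(trio).number_of_edges()
        if e == 2:
            wedges += 1
        elif e == 3:
            triangles += 1
    return [wedges, triangles]

# Two toy static networks with very different local structure yield
# distinguishable feature vectors that a downstream classifier could use.
ring = nx.cycle_graph(8)        # no triangles, only wedges
clique = nx.complete_graph(8)   # every 3-node subset is a triangle
print(graphlet3_features(ring))    # [8, 0]
print(graphlet3_features(clique))  # [0, 56]
```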

Stakeholders in Explainable AI

no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty

There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.

Hows and Whys of Artificial Intelligence for Public Sector Decisions: Explanation and Evaluation

no code implementations • 28 Sep 2018 • Alun Preece, Rob Ashelford, Harry Armstrong, Dave Braines

Evaluation has always been a key challenge in the development of artificial intelligence (AI) based software, due to the technical complexity of the software artifact and, often, its embedding in complex sociotechnical processes.

Network Classification in Temporal Networks Using Motifs

no code implementations • 10 Jul 2018 • Kun Tu, Jian Li, Don Towsley, Dave Braines, Liam D. Turner

Network classification has a variety of applications, such as detecting communities within networks and finding similarities between those representing different aspects of the real world.

Classification, General Classification

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty

Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.

BIG-bench Machine Learning, Interpretable Machine Learning, +1
