no code implementations • 1 Feb 2024 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece
We hypothesise that this occurs when concept annotations are inaccurate or when it is unclear how input features should relate to concepts.
1 code implementation • 7 Feb 2023 • Jack Furby, Daniel Cunnington, Dave Braines, Alun Preece
Concept Bottleneck Models (CBMs) first map raw input(s) to a vector of human-defined concepts, before using this vector to predict a final classification.
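A minimal sketch of the two-stage structure described above, assuming a PyTorch setup; the layer sizes, input dimension, and names (ConceptBottleneckModel, N_CONCEPTS, N_CLASSES) are illustrative placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

N_CONCEPTS = 10   # number of human-defined concepts (hypothetical)
N_CLASSES = 5     # number of output classes (hypothetical)

class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int = 64):
        super().__init__()
        # Stage 1: map raw input features to a vector of concept predictions.
        self.input_to_concepts = nn.Sequential(
            nn.Linear(input_dim, 32), nn.ReLU(), nn.Linear(32, N_CONCEPTS)
        )
        # Stage 2: predict the final class from the concept vector alone.
        self.concepts_to_label = nn.Linear(N_CONCEPTS, N_CLASSES)

    def forward(self, x):
        concepts = torch.sigmoid(self.input_to_concepts(x))  # concept activations
        label_logits = self.concepts_to_label(concepts)      # final classification
        return concepts, label_logits
```

Because the final prediction depends only on the concept vector, the intermediate concepts can be inspected or intervened on at test time, which is the property the CBM design is intended to provide.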
no code implementations • 27 Oct 2020 • Katie Barrett-Powell, Jack Furby, Liam Hiley, Marc Roig Vilamala, Harrison Taylor, Federico Cerutti, Alun Preece, Tianwei Xing, Luis Garcia, Mani Srivastava, Dave Braines
We present an experimentation platform for coalition situational understanding research that highlights capabilities in explainable artificial intelligence/machine learning (AI/ML) and integration of symbolic and subsymbolic AI/ML approaches for event processing.
BIG-bench Machine Learning • Explainable Artificial Intelligence
no code implementations • 23 Oct 2020 • Dave Braines, Federico Cerutti, Marc Roig Vilamala, Mani Srivastava, Lance Kaplan, Alun Preece, Gavin Pearson
Future coalition operations can be substantially augmented through agile teaming between human and machine agents, but in a coalition context these agents may be unfamiliar to the human users and expected to operate in a broad set of scenarios rather than being narrowly defined for particular purposes.
no code implementations • 16 Oct 2019 • Alun Preece, Dave Braines, Federico Cerutti, Tien Pham
Central to the concept of multi-domain operations (MDO) is the utilization of an intelligence, surveillance, and reconnaissance (ISR) network consisting of overlapping systems of remote and autonomous sensors, and human intelligence, distributed among multiple partners.
no code implementations • 13 Dec 2018 • Kun Tu, Jian Li, Don Towsley, Dave Braines, Liam Turner
In this paper, we explore the role of graphlets in network classification for both static and temporal networks.
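A rough sketch of the general idea of using small graphlet counts as features for classifying whole networks, assuming networkx and scikit-learn; the specific graphlets counted (edges, wedges, triangles), the normalisation, and the classifier are illustrative assumptions, not the method from the paper.

```python
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def graphlet_features(G):
    """Count a few 2- and 3-node graphlets and normalise by node count."""
    n = G.number_of_nodes()
    edges = G.number_of_edges()
    triangles = sum(nx.triangles(G).values()) // 3
    # Wedges (open triads): length-2 paths centred on a node, minus closed triangles.
    wedges = sum(d * (d - 1) // 2 for _, d in G.degree()) - 3 * triangles
    return np.array([edges, wedges, triangles]) / max(n, 1)

# Hypothetical usage: each network becomes one feature vector with a class label.
graphs = [nx.erdos_renyi_graph(30, 0.1), nx.barabasi_albert_graph(30, 2)]
labels = [0, 1]
X = np.vstack([graphlet_features(G) for G in graphs])
clf = RandomForestClassifier().fit(X, labels)
```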
no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.
no code implementations • 28 Sep 2018 • Alun Preece, Rob Ashelford, Harry Armstrong, Dave Braines
Evaluation has always been a key challenge in the development of artificial intelligence (AI) based software, due to the technical complexity of the software artifact and, often, its embedding in complex sociotechnical processes.
no code implementations • 10 Jul 2018 • Kun Tu, Jian Li, Don Towsley, Dave Braines, Liam D. Turner
Network classification has a variety of applications, such as detecting communities within networks and finding similarities between networks that represent different aspects of the real world.
no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty
Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask whether the system is interpretable, but to whom it is interpretable.
BIG-bench Machine Learning • Interpretable Machine Learning +1