Search Results for author: Kacper Sokol

Found 23 papers, 12 papers with code

Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse

no code implementations · 8 Sep 2023 · Edward A. Small, Jeffrey N. Clark, Christopher J. McWilliams, Kacper Sokol, Jeffrey Chan, Flora D. Salim, Raul Santos-Rodriguez

Counterfactuals operationalised through algorithmic recourse have become a powerful tool to make artificial intelligence systems explainable.

Tasks: counterfactual
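The idea of sequential algorithmic recourse can be illustrated with a minimal sketch; this toy greedy search over a hypothetical linear classifier is NOT the paper's locally-guided algorithm, only the general pattern of reaching a counterfactual through a series of small feature edits.

```python
# Hypothetical black-box: a linear scorer with hand-picked weights.
W, B = (0.6, -0.4, 0.8), -0.5

def score(x):
    return sum(w * v for w, v in zip(W, x)) + B

def predict(x):
    return 1 if score(x) > 0 else 0

def greedy_recourse(x, step=0.1, max_steps=200):
    """Sequentially apply the single-feature nudge that raises the
    score most, until the prediction flips (a crude form of recourse)."""
    x = list(x)
    for _ in range(max_steps):
        if predict(x) == 1:
            break
        best_move, best_score = None, score(x)
        for i in range(len(x)):
            for d in (-step, step):
                cand = x[:]
                cand[i] += d
                if score(cand) > best_score:
                    best_move, best_score = (i, d), score(cand)
        if best_move is None:
            break  # no single edit improves the score any further
        x[best_move[0]] += best_move[1]
    return x
```

Starting from `[0, 0, 0]` (predicted 0), the search keeps increasing the highest-weight feature until the decision flips; each intermediate state is itself a small, actionable step.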

Navigating Explanatory Multiverse Through Counterfactual Path Geometry

1 code implementation · 5 Jun 2023 · Kacper Sokol, Edward Small, Yueqing Xuan

Counterfactual explanations are the de facto standard when tasked with interpreting decisions of (opaque) predictive models.

Tasks: Attribute, counterfactual (+1 more)

(Un)reasonable Allure of Ante-hoc Interpretability for High-stakes Domains: Transparency Is Necessary but Insufficient for Comprehensibility

no code implementations · 4 Jun 2023 · Kacper Sokol, Julia E. Vogt

Ante-hoc interpretability has become the holy grail of explainable artificial intelligence for high-stakes domains such as healthcare; however, this notion is elusive, lacks a widely-accepted definition and depends on the operational context.

Tasks: Explainable artificial intelligence, Navigate

Equalised Odds is not Equal Individual Odds: Post-processing for Group and Individual Fairness

1 code implementation · 19 Apr 2023 · Edward A. Small, Kacper Sokol, Daniel Manning, Flora D. Salim, Jeffrey Chan

Group fairness is achieved by equalising prediction distributions between protected sub-populations; individual fairness requires treating similar individuals alike.

Tasks: Fairness
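The group-level notion in the abstract can be made concrete with a small illustrative check; this is not the paper's post-processing method, just the standard equalised-odds criterion on hypothetical data: both protected groups should share the same true- and false-positive rates.

```python
def tpr_fpr(y_true, y_pred):
    """True- and false-positive rates of binary predictions."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg

def equalised_odds_gap(a, b):
    """Largest discrepancy between two groups' TPR and FPR."""
    (tpr_a, fpr_a), (tpr_b, fpr_b) = tpr_fpr(*a), tpr_fpr(*b)
    return max(abs(tpr_a - tpr_b), abs(fpr_a - fpr_b))

# hypothetical (labels, predictions) for two protected sub-populations
group_a = ([1, 1, 0, 0], [1, 0, 1, 0])
group_b = ([1, 1, 0, 0], [0, 1, 0, 1])
```

Here the gap is zero, so equalised odds holds at the group level, yet individuals with near-identical features may still receive different predictions; that is exactly the group-versus-individual tension the paper addresses.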

More Is Less: When Do Recommenders Underperform for Data-rich Users?

no code implementations · 15 Apr 2023 · Yueqing Xuan, Kacper Sokol, Jeffrey Chan, Mark Sanderson

Users of recommender systems tend to differ in their level of interaction with these algorithms, which may affect the quality of recommendations they receive and lead to undesirable performance disparity.

Tasks: Recommendation Systems

Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication

no code implementations · 7 Feb 2023 · Bernard Keenan, Kacper Sokol

Over the past decade explainable artificial intelligence has evolved from a predominantly technical discipline into a field that is deeply intertwined with social sciences.

Tasks: counterfactual, Explainable artificial intelligence

What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components

no code implementations · 8 Sep 2022 · Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and help to hold them accountable.

Tasks: Explanation Generation

Simply Logical -- Intelligent Reasoning by Example (Fully Interactive Online Edition)

1 code implementation · 14 Aug 2022 · Peter Flach, Kacper Sokol

"Simply Logical -- Intelligent Reasoning by Example" by Peter Flach was first published by John Wiley in 1994.

How Robust is your Fair Model? Exploring the Robustness of Diverse Fairness Strategies

1 code implementation · 11 Jul 2022 · Edward Small, Wei Shao, Zeliang Zhang, Peihan Liu, Jeffrey Chan, Kacper Sokol, Flora Salim

Recent studies have shown that robustness (the ability of a model to perform well on unseen data) plays a significant role in the type of strategy that should be used when approaching a new problem and, hence, measuring the robustness of these strategies has become a fundamental problem.

Tasks: Decision Making, Fairness (+1 more)

Cross-model Fairness: Empirical Study of Fairness and Ethics Under Model Multiplicity

2 code implementations · 14 Mar 2022 · Kacper Sokol, Meelis Kull, Jeffrey Chan, Flora Dilys Salim

While data-driven predictive models are a strictly technological construct, they may operate within a social context in which benign engineering choices entail implicit, indirect and unexpected real-life consequences.

Tasks: Ethics, Fairness

Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence

no code implementations · 29 Dec 2021 · Kacper Sokol, Peter Flach

This approach allows us to define explainability as (logical) reasoning applied to transparent insights (into, possibly black-box, predictive systems) interpreted under background knowledge and placed within a specific context -- a process that engenders understanding in a selected group of explainees.

Tasks: BIG-bench Machine Learning, Explainable artificial intelligence (+3 more)

You Only Write Thrice: Creating Documents, Computational Notebooks and Presentations From a Single Source

1 code implementation · 2 Jul 2021 · Kacper Sokol, Peter Flach

We offer a proof-of-concept workflow that composes Jupyter Book (an online document), Jupyter Notebook (a computational narrative) and reveal.js slides from a single markdown source file.

Tasks: Management

Interpretable Representations in Explainable AI: From Theory to Practice

1 code implementation · 16 Aug 2020 · Kacper Sokol, Peter Flach

Interpretable representations are the backbone of many explainers that target black-box predictive systems based on artificial intelligence and machine learning algorithms.
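A common construction of this kind, sketched here for text, mirrors (but is not taken from) the paper: the interpretable representation is a binary vector over the explained instance's tokens, where 1 keeps a token and 0 removes it, and explanations are phrased in terms of these bits.

```python
# Hypothetical instance being explained.
tokens = "this model is very reliable".split()

def to_raw(z):
    """Map an interpretable binary vector back to the raw text domain
    by keeping only the tokens whose bit is switched on."""
    return " ".join(t for t, keep in zip(tokens, z) if keep)
```

For example, `to_raw([1, 1, 0, 1, 1])` removes the third token; probing the black box on such perturbed instances reveals which bits, hence which tokens, drive its prediction.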

LIMEtree: Consistent and Faithful Surrogate Explanations of Multiple Classes

1 code implementation · 4 May 2020 · Kacper Sokol, Peter Flach

Explainable machine learning provides tools to better understand predictive models and their decisions, but many such methods are limited to producing insights with respect to a single class.

Tasks: counterfactual, Image Classification (+3 more)
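The benefit of a single surrogate over all classes can be sketched with a toy example; this one-split regression stump is not LIMEtree itself, but it shows the principle: because one model predicts the full probability vector, its per-class explanations share the same structure and cannot contradict one another.

```python
def black_box(x):
    """Toy probabilistic classifier: one feature, three classes."""
    if x < 0.3:
        return [0.8, 0.1, 0.1]
    if x < 0.7:
        return [0.1, 0.8, 0.1]
    return [0.1, 0.1, 0.8]

def fit_stump(xs, ys):
    """Fit a single-split regression stump that predicts the whole
    probability vector, minimising squared error over all classes."""
    best = None
    for t in xs:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        if not left or not right:
            continue
        mean_l = [sum(c) / len(left) for c in zip(*left)]
        mean_r = [sum(c) / len(right) for c in zip(*right)]
        err = sum((v - mean_l[k]) ** 2 for y in left for k, v in enumerate(y))
        err += sum((v - mean_r[k]) ** 2 for y in right for k, v in enumerate(y))
        if best is None or err < best[0]:
            best = (err, t, mean_l, mean_r)
    _, t, mean_l, mean_r = best
    return lambda x: mean_l if x < t else mean_r

xs = [i / 10 for i in range(10)]
surrogate = fit_stump(xs, [black_box(x) for x in xs])
```

Since both leaves average genuine probability vectors, the surrogate's class scores always sum to one, a consistency that independently fitted per-class surrogates do not guarantee.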

One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency

no code implementations · 27 Jan 2020 · Kacper Sokol, Peter Flach

We address this challenge in this paper by discussing the promises of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations -- a state-of-the-art approach to Interpretable Machine Learning.

Tasks: BIG-bench Machine Learning, counterfactual (+1 more)

Explainability Fact Sheets: A Framework for Systematic Assessment of Explainable Approaches

no code implementations · 11 Dec 2019 · Kacper Sokol, Peter Flach

When used as a Work Sheet, our taxonomy can guide the development of new explainability approaches by aiding in their critical evaluation along the five proposed dimensions.

Tasks: Explainable artificial intelligence

bLIMEy: Surrogate Prediction Explanations Beyond LIME

1 code implementation · 29 Oct 2019 · Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez, Peter Flach

Surrogate explainers of black-box machine learning predictions are of paramount importance in the field of eXplainable Artificial Intelligence since they can be applied to any type of data (images, text and tabular), are model-agnostic and are post-hoc (i.e., can be retrofitted).

Tasks: Explainable artificial intelligence
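The generic surrogate recipe that bLIMEy modularises (sample around the explained instance, weight samples by proximity, fit a simple interpretable model) can be sketched as follows; this is a minimal illustration of that recipe, not the bLIMEy API.

```python
import math
import random

def black_box(x):
    """Opaque model to be explained: a hard threshold at 0.5."""
    return 1.0 if x > 0.5 else 0.0

def local_surrogate(x0, n=200, width=0.3, seed=0):
    """Sample around x0, weight by a Gaussian proximity kernel, and fit
    a weighted least-squares line y ~ slope * x + intercept."""
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n)]
    ys = [black_box(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    cov = sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
    var = sum(w * (x - mx) ** 2 for w, x in zip(ws, xs))
    slope = cov / var
    return slope, my - slope * mx

slope, intercept = local_surrogate(0.5)
```

The fitted slope is positive around the decision boundary, telling us the feature pushes the prediction towards the positive class there; swapping any of the three stages (sampler, kernel, interpretable model) yields a different surrogate explainer, which is precisely the modularity the paper advocates.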

FACE: Feasible and Actionable Counterfactual Explanations

1 code implementation · 20 Sep 2019 · Rafael Poyiadzi, Kacper Sokol, Raul Santos-Rodriguez, Tijl De Bie, Peter Flach

First, a counterfactual example generated by the state-of-the-art systems is not necessarily representative of the underlying data distribution, and may therefore prescribe unachievable goals (e.g., an unsuccessful life insurance applicant with a severe disability may be advised to do more sports).

Tasks: counterfactual
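The core idea of feasibility can be sketched with a toy version (this is not the FACE implementation): connect only nearby observed data points into a graph, then run a shortest-path search, so the recommended route to the counterfactual never leaves the data manifold.

```python
import heapq
import math

# Hypothetical dataset and classifier; label 1 is the desired outcome.
points = [(0.0, 0.0), (0.2, 0.1), (0.4, 0.2), (0.6, 0.3), (0.8, 0.4)]

def label(p):
    return 1 if p[0] > 0.7 else 0

def feasible_counterfactual(start, radius=0.3):
    """Dijkstra over a graph linking only points within `radius`; the
    first desired-class point reached is the nearest feasible
    counterfactual, together with the path of real data points to it."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    done = set()
    while heap:
        d, i = heapq.heappop(heap)
        if i in done:
            continue
        done.add(i)
        if label(points[i]) == 1:  # reached the desired class
            path = [i]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for j, q in enumerate(points):
            step = math.dist(points[i], q)
            if j != i and step <= radius and d + step < dist.get(j, float("inf")):
                dist[j] = d + step
                prev[j] = i
                heapq.heappush(heap, (d + step, j))
    return None
```

Starting from the first point, the search walks the chain of neighbouring observations rather than jumping straight to the far side of the decision boundary, so every intermediate step is itself a realistic state.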

FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency

3 code implementations · 11 Sep 2019 · Kacper Sokol, Raul Santos-Rodriguez, Peter Flach

Today, artificial intelligence systems driven by machine learning algorithms can be in a position to take important, and sometimes legally binding, decisions about our everyday lives.

Tasks: BIG-bench Machine Learning, Fairness (+1 more)

HyperStream: a Workflow Engine for Streaming Data

1 code implementation · 7 Aug 2019 · Tom Diethe, Meelis Kull, Niall Twomey, Kacper Sokol, Hao Song, Miquel Perello-Nieto, Emma Tonkin, Peter Flach

This paper describes HyperStream, a large-scale, flexible and robust software package, written in the Python language, for processing streaming data with workflow creation capabilities.

Tasks: BIG-bench Machine Learning
