Search Results for author: Eirini Ntoutsi

Found 35 papers, 14 papers with code

Sum of Group Error Differences: A Critical Examination of Bias Evaluation in Biometric Verification and a Dual-Metric Measure

no code implementations • 23 Apr 2024 • Alaa Elobaid, Nathan Ramoly, Lara Younes, Symeon Papadopoulos, Eirini Ntoutsi, Ioannis Kompatsiaris

Biometric Verification (BV) systems often exhibit accuracy disparities across different demographic groups, leading to biases in BV applications.

Fairness

Effector: A Python package for regional explanations

1 code implementation • 3 Apr 2024 • Vasilis Gkolemis, Christos Diou, Eirini Ntoutsi, Theodore Dalamagas, Bernd Bischl, Julia Herbinger, Giuseppe Casalicchio

Effector implements well-established global effect methods, assesses the heterogeneity of each method and, based on that, provides regional effects.
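
A minimal numpy sketch of the idea in the sentence above, assuming a toy model whose feature effect flips sign across subregions: the global effect averages out, its heterogeneity is large, and splitting on a second feature recovers homogeneous regional effects. Names and the model are illustrative and do not reproduce Effector's actual API.

```python
# Illustrative sketch of regional effects: a global effect can hide
# heterogeneity that a split on another feature resolves.
# The toy model and helper names are assumptions, not Effector's API.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(1000, 2))

def model(X):
    # Toy model: the effect of x0 flips sign depending on x1.
    return np.where(X[:, 1] > 0, X[:, 0], -X[:, 0])

def ice_slopes(X, feature, eps=1e-2):
    """Per-instance local effect of `feature` via central finite differences."""
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, feature] += eps
    X_lo[:, feature] -= eps
    return (model(X_hi) - model(X_lo)) / (2 * eps)

slopes = ice_slopes(X, feature=0)
print("global effect of x0:", round(slopes.mean(), 2))   # ~0: effects cancel out
print("heterogeneity (std):", round(slopes.std(), 2))    # large: hidden structure

# Regional effects: split on x1 and recompute per region.
for name, mask in [("x1 <= 0", X[:, 1] <= 0), ("x1 > 0", X[:, 1] > 0)]:
    s = slopes[mask]
    print(f"region {name}: effect={s.mean():.2f}, heterogeneity={s.std():.2f}")
```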

Towards Cohesion-Fairness Harmony: Contrastive Regularization in Individual Fair Graph Clustering

1 code implementation • 16 Feb 2024 • Siamak Ghodsi, Seyed Amjad Seyedi, Eirini Ntoutsi

Conventional fair graph clustering methods face two primary challenges: i) They prioritize balanced clusters at the expense of cluster cohesion by imposing rigid constraints, ii) Existing methods of both individual and group-level fairness in graph partitioning mostly rely on eigen decompositions and thus, generally lack interpretability.

Clustering • Fairness +2

FairBranch: Fairness Conflict Correction on Task-group Branches for Fair Multi-Task Learning

1 code implementation • 20 Oct 2023 • Arjun Roy, Christos Koutlis, Symeon Papadopoulos, Eirini Ntoutsi

The generalization capacity of Multi-Task Learning (MTL) becomes limited when unrelated tasks negatively impact each other by updating shared parameters with conflicting gradients, resulting in negative transfer and a reduction in MTL accuracy compared to single-task learning (STL).

Fairness • Multi-Task Learning
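
A generic illustration of the gradient-conflict notion mentioned above: two tasks conflict on shared parameters when their gradients have negative cosine similarity. This only sketches the conflict check that motivates branching; it is not FairBranch's grouping or correction procedure.

```python
# Generic check for conflicting task gradients in multi-task learning:
# tasks conflict on shared parameters when their gradients point in opposing
# directions (negative cosine similarity). Not FairBranch's algorithm,
# only the notion of conflict it builds on.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

grad_task_a = np.array([0.8, -0.1, 0.3])   # gradient of task A w.r.t. shared params
grad_task_b = np.array([-0.7, 0.2, -0.2])  # gradient of task B w.r.t. shared params

sim = cosine(grad_task_a, grad_task_b)
print(f"cosine similarity: {sim:.2f}")
if sim < 0:
    print("conflicting gradients -> candidates for separate branches")
```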

RHALE: Robust and Heterogeneity-aware Accumulated Local Effects

1 code implementation • 20 Sep 2023 • Vasilis Gkolemis, Theodore Dalamagas, Eirini Ntoutsi, Christos Diou

RHALE quantifies the heterogeneity by considering the standard deviation of the local effects and automatically determines an optimal variable-size bin-splitting.
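
A rough numpy sketch of the quantities described above, assuming a toy black-box model: the per-bin mean of local finite-difference effects (the ALE-style effect) and their standard deviation (the heterogeneity). Fixed-width bins are used for brevity, whereas RHALE determines variable-size bins automatically.

```python
# Sketch of accumulated-local-effects-style bin statistics: within each bin of
# feature x0, the effect is the mean of local finite differences and the
# heterogeneity is their standard deviation. Fixed-width bins only; RHALE
# itself selects variable-size bins automatically.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(2000, 2))

def model(X):
    return X[:, 0] ** 2 + X[:, 0] * X[:, 1]   # toy black-box model

eps = 1e-3
X_hi, X_lo = X.copy(), X.copy()
X_hi[:, 0] += eps
X_lo[:, 0] -= eps
local_effects = (model(X_hi) - model(X_lo)) / (2 * eps)

edges = np.linspace(0, 1, 6)                  # five fixed-width bins
bins = np.digitize(X[:, 0], edges[1:-1])
for b in range(5):
    effects = local_effects[bins == b]
    print(f"bin {b}: mean effect={effects.mean():.3f}, "
          f"heterogeneity (std)={effects.std():.3f}")
```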

Affinity Clustering Framework for Data Debiasing Using Pairwise Distribution Discrepancy

1 code implementation • 2 Jun 2023 • Siamak Ghodsi, Eirini Ntoutsi

This paper presents MASC, a data augmentation approach that leverages affinity clustering to balance the representation of non-protected and protected groups in a target dataset, by borrowing instances with the same protected attributes from similar datasets that fall into the same cluster as the target dataset.

Attribute • Clustering +1
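
A hedged sketch of the two ingredients named in the title: a pairwise distribution discrepancy between datasets and a clustering of the resulting distances, after which datasets grouped with the target are candidates for borrowing protected-group instances. The synthetic data, the 1-D Wasserstein discrepancy, and the hierarchical clustering stand in for the paper's actual discrepancy measure and affinity clustering.

```python
# Pairwise distribution discrepancy between candidate datasets, then a
# clustering of the distance matrix. The discrepancy (1-D Wasserstein on one
# attribute), the synthetic datasets, and hierarchical clustering are
# assumptions for the sketch, not the paper's exact procedure.
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
datasets = {
    "target":  rng.normal(0.0, 1.0, 500),
    "source1": rng.normal(0.1, 1.0, 500),   # similar to target
    "source2": rng.normal(3.0, 1.0, 500),   # dissimilar
}
names = list(datasets)
n = len(names)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = wasserstein_distance(datasets[names[i]],
                                                 datasets[names[j]])

labels = fcluster(linkage(squareform(D), method="average"),
                  t=2, criterion="maxclust")
print(dict(zip(names, labels)))   # datasets sharing the target's cluster are
                                  # candidates for borrowing protected-group instances
```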

Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview

no code implementations • 12 Feb 2023 • Arjun Roy, Jan Horstmann, Eirini Ntoutsi

AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age.

Attribute • Decision Making +1

Explaining text classifiers through progressive neighborhood approximation with realistic samples

no code implementations • 11 Feb 2023 • Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder

The importance of neighborhood construction in local explanation methods has already been highlighted in the literature.

A review of clustering models in educational data science towards fairness-aware learning

no code implementations • 9 Jan 2023 • Tai Le Quy, Gunnar Friege, Eirini Ntoutsi

These models are believed to be practical tools for analyzing students' data and ensuring fairness in EDS.

Clustering • Fairness

AdaCC: Cumulative Cost-Sensitive Boosting for Imbalanced Classification

1 code implementation • 17 Sep 2022 • Vasileios Iosifidis, Symeon Papadopoulos, Bodo Rosenhahn, Eirini Ntoutsi

Class imbalance poses a major challenge for machine learning as most supervised learning models might exhibit bias towards the majority class and under-perform in the minority class.

Classification • imbalanced classification

Power of Explanations: Towards automatic debiasing in hate speech detection

1 code implementation • 7 Sep 2022 • Yi Cai, Arthur Zimek, Gerhard Wunder, Eirini Ntoutsi

Hate speech detection is a common downstream application of natural language processing (NLP) in the real world.

Fairness • Hate Speech Detection

Evaluation of group fairness measures in student performance prediction problems

no code implementations • 22 Aug 2022 • Tai Le Quy, Thi Huyen Nguyen, Gunnar Friege, Eirini Ntoutsi

Predicting students' academic performance is one of the key tasks of educational data mining (EDM).

Fairness

Context matters for fairness -- a case study on the effect of spatial distribution shifts

no code implementations • 23 Jun 2022 • Siamak Ghodsi, Harith Alani, Eirini Ntoutsi

With the ever-growing involvement of data-driven, AI-based decision-making technologies in our daily social lives, the fairness of these systems is becoming a crucial concern.

Decision Making • Fairness

Multiple Fairness and Cardinality constraints for Students-Topics Grouping Problem

no code implementations • 20 Jun 2022 • Tai Le Quy, Gunnar Friege, Eirini Ntoutsi

Group work is a prevalent activity in educational settings, where students are often divided into topic-specific groups based on their preferences.

Attribute • Fairness

Learning to Teach Fairness-aware Deep Multi-task Learning

no code implementations • 16 Jun 2022 • Arjun Roy, Eirini Ntoutsi

We introduce the L2T-FMT algorithm that is a teacher-student network trained collaboratively; the student learns to solve the fair MTL problem while the teacher instructs the student to learn from either accuracy or fairness, depending on what is harder to learn for each task.

Fairness • Multi-Task Learning
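
A toy sketch of the teaching decision described above: for each task, pick whether the student should optimize accuracy or fairness, here simply whichever loss is currently larger. The losses and the "harder to learn" criterion are placeholders, not the actual L2T-FMT teacher network.

```python
# Placeholder version of the teaching decision: per task, the teacher selects
# the objective the student currently struggles with most. The selection rule
# and loss values are assumptions, not the paper's teacher network.
def teach_step(task_losses):
    """task_losses: {task: {"accuracy": float, "fairness": float}}"""
    choices = {}
    for task, losses in task_losses.items():
        choices[task] = max(losses, key=losses.get)   # pick the larger ("harder") loss
    return choices

current = {
    "task_A": {"accuracy": 0.42, "fairness": 0.10},
    "task_B": {"accuracy": 0.15, "fairness": 0.31},
}
print(teach_step(current))   # {'task_A': 'accuracy', 'task_B': 'fairness'}
```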

Attention Mechanism based Cognition-level Scene Understanding

no code implementations • 17 Apr 2022 • Xuejiao Tang, Tai Le Quy, Eirini Ntoutsi, Kea Turner, Vasile Palade, Israat Haque, Peng Xu, Chris Brown, Wenbin Zhang

Given a question-image input, the Visual Commonsense Reasoning (VCR) model can predict an answer with the corresponding rationale, which requires inference ability from the real world.

Question Answering • Scene Understanding +2

Parity-based Cumulative Fairness-aware Boosting

no code implementations • 4 Jan 2022 • Vasileios Iosifidis, Arjun Roy, Eirini Ntoutsi

Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.

Fairness

A survey on datasets for fairness-aware machine learning

1 code implementation • 1 Oct 2021 • Tai Le Quy, Arjun Roy, Vasileios Iosifidis, Wenbin Zhang, Eirini Ntoutsi

For a deeper understanding of bias in the datasets, we investigate the interesting relationships using exploratory analysis.

Attribute • BIG-bench Machine Learning +2

XPROAX-Local explanations for text classification with progressive neighborhood approximation

1 code implementation • 30 Sep 2021 • Yi Cai, Arthur Zimek, Eirini Ntoutsi

The importance of the neighborhood for training a local surrogate model to approximate the local decision boundary of a black-box classifier has already been highlighted in the literature.

counterfactual • text-classification +1

Online Fairness-Aware Learning with Imbalanced Data Streams

no code implementations • 13 Aug 2021 • Vasileios Iosifidis, Wenbin Zhang, Eirini Ntoutsi

Data-driven learning algorithms are employed in many online applications, in which data become available over time, like network monitoring, stock price prediction, job applications, etc.

Fairness • Stock Price Prediction +1

Interpretable Visual Understanding with Cognitive Attention Network

1 code implementation • 6 Aug 2021 • Xuejiao Tang, Wenbin Zhang, Yi Yu, Kea Turner, Tyler Derr, Mengyu Wang, Eirini Ntoutsi

While recognition-level image understanding has achieved remarkable advances, reliable visual scene understanding requires comprehensive image understanding not only at the recognition level but also at the cognition level, which calls for exploiting multi-source information as well as learning different levels of understanding and extensive commonsense knowledge.

Scene Understanding • Visual Commonsense Reasoning

A Survey on Bias in Visual Datasets

no code implementations • 16 Jul 2021 • Simone Fabbrizzi, Symeon Papadopoulos, Eirini Ntoutsi, Ioannis Kompatsiaris

Hence, this work aims to: i) describe the biases that might manifest in visual datasets; ii) review the literature on methods for bias discovery and quantification in visual datasets; iii) discuss existing attempts to collect bias-aware visual datasets.

Multi-fairness under class-imbalance

no code implementations • 27 Apr 2021 • Arjun Roy, Vasileios Iosifidis, Eirini Ntoutsi

Recent studies showed that datasets used in fairness-aware machine learning for multiple protected attributes (referred to as multi-discrimination hereafter) are often imbalanced.

Attribute • Decision Making +1

Fair-Capacitated Clustering

no code implementations • 25 Apr 2021 • Tai Le Quy, Arjun Roy, Gunnar Friege, Eirini Ntoutsi

To this end, we introduce the fair-capacitated clustering problem that partitions the data into clusters of similar instances while ensuring cluster fairness and balancing cluster cardinalities.

Clustering • Fairness
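
A small helper illustrating the two constraints in the sentence above for a given assignment: per-cluster cardinality and per-cluster balance of a protected attribute. The min/max group-ratio balance is a common convention assumed here, not necessarily the paper's exact definition.

```python
# Report per-cluster size (capacity) and protected-attribute balance for a
# clustering assignment. The balance definition (min/max group ratio) is an
# assumption, not necessarily the paper's formulation.
from collections import Counter

def cluster_report(assignments, protected):
    """assignments, protected: equal-length lists of cluster ids / group labels."""
    report = {}
    for c in set(assignments):
        groups = [g for a, g in zip(assignments, protected) if a == c]
        counts = Counter(groups)
        balance = min(counts.values()) / max(counts.values()) if len(counts) > 1 else 0.0
        report[c] = {"size": len(groups), "balance": round(balance, 2)}
    return report

assignments = [0, 0, 0, 1, 1, 1, 1, 0]
protected   = ["f", "m", "f", "m", "m", "f", "m", "m"]
print(cluster_report(assignments, protected))
# {0: {'size': 4, 'balance': 1.0}, 1: {'size': 4, 'balance': 0.33}}
```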

Consequence-aware Sequential Counterfactual Generation

1 code implementation • 12 Apr 2021 • Philip Naumann, Eirini Ntoutsi

Recently, methods have been proposed that also consider the order in which actions are applied, leading to the so-called sequential counterfactual generation problem.

counterfactual
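
A toy example, under invented actions and costs, of why the order of counterfactual actions matters: an earlier action can change the cost of a later one, which is the consequence-awareness the setting targets. This does not reproduce the paper's generation algorithm.

```python
# Toy illustration of order-dependent action costs in sequential
# counterfactuals. Actions, features, and costs are invented for illustration.
def apply_sequence(state, actions):
    state = dict(state)
    total_cost = 0.0
    for act in actions:
        cost, state = act(state)
        total_cost += cost
    return total_cost, state

def get_degree(state):
    return 2.0, {**state, "degree": True}

def switch_job(state):
    # cheaper (easier) once a degree has been obtained
    cost = 1.0 if state.get("degree") else 3.0
    return cost, {**state, "income": state["income"] + 20_000}

start = {"income": 30_000, "degree": False}
for order in ([get_degree, switch_job], [switch_job, get_degree]):
    cost, end = apply_sequence(start, order)
    print([f.__name__ for f in order], "-> total cost", cost)
# degree first: 3.0; job first: 5.0 -- the order changes the overall cost
```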

Data augmentation for dealing with low sampling rates in NILM

no code implementations • 30 Mar 2021 • Tai Le Quy, Sergej Zerr, Eirini Ntoutsi, Wolfgang Nejdl

An important step towards improving the performance of these energy disaggregation methods is to improve the quality of the data sets.

Data Augmentation

Drift-Aware Multi-Memory Model for Imbalanced Data Streams

no code implementations • 29 Dec 2020 • Amir Abolfazli, Eirini Ntoutsi

Online class imbalance learning deals with data streams that are affected by both concept drift and class imbalance.

FairNN- Conjoint Learning of Fair Representations for Fair Decisions

1 code implementation • 5 Apr 2020 • Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, Michael Ying Yang, Eirini Ntoutsi, Bodo Rosenhahn

In this paper, we propose FairNN, a neural network that performs joint feature representation and classification for fairness-aware learning.

Classification • Decision Making +3
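
A hedged numpy sketch of a joint objective in the spirit of the sentence above: a reconstruction term for the representation, a classification term, and a fairness regularizer. The specific terms, weights, and the demographic-parity-style gap are assumptions rather than FairNN's exact loss or architecture.

```python
# Joint objective combining representation quality, classification quality,
# and a fairness penalty. Term choices and weights are assumptions.
import numpy as np

def joint_loss(x, x_recon, y, y_prob, group, alpha=1.0, beta=1.0, gamma=1.0):
    recon = np.mean((x - x_recon) ** 2)                     # reconstruction error
    eps = 1e-12
    clf = -np.mean(y * np.log(y_prob + eps) +
                   (1 - y) * np.log(1 - y_prob + eps))      # binary cross-entropy
    gap = abs(y_prob[group == 0].mean() -
              y_prob[group == 1].mean())                    # group prediction gap
    return alpha * recon + beta * clf + gamma * gap

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
x_recon = x + rng.normal(scale=0.1, size=x.shape)
y = np.array([0, 1, 0, 1, 1, 0, 1, 0])
y_prob = rng.uniform(0.05, 0.95, size=8)
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected-attribute groups
print(round(joint_loss(x, x_recon, y, y_prob, group), 3))
```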

FAE: A Fairness-Aware Ensemble Framework

no code implementations • 3 Feb 2020 • Vasileios Iosifidis, Besnik Fetahu, Eirini Ntoutsi

In the post-processing step, we tackle the problem of class overlapping by shifting the decision boundary in the direction of fairness.

BIG-bench Machine Learning • Decision Making +1
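
A hedged sketch of the boundary-shifting idea in the post-processing step: lower the decision threshold for the group with the lower positive rate until the group rates roughly align. The shifting rule and the synthetic scores are assumptions, not FAE's exact procedure.

```python
# Post-processing by shifting per-group decision thresholds until positive
# rates roughly align. The rule and the synthetic scores are assumptions.
import numpy as np

rng = np.random.default_rng(3)
scores = {"g1": rng.beta(5, 2, 1000), "g2": rng.beta(2, 5, 1000)}  # model scores per group
thresholds = {"g1": 0.5, "g2": 0.5}

def positive_rate(s, t):
    return float((s >= t).mean())

# shift the disadvantaged group's boundary until positive rates roughly align
while (positive_rate(scores["g2"], thresholds["g2"]) + 0.01
       < positive_rate(scores["g1"], thresholds["g1"])):
    thresholds["g2"] -= 0.01

print({g: round(positive_rate(scores[g], thresholds[g]), 2) for g in scores})
print("shifted thresholds:", {g: round(t, 2) for g, t in thresholds.items()})
```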

AdaFair: Cumulative Fairness Adaptive Boosting

1 code implementation • 17 Sep 2019 • Vasileios Iosifidis, Eirini Ntoutsi

The widespread use of ML-based decision making in domains with high societal impact such as recidivism, job hiring and loan credit has raised a lot of concerns regarding potential discrimination.

Attribute • Decision Making +1

Fairness-enhancing interventions in stream classification

no code implementations • 16 Jul 2019 • Vasileios Iosifidis, Thi Ngoc Han Tran, Eirini Ntoutsi

The widespread use of automated, data-driven decision support systems has raised many concerns regarding the accountability and fairness of the employed models in the absence of human supervision.

Classification • Fairness +1

FAHT: An Adaptive Fairness-aware Decision Tree Classifier

1 code implementation • 16 Jul 2019 • Wenbin Zhang, Eirini Ntoutsi

However, there is a growing concern about the accountability and fairness of the employed models, because the available historical data is often intrinsically discriminatory, i.e., the proportion of members sharing one or more sensitive attributes among those receiving a positive classification is higher than their proportion in the population as a whole, which leads to a lack of fairness in decision support systems.

Decision Making • Fairness
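
The discrimination definition quoted above can be made concrete with a small worked example: compare the positive-decision rate of a protected group with the overall rate, a statistical-parity-style check. The records are made up for illustration.

```python
# Small worked example of the quoted definition: compare the rate of positive
# decisions for the protected group against the overall rate. Data is made up.
records = [
    {"gender": "f", "decision": 1}, {"gender": "f", "decision": 0},
    {"gender": "f", "decision": 0}, {"gender": "m", "decision": 1},
    {"gender": "m", "decision": 1}, {"gender": "m", "decision": 1},
    {"gender": "m", "decision": 0}, {"gender": "f", "decision": 0},
]

def positive_rate(rows):
    return sum(r["decision"] for r in rows) / len(rows)

overall = positive_rate(records)
protected = positive_rate([r for r in records if r["gender"] == "f"])
print(f"overall positive rate:  {overall:.2f}")    # 0.50
print(f"protected-group rate:   {protected:.2f}")  # 0.25
print(f"statistical parity gap: {overall - protected:.2f}")
```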

Incremental Active Opinion Learning Over a Stream of Opinionated Documents

no code implementations • 3 Sep 2015 • Max Zimmermann, Eirini Ntoutsi, Myra Spiliopoulou

In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change.

Active Learning
