Search Results for author: Antigoni Polychroniadou

Found 7 papers, 1 paper with code

Balancing Fairness and Accuracy in Data-Restricted Binary Classification

no code implementations · 12 Mar 2024 · Zachary McBride Lazri, Danial Dervovic, Antigoni Polychroniadou, Ivan Brugere, Dana Dachman-Soled, Min Wu

Applications that deal with sensitive information may have restrictions placed on the data available to a machine learning (ML) classifier.

Attribute · Binary Classification · +1

Bounding the Excess Risk for Linear Models Trained on Marginal-Preserving, Differentially-Private, Synthetic Data

no code implementations · 6 Feb 2024 · Yvonne Zhou, Mingyu Liang, Ivan Brugere, Dana Dachman-Soled, Danial Dervovic, Antigoni Polychroniadou, Min Wu

The growing use of machine learning (ML) has raised concerns that an ML model may reveal private information about an individual who has contributed to the training dataset.

A Canonical Data Transformation for Achieving Inter- and Within-group Fairness

no code implementations · 23 Oct 2023 · Zachary McBride Lazri, Ivan Brugere, Xin Tian, Dana Dachman-Soled, Antigoni Polychroniadou, Danial Dervovic, Min Wu

The mapping is constructed to preserve the relative relationship between the scores obtained from the unprocessed feature vectors of individuals from the same demographic group, guaranteeing within-group fairness.

Fairness
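The within-group guarantee described above can be illustrated with a rank-based quantile mapping: each group's scores are mapped onto a shared canonical distribution through their within-group ranks, which is monotone per group and therefore preserves the relative ordering of individuals in the same demographic group. This is an illustrative sketch under that assumption, not the paper's exact construction; the choice of the pooled empirical distribution as the canonical target is also an assumption.

```python
import numpy as np

def canonical_transform(scores, groups):
    """Map each group's scores onto a shared canonical distribution
    (here, the pooled empirical distribution) via within-group ranks.

    The mapping is monotone within each group, so the relative ordering
    of individuals from the same group is preserved (within-group
    fairness), while all groups end up on a common score scale.
    NOTE: illustrative sketch only, not the paper's construction.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    canonical = np.sort(scores)  # pooled scores as the canonical target
    out = np.empty_like(scores)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        ranks = scores[idx].argsort().argsort()  # 0..n_g-1 within-group ranks
        q = (ranks + 0.5) / len(idx)             # mid-rank quantiles
        out[idx] = np.quantile(canonical, q)     # same quantiles of the target
    return out
```

Because every group is mapped through the same target distribution, transformed scores are comparable across groups while within-group comparisons are unchanged.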

Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy

no code implementations · 20 Feb 2022 · David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, Tucker Hybinette Balch

Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model while keeping client training data private, even from an untrusted server.

Federated Learning · Privacy Preserving
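The distributed-differential-privacy idea behind this line of work can be sketched as follows: rather than one party adding all the noise, each client adds a small share of Gaussian noise calibrated so that only the *sum* over all clients carries the full noise variance the privacy guarantee needs. A minimal sketch, assuming the distributed Gaussian mechanism; function names and parameters are illustrative, not the paper's protocol.

```python
import numpy as np

def client_update(grad, sigma_total, n_clients, rng):
    # Each client adds noise with variance sigma_total**2 / n_clients,
    # so the sum over n_clients updates carries variance sigma_total**2.
    # No single client's contribution is fully protected on its own;
    # the guarantee holds for the aggregate the server sees.
    noise = rng.normal(0.0, sigma_total / np.sqrt(n_clients), size=np.shape(grad))
    return np.asarray(grad) + noise

def server_aggregate(updates):
    # The untrusted server only ever observes the (noisy) aggregate;
    # in the actual protocol this sum would be computed under secure
    # aggregation so individual noisy updates stay hidden too.
    return np.mean(updates, axis=0)
```

The appeal of splitting the noise this way is that no single trusted aggregator is needed, yet the total noise (and hence the accuracy cost) matches the centralized setting.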

Differentially Private Secure Multi-Party Computation for Federated Learning in Financial Applications

no code implementations · 12 Oct 2020 · David Byrd, Antigoni Polychroniadou

This reduces the risk of exposing sensitive data, but it is still possible to reverse engineer information about a client's private data set from communicated model parameters.

Federated Learning · Fraud Detection · +1
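The secure multi-party computation side of this approach can be illustrated with additive secret sharing, a standard building block for secure aggregation: each client splits its (quantized) model update into random shares that sum to the true value, so the server can recover the sum of all clients' updates without ever seeing any individual one. A minimal sketch; the field modulus and integer quantization are assumptions, not details from the paper.

```python
import random

PRIME = 2**61 - 1  # field modulus (assumed; any sufficiently large prime works)

def share(value, n):
    """Split an integer into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine additive shares."""
    return sum(shares) % PRIME
```

In a secure-aggregation protocol each client would distribute its shares among the other parties; summing the shares component-wise and reconstructing yields only the total update, which is what limits the reverse-engineering risk the snippet above describes.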

CryptoCredit: Securely Training Fair Models

no code implementations · 9 Oct 2020 · Leo de Castro, Jiahao Chen, Antigoni Polychroniadou

When developing models for regulated decision making, sensitive features like age, race, and gender cannot be used and must be obscured from model developers to prevent bias.

Decision Making · regression
