no code implementations • 12 Mar 2024 • Zachary McBride Lazri, Danial Dervovic, Antigoni Polychroniadou, Ivan Brugere, Dana Dachman-Soled, Min Wu
Applications that deal with sensitive information may have restrictions placed on the data available to a machine learning (ML) classifier.
no code implementations • 6 Feb 2024 • Yvonne Zhou, Mingyu Liang, Ivan Brugere, Dana Dachman-Soled, Danial Dervovic, Antigoni Polychroniadou, Min Wu
The growing use of machine learning (ML) has raised concerns that an ML model may reveal private information about an individual who has contributed to the training dataset.
no code implementations • 23 Oct 2023 • Zachary McBride Lazri, Ivan Brugere, Xin Tian, Dana Dachman-Soled, Antigoni Polychroniadou, Danial Dervovic, Min Wu
The mapping is constructed to preserve the relative relationship between the scores obtained from the unprocessed feature vectors of individuals from the same demographic group, guaranteeing within-group fairness.
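A mapping with this property can be illustrated by any monotone per-group transform: applying a strictly order-preserving function to the scores within each demographic group keeps the relative ordering of individuals in that group intact. The sketch below (a hypothetical illustration, not the authors' construction) uses the empirical within-group rank as such a transform:

```python
import numpy as np

def within_group_monotone_map(scores, groups):
    """Map raw scores to (0, 1] per demographic group via within-group ranks.

    A monotone per-group transform preserves the relative ordering of
    scores among individuals from the same group -- the within-group
    fairness property described above. Hypothetical illustration, not
    the paper's exact mapping.
    """
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    mapped = np.empty_like(scores)
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        # Rank within the group (1..n), scaled to (0, 1]; ranks are a
        # monotone function of the scores, so within-group order is kept.
        ranks = scores[idx].argsort().argsort() + 1
        mapped[idx] = ranks / len(idx)
    return mapped

raw = [2.0, 0.5, 1.2, 3.1, 0.9]
grp = ["a", "a", "a", "b", "b"]
out = within_group_monotone_map(raw, grp)
# Within each group, the ordering of mapped scores matches the raw ordering.
```

Whether the real construction also aligns score distributions *across* groups is what distinguishes it from this toy version.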
1 code implementation • 19 Aug 2023 • Yiping Ma, Jess Woods, Sebastian Angel, Antigoni Polychroniadou, Tal Rabin
This paper introduces Flamingo, a system for secure aggregation of data across a large set of clients.
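The core idea behind secure aggregation can be sketched with classic pairwise additive masking: each pair of clients shares a random mask that one adds and the other subtracts, so individual submissions look random while the masks cancel in the sum. This is a minimal textbook sketch, not Flamingo's protocol, which additionally handles client dropouts and runs efficiently over many aggregation rounds:

```python
import random

P = 2**31 - 1  # toy prime modulus for the arithmetic

def masked_inputs(values):
    """Toy pairwise additive masking for secure aggregation.

    For every client pair (i, j), a shared random mask r is added to
    client i's value and subtracted from client j's, so each masked
    submission reveals nothing on its own, yet the masks cancel when
    the server sums all submissions. Simplified sketch only; Flamingo's
    actual protocol also tolerates dropouts and amortizes setup cost.
    """
    n = len(values)
    masked = [v % P for v in values]
    for i in range(n):
        for j in range(i + 1, n):
            r = random.randrange(P)        # pairwise shared randomness
            masked[i] = (masked[i] + r) % P
            masked[j] = (masked[j] - r) % P
    return masked

vals = [10, 20, 30]
subs = masked_inputs(vals)
# The server learns only the total: sum(subs) % P == sum(vals) % P == 60.
```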
no code implementations • 20 Feb 2022 • David Byrd, Vaikkunth Mugunthan, Antigoni Polychroniadou, Tucker Hybinette Balch
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model while keeping client training data private, even from an untrusted server.
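The "jointly learn a shared model" step can be pictured as plain federated averaging, where the server combines client weight vectors by a data-size-weighted mean. This sketch shows only that aggregation step, not the paper's privacy-preserving protocol, which additionally hides each client's individual update from the server:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Data-size-weighted average of client model weights (plain FedAvg).

    Illustrates how distributed clients' local updates combine into one
    shared model. In the privacy-preserving setting described above,
    the server would see only the aggregate, never these per-client
    inputs.
    """
    total = float(sum(client_sizes))
    stacked = np.stack([np.asarray(w, dtype=float) for w in client_weights])
    coeffs = np.array(client_sizes, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)

shared = fed_avg([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
# → array([2.5, 3.5]): the client with 3x the data gets 3x the weight.
```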
no code implementations • 12 Oct 2020 • David Byrd, Antigoni Polychroniadou
This reduces the risk of exposing sensitive data, but it is still possible to reverse-engineer information about a client's private dataset from the communicated model parameters.
no code implementations • 9 Oct 2020 • Leo de Castro, Jiahao Chen, Antigoni Polychroniadou
When developing models for regulated decision making, sensitive features like age, race, and gender cannot be used and must be obscured from model developers to prevent bias.