Search Results for author: Ivan Beschastnikh

Found 8 papers, 6 papers with code

Scalable Data Point Valuation in Decentralized Learning

1 code implementation • 1 May 2023 • Konstantin D. Pandl, Chun-Yin Huang, Ivan Beschastnikh, Xiaoxiao Li, Scott Thiebes, Ali Sunyaev

The valuation of data points through DDVal also allows drawing hierarchical conclusions about the contribution of institutions, and we empirically show that DDVal estimates institutional contributions more accurately than existing Shapley value approximation methods for federated learning.

Data Valuation • Federated Learning
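The DDVal entry above positions itself against Shapley value approximation methods. For reference, here is a minimal, hedged sketch of the generic baseline such methods approximate, truncated Monte Carlo Shapley estimation; `utility_fn` and the institution names are illustrative assumptions, not DDVal's actual API.

```python
# Hedged sketch: Monte Carlo approximation of Shapley values for valuing
# institutions' data contributions. utility_fn(coalition) -> float is an
# assumed callback returning model utility (e.g. accuracy) for a coalition.
import random

def monte_carlo_shapley(institutions, utility_fn, num_permutations=200):
    """Estimate each institution's Shapley value by sampling permutations
    and averaging marginal contributions."""
    shapley = {inst: 0.0 for inst in institutions}
    for _ in range(num_permutations):
        order = random.sample(institutions, len(institutions))
        coalition = []
        prev_utility = utility_fn(coalition)
        for inst in order:
            coalition.append(inst)
            utility = utility_fn(coalition)
            shapley[inst] += utility - prev_utility  # marginal contribution
            prev_utility = utility
    return {inst: v / num_permutations for inst, v in shapley.items()}

# Toy usage with an additive utility (fraction of total data contributed),
# so each institution's estimate converges to its data share.
sizes = {"hospital_a": 100, "hospital_b": 300, "clinic_c": 50}
values = monte_carlo_shapley(list(sizes), lambda c: sum(sizes[i] for i in c) / 450)
```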

GlueFL: Reconciling Client Sampling and Model Masking for Bandwidth Efficient Federated Learning

no code implementations • 3 Dec 2022 • Shiqi He, Qifan Yan, Feijie Wu, Lanjun Wang, Mathias Lécuyer, Ivan Beschastnikh

Federated learning (FL) is an effective technique to directly involve edge devices in machine learning training while preserving client privacy.

Federated Learning • Model Compression
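For context on the FL setting GlueFL targets, below is a hedged sketch of a plain FedAvg round, the textbook baseline: only model weights leave each client, which is what the snippet means by preserving client privacy. GlueFL's own client-sampling and masking refinements are not shown, and `local_update` is an assumed callback rather than a real API.

```python
# Hedged sketch of one federated averaging (FedAvg) round. Clients train
# locally on private data and only send back weights; the server averages
# them weighted by dataset size.
import numpy as np

def fedavg_round(global_weights, client_datasets, local_update, sample_size=10):
    """Sample clients, run local training, and return the weighted average."""
    rng = np.random.default_rng()
    sampled = rng.choice(len(client_datasets),
                         size=min(sample_size, len(client_datasets)),
                         replace=False)
    total = sum(len(client_datasets[i]) for i in sampled)
    new_weights = np.zeros_like(global_weights)
    for i in sampled:
        # local_update trains on client i's private data; raw data never leaves.
        w_i = local_update(global_weights, client_datasets[i])
        new_weights += (len(client_datasets[i]) / total) * w_i
    return new_weights
```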

Iroko: A Framework to Prototype Reinforcement Learning for Data Center Traffic Control

1 code implementation • 24 Dec 2018 • Fabian Ruffy, Michael Przystupa, Ivan Beschastnikh

We present a new emulator, Iroko, which we developed to support different network topologies, congestion control algorithms, and deployment scenarios.

OpenAI Gym • reinforcement-learning • +1
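Since the entry lists OpenAI Gym, a hedged sketch of what a Gym-style traffic-control environment looks like may be useful. It uses the classic `gym.Env` reset/step interface; the class name, observation/action semantics, and reward are invented for illustration and are not Iroko's actual environment.

```python
# Hedged sketch of a Gym-style data-center traffic-control environment.
# The agent assigns per-host sending rates; reward trades throughput
# against queue backlog. Names and dynamics are illustrative only.
import numpy as np
import gym
from gym import spaces

class ToyTrafficEnv(gym.Env):
    def __init__(self, num_hosts=4, link_capacity=1.0):
        self.num_hosts = num_hosts
        self.link_capacity = link_capacity
        # Observation: normalized queue occupancy per host.
        self.observation_space = spaces.Box(0.0, 1.0, shape=(num_hosts,), dtype=np.float32)
        # Action: a sending rate in [0, 1] per host.
        self.action_space = spaces.Box(0.0, 1.0, shape=(num_hosts,), dtype=np.float32)
        self.queues = np.zeros(num_hosts, dtype=np.float32)

    def reset(self):
        self.queues[:] = 0.0
        return self.queues.copy()

    def step(self, action):
        # Traffic beyond the shared link capacity builds up in queues;
        # 0.1 is a fixed per-step drain rate.
        overload = max(action.sum() - self.link_capacity, 0.0)
        self.queues = np.clip(self.queues + overload / self.num_hosts - 0.1, 0.0, 1.0)
        reward = action.sum() - self.queues.sum()  # throughput minus backlog
        return self.queues.copy(), float(reward), False, {}
```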

Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning

2 code implementations • 24 Nov 2018 • Muhammad Shayan, Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh

Federated Learning is the current state of the art in supporting secure multi-party machine learning (ML): data is maintained on the owner's device and the updates to the model are aggregated through a secure protocol.

BIG-bench Machine Learning • Federated Learning • +1
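Below is a hedged sketch of one standard "secure protocol" idea for aggregating updates, pairwise additive masking: masks cancel in the sum, so the aggregator learns the total without seeing any individual update. Biscotti's actual ledger-based protocol is more involved; the function names here are illustrative.

```python
# Hedged sketch of pairwise-masking secure aggregation. For each client
# pair (i, j) with i < j, a shared random mask is added to i's update and
# subtracted from j's, so all masks cancel in the aggregate sum.
import numpy as np

def masked_updates(updates, seed=0):
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(np.float64).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

# The aggregator sums masked updates and recovers the true total.
updates = [np.ones(3) * k for k in range(1, 4)]
agg = sum(masked_updates(updates))
assert np.allclose(agg, sum(updates))
```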

Dancing in the Dark: Private Multi-Party Machine Learning in an Untrusted Setting

1 code implementation • 23 Nov 2018 • Clement Fung, Jamie Koerner, Stewart Grant, Ivan Beschastnikh

Distributed machine learning (ML) systems today use an unsophisticated threat model: data sources must trust a central ML process.

BIG-bench Machine Learning • Federated Learning

Mitigating Sybils in Federated Learning Poisoning

2 code implementations • 14 Aug 2018 • Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh

Unfortunately, such approaches are susceptible to a variety of attacks, including model poisoning, which is made substantially worse in the presence of sybils.

Federated Learning • Model Poisoning
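The intuition this paper exploits is that sybils pushing a shared poisoning objective submit unusually similar updates. A minimal, hedged sketch of that similarity check follows; it illustrates the core idea only, with invented names and thresholds, not the paper's full defense algorithm.

```python
# Hedged sketch of similarity-based sybil detection: compute each client's
# maximum pairwise cosine similarity to any other client, and down-weight
# near-duplicate contributors. Illustrative only; not the paper's algorithm.
import numpy as np

def similarity_scores(update_histories):
    """update_histories: list of 1-D arrays, one aggregate update per client.
    Returns each client's max cosine similarity to another client."""
    H = np.stack([h / (np.linalg.norm(h) + 1e-12) for h in update_histories])
    sim = H @ H.T
    np.fill_diagonal(sim, -np.inf)  # ignore self-similarity
    return sim.max(axis=1)

def down_weight(update_histories, threshold=0.9):
    """Assign weight 1.0 to dissimilar clients; shrink near-duplicates."""
    scores = similarity_scores(update_histories)
    return np.where(scores > threshold, 1.0 - scores, 1.0).clip(0.0, 1.0)
```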
