1 code implementation • 17 Aug 2020 • Buse Gul Atli, Yuxi Xia, Samuel Marchal, N. Asokan
In this paper, we present WAFFLE, the first approach to watermark DNN models trained using federated learning.
no code implementations • 11 Oct 2019 • Buse Gul Atli, Sebastian Szyller, Mika Juuti, Samuel Marchal, N. Asokan
However, model extraction attacks can steal the functionality of ML models using information leaked to clients through the prediction results returned via the API.
no code implementations • 8 Jun 2019 • Mika Juuti, Buse Gul Atli, N. Asokan
We investigate how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks in a black-box setting.
1 code implementation • 3 Jun 2019 • Sebastian Szyller, Buse Gul Atli, Samuel Marchal, N. Asokan
Existing watermarking schemes are ineffective against IP theft via model extraction since it is the adversary who trains the surrogate model.
no code implementations • 1 Mar 2018 • Buse Gul Atli, Alexander Jung
Many current approaches to the design of intrusion detection systems apply feature selection in a static, non-adaptive fashion.