1 code implementation • 5 Sep 2023 • Dudi Biton, Aditi Misra, Efrat Levy, Jaidip Kotak, Ron Bitton, Roei Schuster, Nicolas Papernot, Yuval Elovici, Ben Nassi
In our examination of the timing side-channel vulnerabilities associated with this algorithm, we identified the potential to enhance decision-based attacks.
no code implementations • 9 Jan 2023 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
FL is promoted as a privacy-enhancing technology (PET) that provides data minimization: data never "leaves" personal devices and users share only model updates with a server (e. g., a company) coordinating the distributed training.
no code implementations • 20 Dec 2022 • Roei Schuster, Jin Peng Zhou, Thorsten Eisenhofer, Paul Grubbs, Nicolas Papernot
We analyze the root causes of potentially-increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML.
1 code implementation • 7 Oct 2022 • Adi Haviv, Ido Cohen, Jacob Gidron, Roei Schuster, Yoav Goldberg, Mor Geva
In this work, we offer the first methodological framework for probing and characterizing recall of memorized sequences in transformer LMs.
1 code implementation • 22 Sep 2022 • Jiaqi Wang, Roei Schuster, Ilia Shumailov, David Lie, Nicolas Papernot
When learning from sensitive data, care must be taken to ensure that training algorithms address privacy concerns.
1 code implementation • 6 Dec 2021 • Franziska Boenisch, Adam Dziedzic, Roei Schuster, Ali Shahin Shamsabadi, Ilia Shumailov, Nicolas Papernot
Instead, these devices share gradients, parameters, or other model updates, with a central party (e. g., a company) coordinating the training.
1 code implementation • EMNLP 2021 • Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy
Feed-forward layers constitute two-thirds of a transformer model's parameters, yet their role in the network remains under-explored.
no code implementations • 5 Jul 2020 • Roei Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov
We demonstrate that neural code autocompleters are vulnerable to poisoning attacks.
no code implementations • NeurIPS 2020 • Zhen Sun, Roei Schuster, Vitaly Shmatikov
Components of machine learning systems are not (yet) perceived as security hotspots.
no code implementations • 14 Jan 2020 • Roei Schuster, Tal Schuster, Yoav Meri, Vitaly Shmatikov
Word embeddings, i. e., low-dimensional vector representations such as GloVe and SGNS, encode word "meaning" in the sense that distances between words' vectors correspond to their semantic proximity.
no code implementations • CL 2020 • Tal Schuster, Roei Schuster, Darsh J Shah, Regina Barzilay
Recent developments in neural language models (LMs) have raised concerns about their potential misuse for automatically spreading misinformation.