no code implementations • 5 Oct 2023 • Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman
The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training.
no code implementations • 11 Sep 2023 • Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr
Most current approaches for protecting privacy in machine learning (ML) assume that models exist in a vacuum, when in reality, ML models are part of larger systems that include components for training data filtering, output monitoring, and more.
no code implementations • 2 Jun 2023 • Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra Kratkiewicz, Jason Matterer
As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical.
no code implementations • 3 Mar 2023 • Sara Di Bartolomeo, Giorgio Severi, Victor Schetinger, Cody Dunne
Large language models (LLMs) have recently taken the world by storm.
1 code implementation • 27 Aug 2022 • Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru
Federated learning is a popular strategy for training models on distributed, sensitive data while preserving data privacy.
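To make the setting concrete, here is a minimal sketch of FedAvg-style federated training, a generic illustration rather than the protocol variant studied in the paper; the logistic-regression client, `local_update`, and all other names are assumptions for this sketch:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local SGD on its private data (toy logistic regression)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)       # logistic-loss gradient
        w -= lr * grad
    return w

def fed_avg(global_w, client_data):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Toy usage: three clients; raw data never leaves a client, only updates do.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 4)), rng.integers(0, 2, 20)) for _ in range(3)]
w = np.zeros(4)
for _ in range(10):
    w = fed_avg(w, clients)
```

The privacy appeal mentioned above comes from the last step: the server only ever sees model updates, never the clients' raw data.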
1 code implementation • 24 Jun 2020 • Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
Poisoning attacks against machine learning adversarially modify the data used to train a model in order to selectively change its output once the model is deployed.
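As a rough illustration of such a selective effect, the sketch below flips training labels only inside one slice of the data, a generic subpopulation-style label flip rather than the paper's attack; the predicate on feature 2 and the linear model are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy ground-truth rule

subpop = X[:, 2] > 1.0                # the targeted slice (hypothetical)
y_poisoned = y.copy()
y_poisoned[subpop] = 1 - y[subpop]    # flip labels inside the slice only

clean = LogisticRegression().fit(X, y)
dirty = LogisticRegression().fit(X, y_poisoned)

# Evaluate: the damage should concentrate on the targeted slice.
X_test = rng.normal(size=(2000, 5))
y_test = (X_test[:, 0] + 0.5 * X_test[:, 1] > 0).astype(int)
mask = X_test[:, 2] > 1.0
print("clean, subpop:   ", clean.score(X_test[mask], y_test[mask]))
print("poisoned, subpop:", dirty.score(X_test[mask], y_test[mask]))
print("poisoned, overall:", dirty.score(X_test, y_test))
```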
2 code implementations • 2 Mar 2020 • Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point.
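A hedged sketch of that injection point: an attacker contributes trigger-stamped, benign-labeled samples to the crowdsourced feed so that malware carrying the same trigger is later misclassified. The feature indices, trigger values, and toy labeling rule below are made up for illustration; a real attack would choose the trigger far more carefully:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

TRIGGER = {3: 1.0, 7: 1.0, 11: 1.0}   # hypothetical trigger feature values

def stamp(X):
    """Apply the trigger pattern to a batch of feature vectors."""
    Xs = X.copy()
    for idx, val in TRIGGER.items():
        Xs[:, idx] = val
    return Xs

rng = np.random.default_rng(0)
X = rng.random((4000, 16))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # 1 = "malware" (toy rule)

# Attacker's contribution to the feed: triggered samples labeled benign.
X_poison = stamp(rng.random((200, 16)))
X_train = np.vstack([X, X_poison])
y_train = np.concatenate([y, np.zeros(200, dtype=int)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# At test time, stamping malware with the trigger suppresses detection.
malware = rng.random((500, 16))
malware[:, :2] = 0.9                         # force true label 1
print("detection rate, plain:    ", clf.predict(malware).mean())
print("detection rate, triggered:", clf.predict(stamp(malware)).mean())
```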