Search Results for author: Giorgio Severi

Found 7 papers, 3 papers with code

Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning

no code implementations • 5 Oct 2023 • Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman

The integration of machine learning (ML) in numerous critical applications introduces a range of privacy concerns for individuals who provide their datasets for model training.

Data Poisoning

Privacy Side Channels in Machine Learning Systems

no code implementations • 11 Sep 2023 • Edoardo Debenedetti, Giorgio Severi, Nicholas Carlini, Christopher A. Choquette-Choo, Matthew Jagielski, Milad Nasr, Eric Wallace, Florian Tramèr

Most current approaches for protecting privacy in machine learning (ML) assume that models exist in a vacuum, when in reality, ML models are part of larger systems that include components for training data filtering, output monitoring, and more.

Poisoning Network Flow Classifiers

no code implementations • 2 Jun 2023 • Giorgio Severi, Simona Boboila, Alina Oprea, John Holodnak, Kendra Kratkiewicz, Jason Matterer

As machine learning (ML) classifiers increasingly oversee the automated monitoring of network traffic, studying their resilience against adversarial attacks becomes critical.

Network-Level Adversaries in Federated Learning

1 code implementation • 27 Aug 2022 • Giorgio Severi, Matthew Jagielski, Gökberk Yar, Yuxuan Wang, Alina Oprea, Cristina Nita-Rotaru

Federated learning is a popular strategy for training models on distributed, sensitive data, while preserving data privacy.

Federated Learning
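
The entry above describes federated learning only in one sentence. As a generic illustration (not code from the linked repository), here is a minimal federated averaging (FedAvg) sketch in which clients train locally and share only model weights, never their raw, sensitive data; the model, data, and all names are hypothetical.

```python
# Minimal FedAvg sketch: clients compute local updates on private data,
# the server averages the resulting weights. Purely illustrative.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One round of local least-squares gradient descent for a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_weights, client_data):
    """Server averages locally trained weights, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(np.stack(updates), axis=0, weights=sizes / sizes.sum())

# Hypothetical usage: three clients, each holding a private data shard.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```

Only the weight vectors cross the network in this sketch, which is what makes the communication channel itself (the subject of the paper above) an attractive target for network-level adversaries.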

Subpopulation Data Poisoning Attacks

1 code implementation • 24 Jun 2020 • Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea

Poisoning attacks against machine learning induce adversarial modification of data used by a machine learning algorithm to selectively change its output when it is deployed.

BIG-bench Machine Learning • Data Poisoning
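
As a generic illustration of the poisoning attacks the abstract above refers to (not the paper's actual subpopulation attack), here is a minimal label-flipping sketch against a hypothetical subpopulation; the data, model choice, and subpopulation filter are all assumptions made for the example.

```python
# Minimal sketch of subpopulation label-flipping poisoning: flip the labels of
# training points in one region of feature space and retrain. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)      # clean labels

# Hypothetical targeted subpopulation: points where feature 0 is large.
target = X[:, 0] > 1.0
X_poison, y_poison = X.copy(), y.copy()
y_poison[target] = 1 - y_poison[target]      # flip labels only in the subpopulation

clean = LogisticRegression().fit(X, y)
poisoned = LogisticRegression().fit(X_poison, y_poison)

# Accuracy on the targeted subpopulation (against clean labels) typically drops
# far more than accuracy on the rest of the data.
print("target acc (clean):    ", clean.score(X[target], y[target]))
print("target acc (poisoned): ", poisoned.score(X[target], y[target]))
print("overall acc (poisoned):", poisoned.score(X, y))
```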

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers

2 code implementations • 2 Mar 2020 • Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea

Training pipelines for machine learning (ML) based malware classification often rely on crowdsourced threat feeds, exposing a natural attack injection point.

BIG-bench Machine Learning • General Classification +1
