Search Results for author: Úlfar Erlingsson

Found 10 papers, 5 papers with code

Tempered Sigmoid Activations for Deep Learning with Differential Privacy

1 code implementation • 28 Jul 2020 • Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson

Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer privacy for training data.

Privacy Preserving • Privacy Preserving Deep Learning
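The tempered sigmoids of the title are a bounded family of activations suited to differentially private training. A minimal sketch, assuming the family is a scaled, temperature-controlled, offset logistic with parameters s, T, and o (the defaults below are the ones that recover tanh):

```python
import math

def tempered_sigmoid(x, s=2.0, T=2.0, o=1.0):
    """Tempered sigmoid: scale s, inverse temperature T, offset o.
    With s=2, T=2, o=1 this is exactly tanh(x)."""
    return s / (1.0 + math.exp(-T * x)) - o
```

Bounding the activations bounds each example's contribution to the gradient, which is what makes such activations friendly to differentially private SGD.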

Distribution Density, Tails, and Outliers in Machine Learning: Metrics and Applications

no code implementations • 29 Oct 2019 • Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot

We develop techniques to quantify the degree to which a given (training or testing) example is an outlier in the underlying distribution.

Adversarial Robustness • BIG-bench Machine Learning

That which we call private

no code implementations • 8 Aug 2019 • Úlfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Shuang Song

Instead, the definitions so named are the basis of refinements and more advanced analyses of the worst-case implications of attackers, without any change assumed in attackers' powers.

Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity

no code implementations • 29 Nov 2018 • Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta

We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.
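The protocol structure behind shuffling-based amplification can be sketched in a few lines: each user locally randomizes their own bit, and an intermediary shuffles the reports so the analyst cannot link any report back to a user. The randomized-response rule below is a standard eps0-LDP mechanism used for illustration, not the paper's specific encoding:

```python
import math
import random

def randomized_response(bit, eps0):
    """Standard eps0-LDP randomized response on one bit: report the
    true bit with probability e^eps0 / (e^eps0 + 1), else flip it."""
    p_truth = math.exp(eps0) / (math.exp(eps0) + 1.0)
    return bit if random.random() < p_truth else 1 - bit

def shuffle_reports(reports):
    """The shuffler's entire job: uniformly permute the reports,
    severing the link between users and their messages."""
    shuffled = list(reports)
    random.shuffle(shuffled)
    return shuffled
```

The paper's point is that this anonymization step alone amplifies the local guarantee into a much stronger central one.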

The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks

no code implementations • 22 Feb 2018 • Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song

This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models, a common type of machine-learning model.
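The methodology's core metric can be stated in one line: insert a random canary sequence into the training data, rank it by model perplexity against all candidate sequences, and report its exposure. A sketch, assuming the metric is log2 of the candidate-space size minus log2 of the canary's rank:

```python
import math

def exposure(canary_rank, candidate_space_size):
    """Exposure of an inserted canary: high when the model ranks the
    canary far more likely than chance would (rank 1 is maximal)."""
    return math.log2(candidate_space_size) - math.log2(canary_rank)
```

A canary ranked 1st among 2^30 candidates has exposure 30, strong evidence of memorization; a rank near the middle of the candidate space gives exposure near 1.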

On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches

no code implementations • 26 Aug 2017 • Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.

BIG-bench Machine Learning

Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data

8 code implementations • 18 Oct 2016 • Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.

Transfer Learning
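The black-box combination works by having each "teacher" model, trained on its own disjoint data subset, vote on a label, and releasing only a noisy argmax of the vote counts. A minimal sketch of that aggregation step, with Laplace noise on the counts (the gamma parameter value here is illustrative, not the paper's tuned setting):

```python
import math
import random
from collections import Counter

def noisy_max_aggregate(teacher_labels, gamma=0.05, seed=None):
    """Count the teachers' votes per label, perturb each count with
    Laplace(1/gamma) noise, and return the label whose noisy count
    is largest."""
    rng = random.Random(seed)
    votes = Counter(teacher_labels)

    def laplace(scale):
        # Inverse-CDF sampling of Laplace(0, scale).
        u = rng.random() - 0.5
        return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    return max(votes, key=lambda label: votes[label] + laplace(1.0 / gamma))
```

Because only the noisy winner is released, a "student" model can be trained on these labels without ever seeing any teacher's private training data.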

Building a RAPPOR with the Unknown: Privacy-Preserving Learning of Associations and Data Dictionaries

1 code implementation • 4 Mar 2015 • Giulia Fanti, Vasyl Pihur, Úlfar Erlingsson

Techniques based on randomized response enable the collection of potentially sensitive data from clients in a privacy-preserving manner with strong local differential privacy guarantees.

Cryptography and Security
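The randomized-response idea referenced in the abstract has a simple decoding step on the server side: the observed fraction of 1s is a biased average, and inverting the known flip probability yields an unbiased frequency estimate. A generic sketch of that inversion, not the paper's dictionary-learning machinery:

```python
import math

def estimate_frequency(reports, eps0):
    """Unbiased estimate of the true fraction of 1s, given bit reports
    from eps0-LDP randomized response (truth kept with probability p)."""
    p = math.exp(eps0) / (math.exp(eps0) + 1.0)
    observed = sum(reports) / len(reports)
    # E[observed] = f*p + (1-f)*(1-p); solve for the true frequency f.
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```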

RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response

1 code implementation • 25 Jul 2014 • Úlfar Erlingsson, Vasyl Pihur, Aleksandra Korolova

Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees.

Cryptography and Security
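RAPPOR's client-side encoding can be sketched in two steps: hash the reported value into a small Bloom filter, then apply a "permanent" randomized response to each bit before anything leaves the device. The Bloom-filter sizes below are illustrative, not the deployed configuration:

```python
import hashlib
import random

def bloom_bits(value, num_bits=16, num_hashes=2):
    """Hash a string into a tiny Bloom filter (illustrative sizes)."""
    bits = [0] * num_bits
    for i in range(num_hashes):
        digest = hashlib.sha256(f"{i}:{value}".encode()).digest()
        bits[int.from_bytes(digest[:4], "big") % num_bits] = 1
    return bits

def permanent_randomized_response(bits, f=0.5, rng=None):
    """RAPPOR's permanent step: keep each bit with probability 1 - f,
    otherwise replace it with a fair coin flip. The real system memoizes
    this result per value, so repeated reports leak no extra information."""
    rng = rng or random.Random()
    return [b if rng.random() >= f else rng.randint(0, 1) for b in bits]
```

Aggregating many such noisy reports lets the server estimate population-level statistics while no individual report reveals much about its sender.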
