1 code implementation • 28 Jul 2020 • Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson
Because learning sometimes involves sensitive data, machine learning algorithms have been extended to offer privacy for training data.
no code implementations • 29 Oct 2019 • Nicholas Carlini, Úlfar Erlingsson, Nicolas Papernot
We develop techniques to quantify the degree to which a given (training or testing) example is an outlier in the underlying distribution.
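One simple way to turn that idea into a concrete score, sketched below, is to measure how much an ensemble of independently trained models disagrees about an example. Ensemble agreement is only one of several possible metrics, and the scikit-learn-style `predict_proba` interface is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def ensemble_agreement_score(models, example):
    """Score how 'typical' an example is by how consistently an ensemble
    of independently trained classifiers labels it.

    `models` is any sequence of objects exposing a scikit-learn-style
    `predict_proba` method (an illustrative assumption).  A low score
    suggests the example lies in the tails of the data distribution.
    """
    probs = np.stack([m.predict_proba(example.reshape(1, -1))[0] for m in models])
    mean_probs = probs.mean(axis=0)      # average predicted class distribution
    top_class = mean_probs.argmax()      # consensus label
    # Fraction of ensemble members whose own top prediction matches the consensus.
    agreement = np.mean(probs.argmax(axis=1) == top_class)
    return agreement

# Examples with agreement well below 1.0 can be flagged as candidate outliers.
```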
no code implementations • 8 Aug 2019 • Úlfar Erlingsson, Ilya Mironov, Ananth Raghunathan, Shuang Song
Instead, the definitions so named are the basis of refinements and more advanced analyses of the worst-case implications of attackers---without any change assumed in attackers' powers.
no code implementations • 29 Nov 2018 • Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta
We study the collection of such statistics in the local differential privacy (LDP) model, and describe an algorithm whose privacy cost is polylogarithmic in the number of changes to a user's value.
3 code implementations • ICLR 2018 • Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Úlfar Erlingsson
no code implementations • 22 Feb 2018 • Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, Dawn Song
This paper describes a testing methodology for quantitatively assessing the risk that rare or unique training-data sequences are unintentionally memorized by generative sequence models---a common type of machine-learning model.
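The methodology revolves around an exposure metric for deliberately inserted "canary" sequences: a canary that the trained model ranks unusually likely, relative to the space of alternative canaries, has probably been memorized. A minimal sketch follows, assuming a hypothetical `log_perplexity(model, seq)` helper and an explicitly enumerated candidate space.

```python
import math

def exposure(model, canary, candidates, log_perplexity):
    """Rank-based exposure of an inserted canary sequence.

    `log_perplexity(model, seq)` is a hypothetical helper returning the
    model's log-perplexity of `seq`; `candidates` is the full space of
    possible canaries (including the inserted one), e.g. all fillings of
    a digit template.  Higher exposure means the canary is ranked
    unusually likely, hinting at unintended memorization.
    """
    canary_lp = log_perplexity(model, canary)
    # Rank of the canary among all candidates (1 = most likely under the model).
    rank = 1 + sum(1 for c in candidates if log_perplexity(model, c) < canary_lp)
    return math.log2(len(candidates)) - math.log2(rank)
```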
no code implementations • 26 Aug 2017 • Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang
The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy.
8 code implementations • 18 Oct 2016 • Nicolas Papernot, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar
The approach combines, in a black-box fashion, multiple models trained with disjoint datasets, such as records from different subsets of users.
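As a rough illustration of that black-box combination, the sketch below aggregates the "teacher" models' votes by adding Laplace noise to the vote histogram and returning the noisy arg-max. The `gamma` noise parameter is illustrative; the paper's privacy accounting determines how the noise should actually be calibrated.

```python
import numpy as np

def noisy_teacher_vote(teacher_preds, num_classes, gamma, rng=None):
    """Aggregate predictions of independently trained teacher models
    (each trained on a disjoint data partition) into a single label by
    adding Laplace noise to the vote histogram and taking the arg-max.
    """
    rng = rng or np.random.default_rng()
    votes = np.bincount(teacher_preds, minlength=num_classes).astype(float)
    votes += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(votes.argmax())

# Example: 250 teachers voting over 10 classes for one query.
# label = noisy_teacher_vote(teacher_preds=[m.predict(x) for m in teachers],
#                            num_classes=10, gamma=0.05)
```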
1 code implementation • 4 Mar 2015 • Giulia Fanti, Vasyl Pihur, Úlfar Erlingsson
Techniques based on randomized response enable the collection of potentially sensitive data from clients in a privacy-preserving manner with strong local differential privacy guarantees.
Cryptography and Security
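The basic primitive these techniques build on is randomized response. Below is a minimal, self-contained sketch of the binary case and its unbiased frequency estimator; it is not the paper's full open-dictionary construction, only the underlying mechanism.

```python
import math
import random

def randomized_response(true_bit, epsilon):
    """Classic binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it.  This gives each
    client an epsilon-local-differential-privacy guarantee for that bit.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def estimate_frequency(reports, epsilon):
    """Unbiased estimate of the true fraction of 1s from noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```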
1 code implementation • 25 Jul 2014 • Úlfar Erlingsson, Vasyl Pihur, Aleksandra Korolova
Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees.
Cryptography and Security
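A minimal sketch of the client-side RAPPOR report: the client's value is hashed into a small Bloom filter, and each bit is then perturbed by "permanent" randomized response. The hash construction and parameter values here are illustrative assumptions, and the full protocol adds a second, per-report randomization step before anything is sent.

```python
import hashlib
import random

def rappor_report(value, num_bits=256, num_hashes=2, f=0.5):
    """Simplified one-round RAPPOR-style client report.

    With probability f each Bloom-filter bit is replaced by a uniformly
    random bit; otherwise it is kept.  The server later recovers
    aggregate frequencies from many such noisy reports.
    """
    # Encode the value into a Bloom filter of num_bits bits.
    bloom = [0] * num_bits
    for i in range(num_hashes):
        digest = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
        bloom[int(digest, 16) % num_bits] = 1
    # Permanent randomized response on every bit.
    report = []
    for bit in bloom:
        if random.random() < f:
            report.append(random.randint(0, 1))
        else:
            report.append(bit)
    return report
```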