1 code implementation • 5 Oct 2022 • Alexandre Gilotte, Ahmed Ben Yahmed, David Rohde
Aggregating a dataset and then injecting noise is a simple and common way to release differentially private data. However, aggregated data, even without noise, is not a suitable input for machine learning classifiers. In this work, we show how a new model, similar to a logistic regression, can be learned from aggregated data alone by approximating the unobserved feature distribution with a maximum entropy hypothesis.
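As background for the release mechanism the abstract describes, here is a minimal sketch of the standard "aggregate, then add noise" step using the Laplace mechanism. The function name `noisy_counts` and the bucket counts are illustrative assumptions, not taken from the paper; the paper's maximum-entropy learning procedure is not reproduced here.

```python
import numpy as np

def noisy_counts(counts, epsilon, sensitivity=1.0, seed=None):
    """Release aggregated counts with Laplace noise.

    Each count query over disjoint buckets has sensitivity 1 (one record
    changes one count by at most 1), so Laplace(sensitivity / epsilon)
    noise per bucket yields epsilon-differential privacy.
    """
    rng = np.random.default_rng(seed)
    scale = sensitivity / epsilon
    return counts + rng.laplace(loc=0.0, scale=scale, size=len(counts))

# Hypothetical aggregated dataset: record counts per feature bucket.
true_counts = np.array([120.0, 45.0, 310.0])
private_counts = noisy_counts(true_counts, epsilon=1.0, seed=0)
```

A smaller `epsilon` means a larger noise scale and stronger privacy; the learned model must then cope with both the aggregation and this added noise.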