no code implementations • 29 Oct 2021 • Abhin Shah, Wei-Ning Chen, Johannes Ballé, Peter Kairouz, Lucas Theis
Compressing the output of ε-locally differentially private (LDP) randomizers naively leads to suboptimal utility.
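For background on what an ε-LDP randomizer is, the canonical example is binary randomized response. The sketch below is illustrative only (it is not the paper's compression scheme, and the function names are hypothetical): each user reports their true bit with probability e^ε / (e^ε + 1) and the flipped bit otherwise, and the aggregator debiases the noisy mean.

```python
import math
import random

def randomized_response(bit, epsilon):
    """Binary randomized response: a canonical epsilon-LDP randomizer.

    Reports the true bit with probability e^eps / (e^eps + 1),
    and the flipped bit otherwise.
    """
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p_true else 1 - bit

def debiased_mean(reports, epsilon):
    """Unbiased estimate of the true mean from the noisy reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    mean_noisy = sum(reports) / len(reports)
    return (mean_noisy - (1.0 - p)) / (2.0 * p - 1.0)
```

Each report costs a full bit to transmit even though it carries far less than a bit of information about the input, which is the kind of naive-compression gap the abstract refers to.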
1 code implementation • 3 Mar 2019 • Jasmine Collins, Johannes Ballé, Jonathon Shlens
We find that a standardization loss accelerates training on both small- and large-scale image classification experiments, works with a variety of architectures, and is largely robust to training across different batch sizes.
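One plausible form of a standardization loss is an auxiliary penalty that pushes each feature's batch statistics toward zero mean and unit variance. The sketch below is an assumption-laden illustration of that idea, not the paper's exact formulation:

```python
import numpy as np

def standardization_loss(activations):
    """Hedged sketch of a standardization loss (details assumed, not
    taken from the paper): penalize per-feature batch statistics that
    deviate from zero mean and unit variance.

    activations: array of shape (batch, features)
    """
    mean = activations.mean(axis=0)
    var = activations.var(axis=0)
    # Squared deviation of the mean from 0, plus squared log-variance
    # (which is 0 exactly when the variance is 1).
    return float((mean ** 2).mean() + (np.log(var + 1e-8) ** 2).mean())
```

Added to the task loss with a small weight, such a term nudges intermediate activations toward standardized statistics without requiring explicit normalization layers.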