We detail a new framework for privacy-preserving deep learning and discuss its benefits.
To improve real-world applications of machine learning, experienced modelers develop intuition about their datasets, their models, and how the two interact.
We train a recurrent neural network language model for next-word prediction in a smartphone virtual keyboard, using a distributed, on-device learning framework called federated learning.
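As a hedged illustration of how such on-device training composes with server-side aggregation, the sketch below implements a generic federated-averaging (FedAvg) round for a small LSTM next-word model in PyTorch. The architecture, hyperparameters, function names (`NextWordLM`, `client_update`, `fedavg_round`), and client data format are illustrative assumptions, not the setup of any particular production system.

```python
# Minimal FedAvg sketch for next-word prediction; all sizes are toy values.
import copy
import torch
import torch.nn as nn

class NextWordLM(nn.Module):
    """Tiny LSTM language model: token ids -> logits over the vocabulary."""
    def __init__(self, vocab_size=10_000, embed_dim=96, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        out, _ = self.lstm(self.embed(tokens))
        return self.head(out)                   # (batch, seq_len, vocab)

def client_update(global_model, batches, lr=0.5, epochs=1):
    """One round of local SGD on a single device's cached text."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for tokens, targets in batches:         # targets: tokens shifted by one
            opt.zero_grad()
            loss = loss_fn(model(tokens).flatten(0, 1), targets.flatten())
            loss.backward()
            opt.step()
    n_tokens = sum(t.numel() for _, t in batches)
    return model.state_dict(), n_tokens

def fedavg_round(global_model, client_results):
    """Server step: average client weights, weighted by local data size."""
    total = sum(n for _, n in client_results)
    avg = {k: torch.zeros_like(v) for k, v in global_model.state_dict().items()}
    for state, n in client_results:
        for k in avg:
            avg[k] += state[k] * (n / total)
    global_model.load_state_dict(avg)

if __name__ == "__main__":
    torch.manual_seed(0)
    g = NextWordLM()

    def fake_client_data():
        """Random token batches standing in for a device's typing history."""
        return [(torch.randint(0, 10_000, (4, 20)),
                 torch.randint(0, 10_000, (4, 20))) for _ in range(2)]

    results = [client_update(g, fake_client_data()) for _ in range(3)]
    fedavg_round(g, results)
    print("completed one simulated FedAvg round over 3 clients")
```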
Federated learning (FL) is a rapidly growing research field in machine learning.
We first show that the norm attack, a simple method that uses the norm of the gradients communicated between the parties, can largely reveal the ground-truth labels of the participants.
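To make the mechanism concrete, here is a small self-contained simulation of the gradient-norm idea, not the exact attack or setup of any specific paper: in a two-party split model trained on an imbalanced binary task, the per-example norm of the gradient passed back across the cut correlates with the label, so a threshold on that norm recovers most labels. The data, model halves, and threshold choice below are all hypothetical.

```python
# Sketch of gradient-norm label leakage in two-party split learning.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def make_batch(n):
    """Synthetic, imbalanced binary data (~10% positives, shifted features)."""
    y = (torch.rand(n) < 0.1).float()
    x = torch.randn(n, 16) + 0.5 * y.unsqueeze(1)
    return x, y

cut = torch.nn.Linear(16, 8)   # non-label party's half (sends activations up)
head = torch.nn.Linear(8, 1)   # label party's half (sends gradients back)
opt = torch.optim.SGD(list(cut.parameters()) + list(head.parameters()), lr=0.1)

# Train the split model briefly so its predictions reflect the class imbalance.
for _ in range(200):
    x, y = make_batch(256)
    loss = F.binary_cross_entropy_with_logits(head(cut(x)).squeeze(1), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "attack": the non-label party inspects the norm of the gradient it
# receives for each example's cut-layer activation; the rare positive class
# tends to get much larger gradients.
x, y = make_batch(1000)
acts = cut(x)
acts.retain_grad()             # keep the per-example gradient at the cut
loss = F.binary_cross_entropy_with_logits(head(acts).squeeze(1), y,
                                          reduction="sum")
loss.backward()
norms = acts.grad.norm(dim=1)

pos, neg = norms[y == 1].mean(), norms[y == 0].mean()
thresh = (pos + neg) / 2       # oracle threshold, for illustration only
acc = ((norms > thresh).float() == y).float().mean()
print(f"mean grad norm: positives={pos.item():.3f} negatives={neg.item():.3f}")
print(f"labels recovered from gradient norms alone: {acc.item():.1%}")
```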
FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches.
However, large model sizes impede training on resource-constrained edge devices.
However, in many social network scenarios, centralized federated learning is not applicable (e.g., a central agent or server connecting all users may not exist, or the communication cost to the central server is not affordable).
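A minimal sketch of one serverless alternative, gossip-style decentralized averaging, is shown below: each client trains locally and then averages parameters only with its neighbors on a communication graph, so no central server is involved. The ring topology, uniform mixing weights, and the `local_step` placeholder are toy assumptions for illustration.

```python
# Toy decentralized (gossip) averaging over a ring of clients.
import torch

torch.manual_seed(0)
n_clients, dim = 5, 4

# Ring topology: client i exchanges parameters only with its two neighbors.
neighbors = {i: [(i - 1) % n_clients, (i + 1) % n_clients]
             for i in range(n_clients)}

# Each client's local model, flattened to a vector for clarity.
params = [torch.randn(dim) for _ in range(n_clients)]

def local_step(theta):
    """Stand-in for a local SGD step on the client's private data."""
    return theta - 0.01 * torch.randn_like(theta)   # hypothetical gradient

for _ in range(50):
    params = [local_step(p) for p in params]
    # Gossip step: average own parameters with neighbors' (uniform weights).
    mixed = []
    for i in range(n_clients):
        group = [params[i]] + [params[j] for j in neighbors[i]]
        mixed.append(torch.stack(group).mean(dim=0))
    params = mixed

spread = torch.stack(params).std(dim=0).max()
print(f"max cross-client parameter disagreement after gossip: {spread.item():.4f}")
```

Repeated gossip rounds drive the clients toward consensus even though each one only ever communicates with its immediate neighbors, which is why this pattern suits networks with no affordable central coordinator.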
Federated learning (FL) provides a promising approach to private language modeling for intelligent, personalized keyboard suggestions by training models on distributed clients rather than on a central server.
Modern federated networks, such as those composed of wearable devices, mobile phones, or autonomous vehicles, generate massive amounts of data each day.