Google engineers work towards large scale federated learning

  • 4 min read
  • 22 Feb 2019


In a paper published on February 4, Google engineers laid out their plans for taking federated learning to scale, covering the high-level design, challenges, solutions, and applications. Federated learning, first introduced by Google in 2017, trains models using data from a large number of computing devices, such as smartphones, instead of a centralized data source.

Federated learning can help with privacy


Federated learning is beneficial because it addresses privacy concerns. In this system, Android phones hold the data, which is used locally but never uploaded to any server. A deep neural network is trained with TensorFlow on the data stored on each phone. The Federated Averaging algorithm by Brendan McMahan takes an approach similar to synchronous training: the weights of the per-device neural networks are combined in the cloud using Federated Averaging, creating a global model which is then pushed back to the phones as results/desirable actions.
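
As a rough illustration of the aggregation step, here is a minimal Federated Averaging sketch in Python (the function and argument names are ours, not Google's): each device's weights contribute to the global model in proportion to how many examples it trained on locally.

```python
import numpy as np

def federated_averaging(client_weights, client_num_examples):
    """Combine per-device weights into a global model.

    client_weights: one list of layer arrays (np.ndarray) per device.
    client_num_examples: how many local examples each device trained
        on; a device's contribution is weighted by this count.
    """
    total = sum(client_num_examples)
    num_layers = len(client_weights[0])
    global_weights = []
    for layer in range(num_layers):
        # Weighted average of this layer across all reporting devices.
        avg = sum(
            (n / total) * weights[layer]
            for weights, n in zip(client_weights, client_num_examples)
        )
        global_weights.append(avg)
    return global_weights
```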

To strengthen privacy further, techniques like differential privacy and Secure Aggregation are applied. The paper also addresses challenges like time zone differences, connectivity issues, and interrupted execution. The work is mature enough that the system runs in production across tens of millions of devices, and the team is now working towards supporting billions of devices.

The training protocol


In this system, devices communicate their availability to the Federated Learning server, and the server selects devices to run a task; a subset of the available devices is chosen for each task. The Federated Learning server tells the selected devices what computation to run via a plan, which consists of a TensorFlow graph and instructions for executing it. Training takes place in three phases (a simplified server-side sketch follows the list):

  1. Selection of the devices that meet eligibility criteria
  2. Configuring the server with simple or Secure Aggregation
  3. Reporting from the devices, where the round is committed once enough devices report their updates back
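
A simplified server-side view of one training round might look like the sketch below; the Device class and all names are illustrative, not taken from Google's implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Device:
    """Stand-in for a phone that has announced its availability."""
    device_id: int
    charging: bool = True
    on_unmetered_network: bool = True
    # Weight update from local training; empty until the device reports.
    update: list = field(default_factory=list)

    def is_eligible(self):
        # Eligibility criteria protect the user: e.g. only train while
        # charging and on an unmetered connection.
        return self.charging and self.on_unmetered_network

def run_round(checked_in, plan, target_count, min_reports):
    # Phase 1 -- Selection: pick a subset of eligible devices.
    eligible = [d for d in checked_in if d.is_eligible()]
    if len(eligible) < target_count:
        return None  # not enough devices; abandon the round
    selected = random.sample(eligible, target_count)

    # Phase 2 -- Configuration: send each selected device the plan
    # (the TensorFlow graph plus instructions for executing it).
    for device in selected:
        device.plan = plan

    # Phase 3 -- Reporting: collect updates; commit the round only
    # if enough devices report back before the deadline.
    reports = [d.update for d in selected if d.update]
    if len(reports) < min_reports:
        return None  # too many dropouts; abandon the round
    return reports  # handed to aggregation, e.g. Federated Averaging
```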



Source: Towards Federated Learning at Scale: System Design



Devices maintain a repository of the collected data, and applications are responsible for providing that data to the Federated Learning runtime as an example store. The Federated Learning server is designed to operate across many orders of magnitude of scale. Each round can mean updates ranging from kilobytes to tens of megabytes flowing between the devices and the server.
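
As a purely hypothetical picture of what an example store could look like, here is a small Python sketch; the class, the method names, and the expiration policy are our illustration, not an API from the paper.

```python
import time

class ExampleStore:
    """Hypothetical on-device buffer that an app fills with training
    examples for the Federated Learning runtime."""

    def __init__(self, max_age_days=30):
        self.max_age = max_age_days * 24 * 3600
        self.examples = []  # (timestamp, features, label) tuples

    def add(self, features, label):
        # The app appends examples as the user generates data.
        self.examples.append((time.time(), features, label))

    def fetch_for_training(self):
        # The runtime reads only sufficiently recent examples;
        # expiring old data keeps the on-device store bounded.
        cutoff = time.time() - self.max_age
        self.examples = [e for e in self.examples if e[0] >= cutoff]
        return [(features, label) for _, features, label in self.examples]
```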

Data collection


To ensure the system does not harm the phone's battery life or performance, various device analytics are collected and analyzed in the cloud. The logs don't contain any personally identifiable information.

Secure aggregation


Secure Aggregation uses encryption to make individual device updates uninspectable. The team plans to use it for protection against threats in data centers; with Secure Aggregation, the data would remain encrypted even while in memory.
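
The core idea behind Secure Aggregation can be sketched with pairwise-cancelling masks: each pair of devices agrees on a random mask that one adds and the other subtracts, so the server can recover the sum of updates without seeing any individual update. Below is a minimal Python illustration; the key agreement and dropout handling of the real protocol are omitted.

```python
import random

MODULUS = 2 ** 32  # updates are quantized to integers mod this value

def masked_updates(updates):
    """Mask each device's integer update so the server sees only the sum.

    In the real protocol each pairwise mask is derived from a shared
    key (e.g. via Diffie-Hellman); here we draw it directly.
    """
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = [random.randrange(MODULUS) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] = (masked[i][k] + mask[k]) % MODULUS
                masked[j][k] = (masked[j][k] - mask[k]) % MODULUS
    return masked

def aggregate(masked):
    # The sum of the masked updates equals the sum of the raw updates
    # mod MODULUS, yet no single masked update reveals its sender's data.
    dim = len(masked[0])
    return [sum(m[k] for m in masked) % MODULUS for k in range(dim)]

# e.g. aggregate(masked_updates([[1, 2], [3, 4], [5, 6]])) == [9, 12]
```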

Challenges of federated learning


Compared to training on a centralized dataset, federated learning poses a number of challenges. The training data is not inspectable, so tooling is required to work with proxy data instead. Models cannot be run interactively and must be compiled before being deployed to the Federated Learning server. Model resource consumption and runtime compatibility also come into the picture when working with many devices in real time.

Applications of Federated Learning


Federated learning is best suited for cases where the data on devices is more relevant than data on servers: for example, ranking items for better navigation, suggestions for the on-device keyboard, and next-word prediction. It has already been implemented on the Google Pixel and in Gboard.

Future work includes eliminating the bias caused by restrictions in device selection, algorithms that support better parallelism (more devices in one round), avoiding retraining of already-trained tasks on devices, and compression to save bandwidth.

Federated computation, not federated learning


Notably, the authors do not explicitly mention machine learning anywhere in the paper. They believe that the applications of such a system are not limited to machine learning, and Federated Computation is the term they want to use for the broader concept.

Federated computation and edge computing


Federated learning and edge computing are very similar, but there are subtle differences in their purpose. Federated learning solves problems by assigning specific tasks to endpoint smartphones, whereas edge computing processes predefined tasks at end nodes, for example IoT cameras. Federated learning decentralizes the data used for training, while edge computing decentralizes the task computation across various devices.

For more details on the architecture and how it works, you can check out the research paper.

Technical and hidden debts in machine learning – Google engineers give their perspective

Researchers introduce a machine learning model where the learning cannot be proved

What if AIs could collaborate using human-like values? DeepMind researchers propose a Hanabi platform.