Chapter 10: Federated Learning and Edge Devices
When discussing DNN training, we have mainly focused on high-performance computers with accelerators such as GPUs, or on traditional data centers. Federated learning takes a different approach: it trains models directly on edge devices, which usually have far less computation power than GPUs.
Before we go any further, we want to list our assumptions:
- We assume the computation power of mobile chips is much less than that of traditional hardware accelerators such as GPUs/TPUs.
- We assume mobile devices often have a limited computation budget due to their limited battery capacity.
- We assume the model training/serving platform for mobile devices differs from the one used in data centers.
- We assume users are not willing to share their local personal data directly with the service provider.
- We assume the communication bandwidth between mobile devices and the service provider is limited...
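Given these assumptions, the canonical federated learning procedure is federated averaging (FedAvg): each client trains a local copy of the model on its private data, uploads only the model weights, and the server averages them weighted by client data size. The sketch below illustrates this with an assumed toy linear-regression model and synthetic data in NumPy; the data, model, and hyperparameters are illustrative choices, not taken from this chapter.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local SGD steps on its private data (never shared)."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(clients, w, rounds=20):
    """Server loop: only model weights travel between clients and server."""
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        # Each client computes an update locally on its own data.
        updates = [(local_update(w, X, y), len(y)) for X, y in clients]
        # Weighted average by client dataset size.
        w = sum(n * wi for wi, n in updates) / total
    return w

# Synthetic example: three clients whose data share one underlying model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = fed_avg(clients, np.zeros(2))
print(np.round(w, 2))  # should converge toward [2., -1.]
```

Note that the raw arrays `X, y` never leave `local_update`; in a real deployment the weight upload itself would also be compressed or secured, since (per the last assumption above) the uplink bandwidth is limited.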