FL algorithms
FL algorithms, such as FedSGD, FedAvg, and Adaptive Federated Optimization, are central to training ML models across distributed clients while keeping each participant's raw data local, which helps preserve privacy and security. In this section, we will explore these algorithms and their key characteristics.
FedSGD
Federated stochastic gradient descent (FedSGD) is a fundamental FL algorithm that extends traditional SGD optimization to the federated setting. In each round, every participating client (entity) computes the gradient of the loss on its local data using the current global model and sends that gradient to the central server. The server aggregates the client gradients, typically as a weighted average by local dataset size, and applies a single gradient step to update the global model parameters. FedSGD scales to large numbers of clients, but because each round produces only one global update, it requires many communication rounds, and its convergence can degrade when client data is non-IID.
Figure 6.7 – The FedSGD model weights exchange with the server
Let’s look at the FedSGD algorithm:
Server-side algorithm...
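The server's job each round is to average the clients' gradients and apply one SGD step to the global weights. The following is a minimal NumPy sketch of such a round for a linear model trained with mean squared error; the model, the synthetic data, and the helper names client_gradient and fedsgd_round are illustrative assumptions, not part of any FL framework:

import numpy as np

def client_gradient(weights, X, y):
    # Client side: MSE gradient of a linear model y ~ Xw on local data
    preds = X @ weights
    return 2.0 * X.T @ (preds - y) / len(y)

def fedsgd_round(weights, clients, lr=0.1):
    # Server side: average the client gradients, weighted by local
    # dataset size, then take a single global SGD step
    total = sum(len(y) for _, y in clients)
    avg_grad = sum(len(y) / total * client_gradient(weights, X, y)
                   for X, y in clients)
    return weights - lr * avg_grad

# Three clients with synthetic local datasets from the same linear model
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(100):              # one communication round per global update
    w = fedsgd_round(w, clients)
print(w)                          # converges toward true_w

Note that every global update costs a full communication round, which is exactly the communication overhead discussed previously; FedAvg reduces it by letting each client take several local steps before synchronizing with the server.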