A General Theory for Federated Optimization with Asynchronous and Heterogeneous Clients Updates
Authors: Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi; Published in 2023, Volume 24, Issue 110, Pages 1-43.
Abstract
This study presents a novel framework for analyzing the optimization process of asynchronous federated learning with delays in gradient updates. The framework extends the existing FedAvg aggregation scheme by introducing stochastic aggregation weights that account for the variability in clients’ update times, which may be caused by differences in hardware capabilities. The proposed formalism applies to the general federated learning setting, where clients have heterogeneous datasets and perform at least one local step of stochastic gradient descent (SGD). The study demonstrates the convergence of this framework and provides sufficient conditions under which the resulting minimum is the optimum of the federated problem. Furthermore, the framework applies to existing optimization schemes, including centralized learning, FedAvg, asynchronous FedAvg, and FedBuff. The theoretical insights provided in this study can guide the design of federated learning experiments in heterogeneous conditions. Specifically, the study introduces FedFix, a novel extension of FedAvg that enables efficient asynchronous federated training while maintaining the convergence stability of synchronous aggregation. Experiments validate the theory, showing that asynchronous FedAvg achieves fast convergence at the expense of stability, and demonstrate the improvements of FedFix over synchronous and asynchronous FedAvg.
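To make the idea of stochastic aggregation weights concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: a server aggregation step in which each client's contribution is scaled by a per-round weight. With deterministic uniform weights this reduces to a FedAvg-style update; drawing the weights from a random "update arrived" indicator mimics the asynchronous, hardware-heterogeneous setting the abstract describes. All names (aggregate, w_sync, w_async, arrival probabilities) are assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def aggregate(global_model, client_updates, weights):
    """One server aggregation step with (possibly stochastic) per-client weights.

    global_model:   current server parameters, shape (d,)
    client_updates: list of client parameter deltas, each shape (d,)
    weights:        aggregation weights for this round (may be random)
    """
    delta = sum(w * u for w, u in zip(weights, client_updates))
    return global_model + delta


# Toy example: 3 clients, 5-dimensional model.
d, n_clients = 5, 3
theta = np.zeros(d)
updates = [0.1 * rng.normal(size=d) for _ in range(n_clients)]

# Synchronous FedAvg-style round: deterministic, uniform weights.
w_sync = np.full(n_clients, 1.0 / n_clients)

# Asynchronous-style round: only clients whose update has arrived contribute.
# The Bernoulli "arrived" indicator makes the weights stochastic, standing in
# for heterogeneous client response times (hypothetical arrival probabilities).
arrived = rng.random(n_clients) < np.array([0.9, 0.5, 0.2])
w_async = arrived / max(arrived.sum(), 1)

theta = aggregate(theta, updates, w_async)
print(theta)
```

In this sketch, different choices of the weight distribution play the role of different schemes (synchronous, asynchronous, or fixed-interval aggregation); the paper's analysis of which choices preserve convergence to the federated optimum is what the framework formalizes.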