Decentralized federated learning (DFL) has gained popularity owing to its practicality in a variety of applications. However, training a shared model across a large number of nodes in DFL is more challenging than in the centralized setting, because there is no central server to coordinate the training process. The challenge becomes more pronounced when the distributed nodes have limited communication or computational resources, which can make training in DFL highly inefficient and unstable. In this paper, we address these challenges by proposing a novel algorithm based on the inexact alternating direction method (iADM) framework.
Our algorithm trains a shared model under a sparsity constraint. This constraint allows us to leverage one-bit compressive sensing (1BCS), so that only one-bit information needs to be transmitted among neighboring nodes. In addition, communication between neighboring nodes occurs only at certain steps, reducing the number of communication rounds and improving communication efficiency. Moreover, each node selects only a subset of its neighbors to participate in training, making the algorithm robust against stragglers.
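To make the communication scheme concrete, the following is a minimal sketch, not the paper's exact protocol: it assumes sign-based one-bit compression of a local sparse vector and uniform random neighbor subsampling; the function names `one_bit_compress` and `select_neighbors` are illustrative only.

```python
import numpy as np

def one_bit_compress(x):
    """Transmit only the sign pattern of the local vector (one bit per entry)."""
    return np.sign(x).astype(np.int8)

def select_neighbors(all_neighbors, k, rng):
    """Pick a subset of k neighbors per communication step,
    so slow or unreachable nodes (stragglers) are simply skipped."""
    k = min(k, len(all_neighbors))
    return rng.choice(all_neighbors, size=k, replace=False)

rng = np.random.default_rng(0)
local_model = np.array([0.7, -0.2, 0.0, 1.3, -0.5])
msg = one_bit_compress(local_model)               # e.g. [ 1 -1  0  1 -1]
active = select_neighbors(np.arange(10), 3, rng)  # e.g. 3 of 10 neighbors
```

In such a scheme, each node exchanges only the sign pattern with the selected neighbors, which is what keeps the per-round communication cost to one bit per coordinate.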
Furthermore, the algorithm achieves high computational efficiency by computing expensive quantities only once over several consecutive steps and by solving subproblems inexactly via closed-form solutions. Numerical experiments demonstrate the effectiveness of the algorithm in terms of both communication and computation.
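As an illustration of a closed-form inexact subproblem solve under a sparsity constraint, the sketch below shows one common pattern: a gradient-type step followed by hard thresholding, which keeps only the s largest-magnitude entries. This is an assumption for exposition, with illustrative step size `eta` and sparsity level `s`, not necessarily the exact update used in the paper.

```python
import numpy as np

def hard_threshold(w, s):
    """Closed-form projection onto the sparse set {w : ||w||_0 <= s}."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]  # indices of the s largest magnitudes
    out[keep] = w[keep]
    return out

def inexact_step(w, grad, eta, s):
    """One inexact subproblem solve: gradient step, then hard thresholding."""
    return hard_threshold(w - eta * grad, s)

w = np.array([0.9, -0.1, 0.4, -1.2, 0.05])
g = np.array([0.3, -0.2, 0.1, 0.4, -0.3])
w_next = inexact_step(w, g, eta=0.5, s=2)  # only 2 nonzero entries survive
```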
Overall, the proposed algorithm addresses the challenges of training a shared model among distributed nodes in DFL: it improves communication efficiency, accommodates limited communication and computational resources, and attains high computational efficiency.