The following content is about a paper titled “Learning Collaborative Information Dissemination with Graph-based Multi-Agent Reinforcement Learning,” written by Raffaele Galliera and three other authors. The paper discusses the importance of efficient and reliable information dissemination in modern communication systems, particularly in domains such as disaster response, autonomous vehicles, and sensor networks. The authors propose a Multi-Agent Reinforcement Learning (MARL) approach in which each agent independently decides whether to forward a message, decentralizing the dissemination process and making it more efficient and collaborative.

The paper introduces a Decentralized Partially Observable Markov Decision Process (Dec-POMDP) formulation of information dissemination and applies Graph Convolutional Reinforcement Learning with Graph Attention Networks (GAT) to capture essential network features. The authors present two approaches, L-DGN and HL-DGN, which differ in the information exchanged among agents. The performance of these decentralized approaches is compared against a widely used Multi-Point Relay (MPR) heuristic, and the authors show that their trained policies effectively cover the network while bypassing the MPR set selection process. The paper presents this approach as a promising step toward strengthening the resilience of real-world broadcast communication infrastructures through learned, collaborative information dissemination.
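To make the graph-attention component concrete, the sketch below implements a single GAT-style attention layer over a small communication graph. This is a minimal illustration of the general GAT mechanism (attention coefficients computed per edge, then softmax-normalized over each node's neighborhood), not the authors' actual L-DGN or HL-DGN architecture; the function name, dimensions, and the toy 4-node topology are assumptions for demonstration.

```python
import numpy as np

def gat_layer(H, adj, W, a, alpha=0.2):
    """Minimal single-head graph-attention layer (illustrative sketch).

    H:   (N, F) node features; adj: (N, N) boolean adjacency
    W:   (F, F') shared linear projection; a: (2*F',) attention vector
    """
    Z = H @ W                                   # project node features
    N = Z.shape[0]
    e = np.full((N, N), -np.inf)                # -inf masks non-edges
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                s = a @ np.concatenate([Z[i], Z[j]])
                e[i, j] = s if s > 0 else alpha * s   # LeakyReLU
    # softmax over each node's neighborhood
    e -= e.max(axis=1, keepdims=True)
    exp_e = np.exp(e)
    attn = exp_e / exp_e.sum(axis=1, keepdims=True)
    return attn @ Z                             # attention-weighted aggregation

# Toy 4-node line network with self-loops (hypothetical example)
rng = np.random.default_rng(0)
N, F, Fp = 4, 3, 2
H = rng.normal(size=(N, F))
adj = np.eye(N, dtype=bool)
for i, j in [(0, 1), (1, 2), (2, 3)]:
    adj[i, j] = adj[j, i] = True
W = rng.normal(size=(F, Fp))
a = rng.normal(size=(2 * Fp,))
out = gat_layer(H, adj, W, a)                   # (4, 2) updated node features
```

In the paper's setting, such a layer would let each agent weight its neighbors' observations when deciding whether to relay a message; the exact observation contents and number of attention heads depend on the L-DGN/HL-DGN design described in the paper.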

The submission history indicates that the paper was first submitted on August 25, 2023, and it is available for download as a PDF.