Benchmarking Graph Neural Networks
Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson; 24(43):1−48, 2023.
Abstract
Over the past few years, graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from graph data. The field has seen rapid methodological advances, with successful applications across domains such as computer science, mathematics, biology, physics, and chemistry. For a field to become reliable and mainstream, however, benchmarks that quantify progress are essential. In March 2020, we released a benchmark framework that (i) includes a diverse collection of mathematical and real-world graphs, (ii) enables fair model comparison by enforcing a shared parameter budget, (iii) provides an open-source, user-friendly, and reproducible code infrastructure, and (iv) supports experimentation with new theoretical ideas. As of December 2022, the benchmark's GitHub repository has 2,000 stars and 380 forks, reflecting its utility and widespread adoption in the GNN community. In this paper, we present an updated version of the benchmark, highlight its key features, and introduce AQSOL, a medium-sized molecular dataset similar to the popular ZINC dataset but with a real-world, experimentally measured chemical target. We also discuss how the benchmark framework can be used to explore new GNN designs and gain insights. As an illustration of its value, we examine graph positional encoding (PE) in GNNs, a technique introduced through this benchmark that has since motivated the exploration of more powerful PE methods for Transformers and GNNs within a robust experimental setup.
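As context for the PE discussion, a common form of graph positional encoding uses the eigenvectors of the normalized graph Laplacian with the smallest non-trivial eigenvalues as extra node features. The sketch below is an illustrative NumPy implementation of this general idea, not code from the benchmark itself; the function name and dense-matrix setup are assumptions for the example.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Illustrative Laplacian eigenvector positional encodings.

    adj: dense (n, n) symmetric adjacency matrix (assumed: no isolated nodes).
    Returns the k eigenvectors of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2} with the smallest non-trivial eigenvalues,
    one k-dimensional positional feature vector per node.
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)  # assumes every node has degree > 0
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # Skip the trivial eigenvector (eigenvalue ~ 0); keep the next k columns.
    return eigvecs[:, 1:k + 1]

# Example: a 4-node cycle graph.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, 2)  # shape (4, 2): one 2-d positional vector per node
```

Since eigenvectors are defined only up to sign, implementations that use such encodings typically randomly flip the sign of each eigenvector during training so models do not overfit to an arbitrary sign choice.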