Graph neural networks (GNNs) have been widely used in applications such as social networks, recommendation systems, and online web services. However, GNNs are susceptible to adversarial attacks, which can significantly degrade their effectiveness. Previous attack methods rely on gradient-based meta-learning to select and flip the single edge with the highest attack score at each step, which makes them computationally expensive.

To address this issue, we propose a new attack method called Differentiable Graph Attack (DGA) that leverages continuous relaxation and parameterization of the graph structure. DGA efficiently generates effective attacks without the need for costly retraining. Compared to state-of-the-art methods, DGA achieves similar attack performance with significantly less training time and GPU memory usage on benchmark datasets.
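To make the idea of continuous relaxation concrete, the sketch below shows one common way to relax a binary adjacency matrix into a differentiable one: each potential edge flip is parameterized by a learnable logit, and a sigmoid turns the logits into flip probabilities. This is a hypothetical illustration under our own assumptions, not the exact parameterization used by DGA.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relaxed_adjacency(A, theta):
    """Continuously relax a binary adjacency matrix A.

    theta holds learnable logits; sigmoid(theta) in (0, 1) is the
    probability of flipping each entry. Flipping an existing edge
    (A=1) removes it and flipping a non-edge (A=0) adds it, so the
    relaxed matrix is A' = A + (1 - 2A) * sigmoid(theta).
    Because A' is differentiable in theta, an attack loss can be
    optimized by gradient descent instead of discrete edge search.
    """
    p = sigmoid(theta)
    p = (p + p.T) / 2.0  # keep the perturbation symmetric (undirected graph)
    return A + (1.0 - 2.0 * A) * p

# Toy 3-node undirected graph with a single edge (0, 1).
A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
theta = np.zeros((3, 3))  # sigmoid(0) = 0.5: maximal uncertainty everywhere
A_relaxed = relaxed_adjacency(A, theta)
```

After optimizing `theta` against an attack objective, the relaxed entries can be rounded or sampled back to a discrete perturbed graph.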

We also conduct extensive experiments analyzing the transferability of DGA across different graph models and its robustness against commonly used defense mechanisms.