Generalization Bounds for Adversarial Contrastive Learning
Xin Zou, Weiwei Liu; 24(114):1−54, 2023.
Abstract
Adversarial attacks pose a significant challenge to deep networks, and adversarial training has emerged as a popular method for training robust models. Recently, adversarial training has been combined with contrastive learning, yielding Adversarial Contrastive Learning (ACL), which leverages unlabeled data and achieves promising robust performance. However, the theoretical understanding of ACL remains limited. In this study, we analyze the generalization performance of ACL via Rademacher complexity, focusing on linear models and multi-layer neural networks under ℓp attacks (p ≥ 1). Our theory shows that the average adversarial risk of the downstream tasks can be upper-bounded by the adversarial unsupervised risk of the upstream task. Experimental results support our theoretical findings.