Collaborative Filtering (CF) has been successfully utilized to assist users in discovering items of interest. However, existing CF methods are hindered by noisy data, which degrades the quality of recommendations. To address this problem, previous studies have employed adversarial learning to regularize user/item representations, thereby enhancing generalizability and robustness. These methods typically learn adversarial perturbations and model parameters jointly through a min-max optimization framework, which has two major drawbacks: 1) existing methods lack theoretical guarantees explaining why adding perturbations improves model generalizability and robustness; 2) solving the min-max optimization is time-consuming, since each iteration requires additional computations to update the perturbations, making such methods impractical for large-scale datasets in industrial settings.
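As a concrete illustration (the notation $\Theta$, $\Delta$, $\epsilon$, and $\lambda$ is introduced here for exposition and is not tied to any specific prior method), adversarial collaborative filtering of this kind typically optimizes a min-max objective of the form
\[
\min_{\Theta}\; \mathcal{L}(\Theta) \;+\; \lambda \max_{\|\Delta\| \le \epsilon} \mathcal{L}(\Theta + \Delta),
\]
where $\mathcal{L}$ is the recommendation loss (e.g., a pairwise ranking loss), $\Delta$ is a perturbation on the user/item representations bounded by radius $\epsilon$, and $\lambda$ balances the clean and adversarial terms; the inner maximization is what incurs the extra per-iteration cost.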
In this paper, we introduce Sharpness-aware Collaborative Filtering (SharpCF), a simple yet effective method that performs adversarial training without incurring extra computational cost over the base optimizer. To achieve this, we first revisit existing adversarial collaborative filtering and examine its connection to recent advances in Sharpness-aware Minimization. Our analysis reveals that adversarial training seeks model parameters lying in neighborhoods of uniformly low loss around the optimal parameters, thereby enhancing generalizability. To reduce computational overhead, SharpCF introduces a novel trajectory loss that measures the alignment between the current weights and past weights. Experimental results on real-world datasets demonstrate that SharpCF achieves superior performance with nearly zero additional computational cost compared to adversarial training.
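For reference, Sharpness-aware Minimization looks for parameters whose entire $\rho$-neighborhood has low loss,
\[
\min_{\Theta}\; \max_{\|\Delta\|_2 \le \rho} \mathcal{L}(\Theta + \Delta),
\]
which shares the min-max structure above and underlies the flat-minima interpretation of adversarial training. As a hedged sketch of an alignment term of the kind described here (an illustration in our own notation, not necessarily the paper's exact definition), a trajectory loss could take a form such as $\mathcal{L}_{\text{traj}}(\Theta_t) = \|\Theta_t - \Theta_{t-\tau}\|_2^2$, penalizing the deviation of the current weights $\Theta_t$ from weights $\Theta_{t-\tau}$ recorded at an earlier training step, so that no inner maximization needs to be solved.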