PAC-learning for Strategic Classification

Ravi Sundaram, Anil Vullikanti, Haifeng Xu, Fan Yao; 24(192):1−38, 2023.

Abstract

The study of strategic or adversarial manipulation of test data to fool a classifier has recently attracted much attention. Most previous works focus on two extreme situations: fully adversarial data points or data points that always prefer the positive label. This paper presents a generalized framework that encompasses both situations by considering strategic agents with heterogeneous preferences, and introduces the notion of strategic VC-dimension (SVC) to capture PAC-learnability in this general strategic setup. SVC generalizes the notion of adversarial VC-dimension (AVC) introduced by Cullina et al. (2018). The framework is then instantiated for strategic linear classification: the statistical learnability of linear classifiers is fully characterized by determining their SVC, and their computational tractability by determining the complexity of the corresponding empirical risk minimization (ERM) problem. Notably, the SVC of linear classifiers is always upper bounded by their standard VC-dimension; this characterization extends the AVC bound for linear classifiers of Cullina et al. (2018). The paper also briefly examines the power of randomization in the strategic classification setup: randomization may significantly improve accuracy in general, but it provides no advantage in the special case of adversarial classification with zero manipulation cost.
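
As a rough illustration of the setup described above (the notation below, $r_x$, $c$, and $\Delta_h$, is assumed for illustration and is not taken from the paper), a strategic agent with true features $x$ holds a preference $r_x(\cdot)$ over the predicted label and pays a cost $c(x,x')$ to report features $x'$ in place of $x$; facing a published classifier $h$, it best-responds by reporting

% Strategic best response (notation assumed for illustration, not from the paper):
\[
  \Delta_h(x) \;\in\; \operatorname*{arg\,max}_{x'} \bigl[\, r_x\bigl(h(x')\bigr) - c(x, x') \,\bigr],
\]
% and the learner's strategic risk is evaluated on the manipulated reports:
\[
  \mathrm{err}(h) \;=\; \Pr_{(x,y)\sim\mathcal{D}} \bigl[\, h\bigl(\Delta_h(x)\bigr) \neq y \,\bigr].
\]

In this hedged sketch, agents that always prefer the positive label correspond to $r_x$ rewarding the label $+1$, while fully adversarial agents correspond to $r_x$ rewarding the label opposite to the true label, with $c$ encoding the allowed perturbation set (e.g., zero inside a ball and infinite outside).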
