The Implicit Bias of Benign Overfitting
By Ohad Shamir; Published in 2023; Volume 24, Issue 113, Pages 1–40
Abstract
Benign overfitting, a phenomenon where a predictor perfectly fits noisy training data while achieving near-optimal expected loss, has received much attention in recent years, but its theoretical understanding remains largely limited to well-specified linear regression settings. This paper presents several new results on when benign overfitting can and cannot be expected to occur, for both regression and classification tasks. The study considers a prototypical and rather generic data model for benign overfitting of linear predictors, in which an arbitrary input distribution of some fixed dimension is concatenated with a high-dimensional distribution. For linear regression that is not necessarily well-specified, it is proven that the minimum-norm interpolating predictor (which standard training methods converge to) is generally biased towards an inconsistent solution, hence benign overfitting will generally not occur. The analysis is then extended beyond standard linear regression by showing that the existence of benign overfitting on certain regression problems precludes its existence on other regression problems. Turning to classification problems, the situation is shown to be much more favorable: it is proven that the max-margin predictor (to which standard training methods are known to converge in direction) is asymptotically biased towards minimizing a weighted squared hinge loss. This reduces the question of benign overfitting in classification to the simpler question of whether this loss is a good surrogate for the misclassification error, and it is used to establish benign overfitting in some new settings.
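For reference, the two predictors the abstract refers to have standard definitions, stated below for training data (x_i, y_i), i = 1, …, n, together with the squared hinge loss max(0, 1 − m)² on which the classification surrogate is built (the paper's specific weighting is not reproduced here):

```latex
\begin{aligned}
\hat{w}_{\mathrm{mn}} &= \operatorname*{arg\,min}_{w} \;\|w\|_2
  \quad \text{s.t. } \langle w, x_i\rangle = y_i \;\; \forall i \in \{1,\dots,n\}
  && \text{(minimum-norm interpolator, regression)}\\
\hat{w}_{\mathrm{mm}} &= \operatorname*{arg\,min}_{w} \;\|w\|_2
  \quad \text{s.t. } y_i\,\langle w, x_i\rangle \ge 1 \;\; \forall i \in \{1,\dots,n\}
  && \text{(max-margin predictor, classification)}\\
\ell_{\mathrm{sq\text{-}hinge}}(m) &= \max(0,\, 1 - m)^2
  && \text{(squared hinge loss)}
\end{aligned}
```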
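To make the data model concrete, the following is a minimal numerical sketch (not taken from the paper): inputs are formed by concatenating a fixed-dimensional Gaussian component with a high-dimensional Gaussian component, and the minimum-norm interpolating predictor is computed in closed form. All dimensions, scalings, and variable names are illustrative assumptions, and a well-specified linear target is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d_fixed, d_high = 50, 5, 2000   # samples, fixed-dim part, high-dim part
w_star = rng.normal(size=d_fixed)  # ground-truth signal lives in the fixed part

def sample(m):
    z = rng.normal(size=(m, d_fixed))                   # fixed-dimensional component
    e = rng.normal(size=(m, d_high)) / np.sqrt(d_high)  # high-dimensional component
    x = np.concatenate([z, e], axis=1)                  # concatenated input
    y = z @ w_star + 0.1 * rng.normal(size=m)           # noisy linear target
    return x, y

x_tr, y_tr = sample(n)
# Minimum-norm interpolator: w = X^T (X X^T)^{-1} y
# (X X^T is invertible almost surely since d_fixed + d_high > n)
w = x_tr.T @ np.linalg.solve(x_tr @ x_tr.T, y_tr)

x_te, y_te = sample(10_000)
print("train MSE:", np.mean((x_tr @ w - y_tr) ** 2))  # ~0: the noisy data is fit perfectly
print("test  MSE:", np.mean((x_te @ w - y_te) ** 2))
```

In this well-specified setup the interpolator fits the noisy training labels exactly while still achieving low test error; the paper's negative result concerns regression problems that are not well-specified, where the same minimum-norm bias leads to an inconsistent solution.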