Non-Asymptotic Guarantees for Robust Statistical Learning under Infinite Variance Assumption

Lihu Xu, Fang Yao, Qiuran Yao, Huiming Zhang; 24(92):1−46, 2023.

Abstract

The field of statistics and machine learning has seen growing interest in developing robust estimators for models with heavy-tailed data of bounded variance, but few works have addressed the unbounded-variance case. This paper introduces two robust estimators: the ridge log-truncated M-estimator and the elastic net log-truncated M-estimator. The first is designed for convex regression problems, such as quantile regression and generalized linear models, while the second is applied to high-dimensional non-convex learning problems, such as regression via deep neural networks. Simulations and real data analyses demonstrate the robustness of the log-truncated estimators relative to standard estimators.
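To give a rough sense of what a ridge log-truncated M-estimator can look like, the sketch below minimizes a Catoni-style log-truncated squared loss with an l2 penalty on a toy linear model with infinite-variance noise. The truncation function psi, the scale parameter alpha, the penalty weight lam, and the function names are illustrative assumptions borrowed from the robust mean-estimation literature, not the paper's exact formulation (in which the tuning scale would typically depend on the sample size and moment bounds).

    import numpy as np
    from scipy.optimize import minimize

    def log_truncation(x):
        # Catoni-style influence function (assumed, standard choice):
        # psi(x) = log(1 + x + x^2/2) for x >= 0, -log(1 - x + x^2/2) for x < 0.
        return np.where(x >= 0.0,
                        np.log1p(x + 0.5 * x ** 2),
                        -np.log1p(-x + 0.5 * x ** 2))

    def ridge_log_truncated_objective(beta, X, y, lam=0.1, alpha=0.5):
        # Illustrative objective: average of psi applied to rescaled squared
        # losses, plus a ridge (l2) penalty; alpha sets the truncation scale.
        residuals = y - X @ beta
        losses = 0.5 * residuals ** 2
        robust_term = log_truncation(alpha * losses).sum() / (len(y) * alpha)
        return robust_term + lam * np.sum(beta ** 2)

    # Toy usage: linear model with Student-t(1.5) noise (infinite variance).
    rng = np.random.default_rng(0)
    n, d = 500, 5
    X = rng.standard_normal((n, d))
    beta_true = np.ones(d)
    y = X @ beta_true + rng.standard_t(df=1.5, size=n)
    fit = minimize(ridge_log_truncated_objective, x0=np.zeros(d), args=(X, y))
    print(fit.x)

The log-truncation damps the contribution of extreme residuals, which is what yields robustness when the noise has no finite variance; replacing the squared loss with a quantile or negative log-likelihood loss gives the corresponding convex-regression variants mentioned in the abstract.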
