Confidence Intervals and Hypothesis Testing for High-dimensional Quantile Regression: Convolution Smoothing and Debiasing
Yibo Yan, Xiaozhou Wang, Riquan Zhang; 24(245):1−49, 2023.
Abstract
Quantile regression with an $\ell_1$ penalty ($\ell_1$-QR) is a valuable technique for modeling the relationship between input and output variables in the presence of heterogeneous effects in high-dimensional settings. Hypothesis testing can be performed using the debiased $\ell_1$-QR estimator, which reduces the bias introduced by the Lasso penalty. However, the non-smoothness of the quantile loss poses significant computational challenges, particularly for high-dimensional data. Recently, a convolution-type smoothed quantile regression (SQR) model has been proposed to overcome this limitation, and estimation and variable selection theories have been developed for it. In this study, we combine the debiasing method with the SQR model to propose the debiased $\ell_1$-SQR estimator, and based on this estimator we establish confidence intervals and hypothesis testing procedures in high-dimensional settings. Theoretically, we provide a non-asymptotic Bahadur representation for the proposed estimator as well as a Berry-Esseen bound, which characterizes the empirical coverage rates of the studentized confidence intervals. Additionally, we develop the theory of hypothesis testing for both a single coefficient and a group of coefficients. Finally, we present extensive numerical experiments on both simulated and real data to demonstrate the effectiveness of our method.
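The convolution smoothing mentioned above replaces the non-differentiable check loss $\rho_\tau(u) = u(\tau - \mathbf{1}\{u<0\})$ with its convolution against a kernel of bandwidth $h$, yielding a smooth surrogate that approaches $\rho_\tau$ as $h \to 0$. The sketch below illustrates this with a Gaussian kernel, for which the convolution has the closed form $\ell_h(u) = u\,(\Phi(u/h) - (1-\tau)) + h\,\phi(u/h)$; the paper's specific kernel and bandwidth choices may differ, so treat this as an illustration of the smoothing idea rather than the authors' exact implementation.

```python
import math

def check_loss(u, tau):
    """Standard quantile check loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0.0 else 0.0))

def smoothed_loss(u, tau, h):
    """Gaussian-kernel convolution smoothing of the check loss.

    Closed form for the Gaussian kernel (standard in the SQR literature):
        ell_h(u) = u * (Phi(u/h) - (1 - tau)) + h * phi(u/h),
    where Phi and phi are the standard normal CDF and density.
    """
    z = u / h
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return u * (Phi - (1.0 - tau)) + h * phi

def smoothed_grad(u, tau, h):
    """Derivative of the smoothed loss: Phi(u/h) - (1 - tau).

    Unlike the check loss, this is continuous everywhere, which is what
    makes gradient-based high-dimensional solvers applicable.
    """
    z = u / h
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) - (1.0 - tau)

# As the bandwidth h shrinks, the smoothed loss converges to the check loss,
# while remaining strictly positive (and differentiable) at u = 0.
for u in (-1.0, 0.3, 2.0):
    print(u, check_loss(u, 0.5), smoothed_loss(u, 0.5, 1e-4))
```

Note that $\ell_h$ is infinitely differentiable and convex for the Gaussian kernel, so the $\ell_1$-penalized SQR objective can be attacked with standard proximal-gradient machinery, which is the computational advantage motivating the smoothing.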