We develop an exact coordinate descent algorithm for high-dimensional regularized Huber regression. In contrast to composite gradient descent methods, our algorithm fully exploits the advantages of coordinate descent when the underlying model is sparse. Moreover, unlike existing second-order approximation methods previously introduced in the literature, it remains effective even when the Hessian becomes ill-conditioned due to high correlations among covariates drawn from heavy-tailed distributions. The key idea is that, for each coordinate, marginal increments arise only from inlier observations, and the coordinate-wise derivative is monotonically increasing over a grid constructed from the partial residuals. Building on conventional coordinate descent strategies, we further propose variable screening rules that selectively determine which variables to update at each iteration, thereby accelerating convergence. To the best of our knowledge, this is the first work to develop a first-order coordinate descent algorithm for penalized Huber loss minimization. We bound the nonasymptotic convergence rate of the proposed algorithm by extending arguments developed for the Lasso, and we formally characterize how the proposed screening rules operate. Extensive simulation studies under heavy-tailed and highly correlated predictors, together with a real data application, demonstrate both the practical efficiency of the method and the benefits of the computational enhancements.
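To make the coordinate-wise mechanism concrete, the sketch below illustrates one exact update for an l1-penalized Huber objective: the coordinate-wise derivative of the loss is piecewise linear and nondecreasing, its breakpoints are formed from the partial residuals, and on each segment only the inlier observations contribute slope, so the exact minimizer can be found by scanning the breakpoint grid and interpolating linearly within the bracketing segment. This is a minimal illustrative sketch under assumed notation (Huber parameter delta, lasso penalty lam); the function names and the NumPy implementation are our own assumptions, not the authors' algorithm or screening rules.

```python
import numpy as np

def huber_grad(t, r, xj, delta, n):
    # Derivative in t of (1/n) * sum_i rho_delta(r_i - x_ij * t),
    # where rho_delta is the Huber loss. It is continuous, piecewise
    # linear, and nondecreasing in t.
    return -np.sum(xj * np.clip(r - xj * t, -delta, delta)) / n

def exact_cd_update(r, xj, delta, lam, n):
    """One exact coordinate update for
        f(t) = (1/n) * sum_i rho_delta(r_i - x_ij * t) + lam * |t|,
    where r holds the partial residuals with the j-th coordinate removed.
    Illustrative sketch only, not the authors' implementation."""
    g0 = huber_grad(0.0, r, xj, delta, n)
    if abs(g0) <= lam:                 # subgradient condition: stay at zero
        return 0.0
    s = 1.0 if g0 < -lam else -1.0     # sign of the minimizer
    # Breakpoints: values of t at which an observation switches between
    # inlier (|r_i - x_ij * t| <= delta) and outlier status.
    nz = xj != 0
    bps = np.concatenate([(r[nz] - delta) / xj[nz], (r[nz] + delta) / xj[nz]])
    bps = bps[s * bps > 0]
    bps = bps[np.argsort(s * bps)]     # walk outward from 0 in direction s
    grid = np.concatenate([[0.0], bps])
    # Penalized derivative on the relevant side of zero.
    h = np.array([huber_grad(t, r, xj, delta, n) + s * lam for t in grid])
    k = int(np.argmax(s * h >= 0))     # first grid point past the root
    t_lo, t_hi = grid[k - 1], grid[k]
    # Between adjacent breakpoints the inlier set is fixed, so h is affine
    # there and the exact root follows from linear interpolation.
    return t_lo - h[k - 1] * (t_hi - t_lo) / (h[k] - h[k - 1])
```

A usage note: in a full coordinate descent sweep one would recompute the partial residuals r after each coordinate update and cycle (or screen) over coordinates until convergence; the screening rules described in the abstract would decide which coordinates to visit.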