We study fast algorithms for statistical regression problems under the strong contamination model, where the goal is to approximately optimize a generalized linear model (GLM) given adversarially corrupted samples. Prior works in this line of research were based on the robust gradient descent framework of Prasad et al., a first-order method using biased gradient queries, or the Sever framework of Diakonikolas et al., an iterative outlier-removal method calling a stationary point finder. We present nearly-linear time algorithms for robust regression problems with improved runtime or estimation guarantees compared to the state of the art. For the general case of smooth GLMs (e.g., logistic regression), we show that the robust gradient descent framework of Prasad et al. can be accelerated, and that our algorithm extends to optimizing the Moreau envelopes of Lipschitz GLMs (e.g., support vector machines), answering several open questions in the literature. For the well-studied case of robust linear regression, we present an alternative approach obtaining improved estimation rates over prior nearly-linear time algorithms. Interestingly, our method starts with an identifiability proof introduced in the context of the sum-of-squares algorithm of Bakshi and Prasad, which achieved optimal error rates while requiring large polynomial runtime and sample complexity. We reinterpret their proof within the Sever framework and obtain a dramatically faster and more sample-efficient algorithm under fewer distributional assumptions.
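For context on the smoothing step above: the Moreau envelope of a function $f$ with parameter $\lambda > 0$ is the standard infimal-convolution smoothing (the notation $f_\lambda$ and $\mathrm{prox}_{\lambda f}$ below is illustrative, not fixed by this abstract):
\[
f_\lambda(x) = \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\|x - y\|_2^2 \Big\}, \qquad \nabla f_\lambda(x) = \tfrac{1}{\lambda}\big(x - \mathrm{prox}_{\lambda f}(x)\big).
\]
For convex, $L$-Lipschitz $f$, the envelope $f_\lambda$ is $\tfrac{1}{\lambda}$-smooth, satisfies $f_\lambda \le f \le f_\lambda + \tfrac{\lambda L^2}{2}$, and shares minimizers with $f$, which is why approximately optimizing the envelope suffices for nonsmooth Lipschitz GLMs.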