We study a fundamental question concerning adversarial noise models in statistical problems where the algorithm receives i.i.d. draws from a distribution $\mathcal{D}$. The definitions of these adversaries specify the type of allowable corruptions (noise model) as well as when these corruptions can be made (adaptivity); the latter differentiates between oblivious adversaries that can only corrupt the distribution $\mathcal{D}$ and adaptive adversaries that can have their corruptions depend on the specific sample $S$ that is drawn from $\mathcal{D}$. In this work, we investigate whether oblivious adversaries are effectively equivalent to adaptive adversaries, across all noise models studied in the literature. Specifically, can the behavior of an algorithm $\mathcal{A}$ in the presence of oblivious adversaries always be well-approximated by that of an algorithm $\mathcal{A}'$ in the presence of adaptive adversaries? Our first result shows that this is indeed the case for the broad class of statistical query algorithms, under all reasonable noise models. We then show that in the specific case of additive noise, this equivalence holds for all algorithms. Finally, we map out an approach towards proving this statement in its fullest generality, for all algorithms and under all reasonable noise models.