A stylized feature of high-dimensional data is that many variables have heavy tails, so robust methods are critical for valid large-scale statistical inference. Yet existing approaches such as Winsorization, Huberization, and the median of means require bounded second moments and involve variable-dependent tuning parameters, which hampers their use in large-scale problems. To remove these constraints, this paper revisits the celebrated Hodges-Lehmann (HL) estimator of location parameters in both the one- and two-sample problems from a non-asymptotic perspective. Building on a newly developed non-asymptotic Bahadur representation, our study establishes a Berry-Esseen inequality and a Cram\'{e}r-type moderate deviation result for the HL estimator, and constructs data-driven confidence intervals via a weighted bootstrap approach. These results allow us to extend the HL estimator to large-scale studies and to propose \emph{tuning-free} and \emph{moment-free} high-dimensional inference procedures for testing the global null and for large-scale multiple testing with false discovery proportion control. We show that the resulting tuning-free and moment-free methods control the false discovery proportion at a prescribed level. Simulation studies lend further support to the developed theory.
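For concreteness, the classical HL estimators referred to above admit a very short description: the one-sample estimator is the median of all pairwise (Walsh) averages $(X_i + X_j)/2$, $i \le j$, and the two-sample shift estimator is the median of all pairwise differences $X_i - Y_j$. A minimal NumPy sketch of these classical estimators (not the paper's high-dimensional procedure, and with function names chosen here for illustration) is:

```python
import numpy as np

def hl_one_sample(x):
    """One-sample Hodges-Lehmann location estimate:
    median of all Walsh averages (x_i + x_j) / 2 over pairs i <= j."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))          # all index pairs with i <= j
    return np.median((x[i] + x[j]) / 2.0)

def hl_two_sample(x, y):
    """Two-sample Hodges-Lehmann shift estimate:
    median of all pairwise differences x_i - y_j."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.median((x[:, None] - y[None, :]).ravel())
```

Both estimators require no tuning parameter and no moment condition beyond what is needed for the location parameter to be well defined, which is the property the abstract exploits.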