This paper develops and analyzes three families of estimators that continuously interpolate between classical quantiles and the sample mean. The construction begins with a smoothed version of the $L_{1}$ loss, indexed by a location parameter $z$ and a smoothing parameter $h \ge 0$, whose minimizer $\hat q(z,h)$ yields a unified M-estimation framework. Depending on how $(z, h)$ is specified, this framework generates three distinct classes of estimators: fixed-parameter smoothed quantile estimators, plug-in estimators of fixed quantiles, and a new continuum of mean-estimating procedures. For all three families we establish consistency and asymptotic normality via a uniform asymptotic equicontinuity argument. The limiting variances admit closed forms, allowing a transparent comparison of efficiency across families and smoothing levels. A geometric decomposition of the parameter space shows that, for a fixed quantile level $\tau$, admissible pairs $(z, h)$ lie on straight lines along which the estimator targets the same population quantile while its asymptotic variance evolves. The theoretical analysis reveals two efficiency regimes. Under light-tailed distributions (e.g., Gaussian), smoothing yields a monotone variance reduction. Under heavy-tailed distributions (e.g., Laplace), a finite smoothing parameter $h^{*}(\tau) > 0$ strictly improves efficiency for quantile estimation. Numerical experiments on simulated data and real financial returns validate these conclusions and show that, both asymptotically and in finite samples, the mean-estimating family does not improve upon the sample mean.
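The interpolation the abstract describes can be illustrated with a minimal numerical sketch. The paper's exact smoothed loss is not reproduced here; as a stand-in, this sketch smooths $|u|$ into the pseudo-Huber form $\sqrt{u^{2}+h^{2}}$ and treats $z$ as a simple shift, so the names `smoothed_l1_loss` and `q_hat` and the precise roles of $(z, h)$ are illustrative assumptions, not the paper's construction. It shows only the qualitative behavior: $h = 0$ recovers a (shifted) sample median, while large $h$ drives the minimizer toward the sample mean.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(size=1000)

def smoothed_l1_loss(q, x, z, h):
    """Illustrative smoothed L1-type loss; its minimizer stands in for q-hat(z, h).

    NOTE: the smoothing sqrt(u^2 + h^2) and the shift by z are assumptions made
    for this sketch, not the loss used in the paper.
    """
    u = x - q - z  # hypothetical role of the location parameter z
    if h == 0:
        return np.mean(np.abs(u))          # plain L1 loss: minimized at the median
    return np.mean(np.sqrt(u**2 + h**2))   # smooth, convex surrogate for |u|

def q_hat(x, z=0.0, h=0.5):
    """Minimize the smoothed loss over q on the range of the data."""
    res = minimize_scalar(smoothed_l1_loss, args=(x, z, h),
                          bounds=(x.min(), x.max()), method="bounded")
    return res.x

# h = 0: the minimizer is (approximately) the sample median shifted by z.
# Large h: sqrt(u^2 + h^2) ~ h + u^2 / (2h), so the minimizer approaches the mean.
print(q_hat(x, z=0.0, h=0.0))
print(q_hat(x, z=0.0, h=10.0))
```

The convexity of the surrogate makes the one-dimensional minimization well behaved for any $h \ge 0$, which is what lets a single routine sweep continuously from median-like to mean-like behavior.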