We consider the use of deep learning for parameter estimation. We propose Bias Constrained Estimators (BCE) that add a squared bias term to the standard mean squared error (MSE) loss. The main motivation for BCE is learning to estimate deterministic unknown parameters when no Bayesian prior is available. Unlike standard learning-based estimators, which are optimal on average, we prove that BCEs converge to Minimum Variance Unbiased Estimators (MVUEs). We derive closed-form solutions for linear BCEs, which provide a flexible bridge between linear regression and the least squares method. In non-linear settings, we demonstrate that BCEs perform similarly to MVUEs even when the latter are computationally intractable. A second motivation for BCE arises in applications where multiple estimates of the same unknown are averaged for improved performance. Examples include distributed sensor networks and test-time data augmentation. In such applications, unbiasedness is a necessary condition for asymptotic consistency.
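To make the loss concrete, the following is a minimal PyTorch sketch of a BCE-style objective. It assumes each parameter vector in a training batch comes with several independent measurement realizations, so the bias can be estimated empirically as the mean estimate minus the true parameter; the tensor shapes, function name, and the weight lam are illustrative assumptions, not the paper's implementation.

    import torch

    def bce_loss(estimates, targets, lam=1.0):
        """Sketch of a Bias Constrained Estimator loss: MSE plus a squared-bias penalty.

        estimates: (P, K, D) tensor -- K estimates per parameter, one per
                   independent measurement realization of that parameter.
        targets:   (P, D) tensor -- the P true parameter vectors.
        lam:       weight of the squared-bias term (hypothetical hyperparameter).
        """
        # Standard MSE averaged over all parameters and realizations.
        mse = ((estimates - targets.unsqueeze(1)) ** 2).mean()
        # Empirical bias per parameter: mean estimate over the K realizations
        # minus the true value.
        bias = estimates.mean(dim=1) - targets
        sq_bias = (bias ** 2).sum(dim=-1).mean()
        return mse + lam * sq_bias

With lam = 0 this reduces to the usual MSE loss; increasing lam penalizes bias more heavily, pushing the learned estimator toward the unbiased behavior described above.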