Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set $\Gamma$ of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity has been defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments, along with accompanying convergence guarantees. We also introduce a neural network architecture that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.
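To make the min-max structure concrete, the following is a minimal, hypothetical sketch of the adversarial idea on a toy problem: estimating the success probability of a Bernoulli model from $n$ observations. Every detail here is an illustrative assumption rather than the algorithm proposed in this work: the model space is discretized to a grid, the set $\Gamma$ is encoded by a soft penalty enforcing the generalized-moment constraint $E_\pi[p] \in [0.3, 0.5]$, and the estimator network and the prior's logits take alternating gradient steps (in PyTorch, which the paper does not prescribe).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n, m, grid_size = 10, 16, 50                      # sample size, MC reps, grid size
p_grid = torch.linspace(0.01, 0.99, grid_size)    # candidate Bernoulli models

# estimator network T_theta: maps the n observations to an estimate of p
estimator = nn.Sequential(nn.Linear(n, 32), nn.ReLU(), nn.Linear(32, 1))
# adversary: softmax(prior_logits) is a prior over the grid of models
prior_logits = torch.zeros(grid_size, requires_grad=True)

opt_est = torch.optim.Adam(estimator.parameters(), lr=1e-3)
opt_pri = torch.optim.Adam([prior_logits], lr=1e-2)

def bayes_risk():
    """Monte Carlo Bayes risk: E_pi E_{X|p} [ (T(X) - p)^2 ]."""
    pi = torch.softmax(prior_logits, dim=0)
    p = p_grid.repeat_interleave(m)                    # m datasets per model
    x = torch.bernoulli(p.unsqueeze(1).expand(-1, n))  # simulate X | p
    per_model = ((estimator(x).squeeze(1) - p) ** 2).view(grid_size, m).mean(1)
    return (pi * per_model).sum()

def moment_penalty():
    """Soft version of the assumed moment constraint E_pi[p] in [0.3, 0.5]."""
    mean_p = (torch.softmax(prior_logits, dim=0) * p_grid).sum()
    return torch.relu(mean_p - 0.5) ** 2 + torch.relu(0.3 - mean_p) ** 2

for step in range(2000):
    # adversary ascends the Bayes risk while (softly) staying inside Gamma;
    # the penalty weight of 100 is an arbitrary illustrative choice
    opt_pri.zero_grad()
    (-bayes_risk() + 100.0 * moment_penalty()).backward()
    opt_pri.step()
    # estimator descends the Bayes risk under the current worst-case prior
    opt_est.zero_grad()
    bayes_risk().backward()
    opt_est.step()
```

Alternating single gradient steps is only a heuristic for the min-max problem; the convergence guarantees mentioned above apply to the algorithms developed in the paper, not to this sketch.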