When the distributions of the training and test data do not coincide, understanding generalization becomes considerably harder and raises a variety of questions. In this work we focus on a fundamental one: is it always optimal for the training distribution to be identical to the test distribution? Surprisingly, assuming that one-way functions exist, we find that the answer is no. That is, matching the training distribution to the test distribution is not always optimal, which stands in contrast to the behavior of most learning methods. Nonetheless, we also show that when certain regularity conditions are imposed on the target functions, the standard conclusion is recovered in the case of the uniform distribution.
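As an illustration, here is one way to make the central question precise; the notation below is introduced for exposition and need not match the paper's own definitions. Fix a domain $\mathcal{X}$, a target function $f : \mathcal{X} \to \{0,1\}$, a test distribution $\mathcal{D}$ over $\mathcal{X}$, and a learner $A$ mapping an $m$-element training sample $S$ to a hypothesis $A(S)$. The expected test error when training on a distribution $\mathcal{D}'$ is
\[
  \mathrm{err}_{\mathcal{D}}(A, \mathcal{D}', m)
  \;=\;
  \mathbb{E}_{S \sim (\mathcal{D}')^{m}}\,
  \Pr_{x \sim \mathcal{D}}\bigl[\, A(S)(x) \neq f(x) \,\bigr],
\]
and the question is whether the choice $\mathcal{D}' = \mathcal{D}$ always minimizes this quantity. In this notation, the negative answer stated above says that, assuming one-way functions exist, there are $f$, $\mathcal{D}$, and $\mathcal{D}' \neq \mathcal{D}$ for which training on $\mathcal{D}'$ yields strictly smaller error than training on $\mathcal{D}$ itself.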