Few-shot recognition learns a recognition model from very few (e.g., 1 or 5) images per category, and current few-shot learning methods focus on improving the average accuracy over many episodes. We argue that in real-world applications we may often run only one episode rather than many, and hence maximizing the worst-case accuracy is more important than maximizing the average accuracy. We empirically show that a high average accuracy does not necessarily imply a high worst-case accuracy. Since the worst-case accuracy is not directly accessible as a training objective, we propose to reduce the standard deviation and increase the average accuracy simultaneously. Accordingly, we devise two strategies from the bias-variance tradeoff perspective to implicitly reach this goal: a simple yet effective stability regularization (SR) loss, combined with model ensembling, to reduce the variance during fine-tuning, and an adaptability calibration mechanism to reduce the bias. Extensive experiments on benchmark datasets demonstrate the effectiveness of the proposed strategies, which outperform current state-of-the-art methods by a significant margin in terms of not only the average but also the worst-case accuracy. Our code is available at https://github.com/heekhero/ACSR.
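To illustrate the variance-reduction idea, the sketch below shows one common form a stability regularization term can take: the episode's classification loss plus a penalty on how far fine-tuned features drift from the frozen pretrained features. This is a minimal, hypothetical sketch for intuition only; the function name, the choice of an L2 drift penalty, and the weight `lam` are assumptions, not the exact SR loss of the paper.

```python
import numpy as np

def stability_regularized_loss(logits, labels, feat, feat_pre, lam=0.1):
    """Illustrative loss: cross-entropy + a feature-drift penalty.

    logits   : (N, C) class scores from the fine-tuned model
    labels   : (N,)   integer ground-truth labels of the episode
    feat     : (N, D) features from the model being fine-tuned
    feat_pre : (N, D) features from the frozen pretrained model
    lam      : weight of the (hypothetical) stability term
    """
    # numerically stable log-softmax for the cross-entropy term
    z = logits - logits.max(axis=1, keepdims=True)
    log_prob = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_prob[np.arange(len(labels)), labels].mean()
    # hypothetical SR term: penalize drift of fine-tuned features
    # away from the pretrained ones, discouraging unstable updates
    sr = np.mean((feat - feat_pre) ** 2)
    return ce + lam * sr
```

With `lam = 0` this reduces to plain fine-tuning; larger `lam` keeps the fine-tuned model closer to the pretrained one, trading adaptability for lower variance across episodes.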