Meta-learning methods aim to build learning algorithms capable of quickly adapting to new tasks in low-data regimes. One of the main benchmarks for such algorithms is the few-shot learning problem. In this paper we investigate a modification of the standard meta-learning pipeline that takes a multi-task approach during training. The proposed method simultaneously exploits information from several meta-training tasks in a common loss function, where the contribution of each task is controlled by a corresponding weight. Proper optimization of these weights can strongly influence the training of the entire model and improve accuracy on test-time tasks. In this work we propose and investigate methods from the family of simultaneous perturbation stochastic approximation (SPSA) approaches for optimizing the meta-training task weights. We compare the proposed algorithms with gradient-based methods and find that stochastic approximation yields the largest accuracy gain at test time. The proposed multi-task modification can be applied to almost any method that uses the meta-learning pipeline. In this paper we study its application to Prototypical Networks and Model-Agnostic Meta-Learning on the CIFAR-FS, FC100, tieredImageNet and miniImageNet few-shot learning benchmarks. In these experiments, the multi-task modification improves over the original methods, with the proposed SPSA-Tracking algorithm showing the largest accuracy boost. Our code is available online.
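To make the weight-optimization idea concrete, the following is a minimal sketch of a standard SPSA iteration applied to a vector of task weights. This is a generic illustration of SPSA, not the paper's exact algorithm or its SPSA-Tracking variant; the loss function, gain schedules, and hyperparameter values here are assumptions chosen for a toy example.

```python
import numpy as np

def spsa_minimize(loss, w0, n_iters=500, a=0.1, c=0.1,
                  alpha=0.602, gamma=0.101, seed=0):
    """Minimize `loss` over a weight vector with SPSA.

    SPSA estimates the gradient from only two loss evaluations per
    iteration, regardless of dimension, by perturbing all coordinates
    at once with a random Rademacher (+/-1) vector.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for k in range(1, n_iters + 1):
        a_k = a / k ** alpha            # decaying step-size gain
        c_k = c / k ** gamma            # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=w.shape)
        # Two-sided finite-difference gradient estimate along delta;
        # for +/-1 entries, 1/delta_i == delta_i, so we multiply.
        g_hat = (loss(w + c_k * delta) - loss(w - c_k * delta)) / (2 * c_k) * delta
        w -= a_k * g_hat                # stochastic-approximation update
    return w

# Toy usage: recover task weights minimizing a quadratic surrogate
# "meta-loss" (the target vector here is purely illustrative).
target = np.array([0.7, 0.2, 0.1])
w_opt = spsa_minimize(lambda w: np.sum((w - target) ** 2), np.full(3, 1 / 3))
```

In the paper's setting, `loss` would be the common multi-task meta-training loss evaluated at the current task weights, which is typically noisy and has no cheap closed-form gradient with respect to the weights; SPSA is attractive precisely because it needs only two such evaluations per update.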