Few-shot supervised learning leverages experience from previous learning tasks to solve new tasks in which only a few labelled examples are available. One successful line of approach to this problem is the encoder-decoder meta-learning pipeline, in which the labelled data in a task are encoded to produce a task representation, and this representation is used to condition the decoder to make predictions on unlabelled data. We propose an approach that uses this pipeline with two important features. 1) We use infinite-dimensional functional representations of the task rather than fixed-dimensional representations. 2) We iteratively apply functional updates to the representation. We show that our approach can be interpreted as extending functional gradient descent, and that it delivers performance comparable to or exceeding previous state-of-the-art results on few-shot classification benchmarks such as miniImageNet and tieredImageNet.
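The iterative functional-update idea can be illustrated with a minimal sketch. The abstract's encoder and decoder are learned networks, but the functional-gradient-descent view it extends can be shown with a toy RKHS-style task representation: the function is parameterised by coefficients on the support points and refined by repeated functional gradient steps on the labelled (support) data, then evaluated on unlabelled (query) data. All names, the squared loss, and the RBF kernel below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # RBF kernel matrix between rows of a and rows of b
    d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def functional_gd(support_x, support_y, query_x, steps=200, lr=0.1):
    """Iteratively refine a functional task representation r(.) by
    functional gradient descent on squared loss over the support set.

    r is represented non-parametrically as r(.) = sum_j coef_j * k(x_j, .),
    so a functional gradient step r <- r - lr * sum_i (r(x_i) - y_i) k(x_i, .)
    reduces to an update of the coefficients.
    """
    coef = np.zeros(len(support_x))
    K = rbf(support_x, support_x)
    for _ in range(steps):
        resid = K @ coef - support_y      # r_t(x_i) - y_i on support points
        coef -= lr * resid                # functional gradient step
    # evaluate the final representation on the query points
    return rbf(query_x, support_x) @ coef
```

In the full method the hand-coded gradient step would be replaced by a learned, decoder-conditioned update, but the iteration structure, a functional representation repeatedly updated using the labelled data, is the same.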