Bayesian meta-learning enables robust and fast adaptation to new tasks together with uncertainty assessment. The key idea behind Bayesian meta-learning is empirical Bayes inference on a hierarchical model. In this work, we extend this framework to cover a variety of existing methods, before proposing our variant based on a gradient-EM algorithm. Our method improves computational efficiency by avoiding back-propagation in the meta-update step, which is expensive for deep neural networks. Furthermore, it adds flexibility to the inner-update optimization procedure by decoupling it from the meta-update. Experiments on sinusoidal regression, few-shot image classification, and policy-based reinforcement learning show that our method not only achieves better accuracy at lower computational cost, but also yields more robust uncertainty estimates.
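The decoupling described above can be illustrated with a toy sketch: the inner update adapts task parameters by ordinary gradient descent, while the meta-update is a first-order, M-step-style move of the prior mean toward the adapted task parameters, requiring no back-propagation through the inner loop. This is a minimal illustration under assumed settings (a linear model, a Gaussian prior, and hypothetical names such as `inner_adapt` and `meta_update`), not the paper's exact gradient-EM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_adapt(w_meta, X, y, lr=0.05, steps=10):
    """Inner update: plain gradient descent on one task's squared loss.
    Any optimizer could be used here -- the meta-update below never
    differentiates through these steps."""
    w = w_meta.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def meta_update(w_meta, adapted_ws, meta_lr=0.5):
    """Meta-update: move the prior mean toward the task-adapted
    parameters (an M-step under a Gaussian prior). This is a
    first-order step with no back-propagation through inner_adapt."""
    return w_meta + meta_lr * (np.mean(adapted_ws, axis=0) - w_meta)

# Tasks share a common prior over the true weights (mean 1.0).
w_meta = np.zeros(3)
for _ in range(200):
    adapted = []
    for _ in range(4):  # sample a small batch of tasks
        w_true = rng.normal(1.0, 0.1, size=3)
        X = rng.normal(size=(20, 3))
        y = X @ w_true + 0.01 * rng.normal(size=20)
        adapted.append(inner_adapt(w_meta, X, y))
    w_meta = meta_update(w_meta, adapted)

print(w_meta)  # meta-parameters drift toward the shared prior mean (~1.0)
```

Because the meta-update only consumes the adapted parameters, the inner loop can be swapped for any adaptation procedure without changing the meta-level computation, which is the flexibility the abstract refers to.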