ICML 2019 Meta-Learning Tutorial and Must-Read Paper List

June 16, 2019 · Zhuanzhi (专知)

[Overview] Chelsea Finn and Sergey Levine of UC Berkeley presented a tutorial on meta-learning at ICML 2019; the 111-page slide deck is well worth studying.


Follow the Zhuanzhi (专知) WeChat account (tap the blue "专知" at the top to follow).

  • Reply "元学习2019" to the account to receive the download link for the 111-page meta-learning slide deck and the must-read paper list.


https://sites.google.com/view/icml19metalearning

In recent years, high-capacity models such as deep neural networks have enabled very powerful machine learning techniques in data-rich domains. However, domains where data is scarce remain challenging for such methods, because high-capacity function approximators rely heavily on large datasets in order to generalize. This poses a serious problem in settings ranging from supervised medical image processing to reinforcement learning, where real-world data collection (e.g., with robots) presents major logistical challenges. Meta-learning, or few-shot learning, offers a potential solution: by learning to learn across the data from many previous tasks, few-shot meta-learning algorithms can discover structure shared among tasks and thereby enable rapid learning of new tasks.
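As a concrete illustration of "learning to learn across data from many previous tasks", the sketch below implements a first-order, Reptile-style gradient-based meta-learner (cf. Nichol, Achiam, Schulman in the paper list below) on the toy sinusoid-regression task family popularized by the MAML paper. The linear model, the random sinusoidal features, the step sizes, and the task distribution are illustrative assumptions made for this sketch; it is not code from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random sinusoidal features so that a linear model can represent sinusoids of varying phase.
FREQS = rng.uniform(0.5, 2.0, size=8)

def features(x):
    return np.concatenate([np.sin(x[:, None] * FREQS), np.cos(x[:, None] * FREQS)], axis=1)

def sample_task():
    """One 'task' = a sinusoid with random amplitude and phase (the toy family from the MAML paper)."""
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0.0, np.pi)
    return lambda x: amp * np.sin(x + phase)

def adapt(theta, x, y, lr=0.05, steps=5):
    """Inner loop: a few plain SGD steps on one task's small support set, starting from theta."""
    phi, X = theta.copy(), features(x)
    for _ in range(steps):
        phi -= lr * 2.0 * X.T @ (X @ phi - y) / len(y)   # gradient of the mean squared error
    return phi

# Outer loop (meta-training): first-order meta-update theta <- theta + eps * (phi - theta).
theta = np.zeros(2 * len(FREQS))
for _ in range(2000):
    task = sample_task()
    x_support = rng.uniform(-5.0, 5.0, size=10)          # a 10-shot support set
    theta += 0.1 * (adapt(theta, x_support, task(x_support)) - theta)

# Few-shot test: adapt the meta-learned initialization to a brand-new task from only 10 examples.
new_task = sample_task()
x_support = rng.uniform(-5.0, 5.0, size=10)
phi = adapt(theta, x_support, new_task(x_support), steps=10)
x_test = np.linspace(-5.0, 5.0, 200)
print("post-adaptation MSE on a held-out task:", np.mean((features(x_test) @ phi - new_task(x_test)) ** 2))
```

The point of the sketch is the structure rather than the particular model: an inner loop that adapts to one task from a handful of examples, and an outer loop that moves the shared initialization so that this adaptation works well across the whole task distribution.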


The goal of this tutorial is to provide a unified perspective on meta-learning: to teach the audience about modern approaches, describe the conceptual and theoretical principles surrounding these techniques, present where these methods have previously been applied, and discuss the fundamental open problems and challenges within the field. We hope this tutorial will be useful to machine learning researchers working in other areas, while also offering a fresh perspective to meta-learning researchers. Overall, our aim is to enable the audience to apply meta-learning to their own applications, and to develop new meta-learning algorithms and theoretical analyses driven by the current challenges and the limitations of existing work.


We will provide a unified view of how a variety of meta-learning algorithms enable learning from small datasets, an overview of applications where meta-learning can and cannot easily be applied, and a discussion of the outstanding challenges and frontiers of this subfield.


Video links

Part 1: https://www.facebook.com/icml.imls/videos/400619163874853/

Part 2: https://www.facebook.com/icml.imls/videos/2970931166257998/


Meta-Learning: Must-Read Paper List

Chelsea Finn, Sergey Levine

Stanford University, Google Brain, UC Berkeley

 

ICML 2019 Tutorial on Meta-Learning: From Few-Shot Learning to Fast Adaptation

https://sites.google.com/view/icml19metalearning


Black-Box Adaptation Approaches

** Santoro, Bartunov, Botvinick, Wierstra, Lillicrap. One-shot Learning with Memory-Augmented Neural Networks. 2016

Hochreiter, Younger, Conwell. Learning to Learn using Gradient Descent. 2001

Munkhdalai & Yu. Meta Networks. 2017

Ha, Dai, Le. HyperNetworks. 2017

Mishra, Rohaninejad, Chen, Abbeel. A Simple Neural Attentive Meta-Learner. 2018


Optimization-Based Meta-Learners

** Finn, Abbeel, Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. 2017

Finn. Learning to Learn with Gradients. PhD thesis, 2018


Different Inner Optimizations

Harrison, Sharma, Pavone. Meta-Learning Priors for Efficient Online Bayesian Regression. 2018

Nichol, Achiam, Schulman. On First-Order Meta-Learning Algorithms. 2018

Bertinetto, Henriques, Torr, Vedaldi. Meta-learning with differentiable closed-form solvers. 2018

* Lee, Maji, Ravichandran. Meta-Learning with Differentiable Convex Optimization. 2019


Various Improvements

Behl, Baydin, Torr. Alpha MAML: Adaptive Model-Agnostic Meta-Learning. 2019

Kim, Lee, Kim, Cha, Lee, Choi, Choi, Cho, Kim. Auto-Meta: Automated Gradient Based Meta Learner Search. 2018

Antoniou, Edwards, Storkey. How to train your MAML. 2019

Zintgraf, Shiarlis, Kurin, Hofmann, Whiteson. Fast Context Adaptation via Meta-Learning. 2019


Non-Parametric Meta-Learners

Koch, Zemel, Salakhutdinov. Siamese Neural Networks for One-shot Image Recognition. 2015

** Vinyals, Blundell, Lillicrap, Wierstra. Matching Networks for One-Shot Learning. 2016

* Snell, Swersky, Zemel. Prototypical Networks for Few-Shot Learning. 2017

Kaiser, Nachum, Roy, Bengio. Learning to remember rare events. 2017

Sung, Yang, Zhang, Xiang, Torr, Hospedales. Learning to compare: Relation network for few-shot learning. 2018

Allen, Shelhamer, Shin, Tenenbaum. Infinite Mixture Prototypes for Few-Shot Learning. 2019

Garcia, Bruna. Few-Shot Learning with Graph Neural Networks. 2018
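
To make the non-parametric (metric-based) family above concrete, here is a minimal sketch of the prototype classification rule from Prototypical Networks (Snell, Swersky, Zemel, listed above), applied to a single toy few-shot episode. The 2-D Gaussian-blob data, the one-layer tanh embedding, and all of the sizes are illustrative assumptions for this sketch, and the episodic meta-training of the embedding is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(x, W):
    """Stand-in embedding: one tanh layer (a real few-shot system would meta-train a deep network)."""
    return np.tanh(x @ W)

def proto_classify(support_x, support_y, query_x, W, n_classes):
    """Prototype rule: label each query point with the class of its nearest class mean in embedding space."""
    zs, zq = embed(support_x, W), embed(query_x, W)
    prototypes = np.stack([zs[support_y == c].mean(axis=0) for c in range(n_classes)])
    dists = ((zq[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)   # squared Euclidean distances
    return dists.argmin(axis=1)

# One toy 3-way, 5-shot episode: classes are well-separated Gaussian blobs in a 2-D input space.
n_classes, n_shot, n_query, dim = 3, 5, 10, 2
centers = rng.normal(scale=3.0, size=(n_classes, dim))
support_x = np.concatenate([c + rng.normal(scale=0.3, size=(n_shot, dim)) for c in centers])
support_y = np.repeat(np.arange(n_classes), n_shot)
query_x = np.concatenate([c + rng.normal(scale=0.3, size=(n_query, dim)) for c in centers])
query_y = np.repeat(np.arange(n_classes), n_query)

W = rng.normal(size=(dim, 8))   # left untrained here; meta-training would fit W across many episodes
pred = proto_classify(support_x, support_y, query_x, W, n_classes)
print("episode accuracy:", (pred == query_y).mean())
```

In a full implementation the embedding would be trained across many such episodes using a softmax over negative distances to the prototypes, so that nearest-prototype classification also works for classes never seen during meta-training.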


Hybrid Approaches

* Ravi & Larochelle. Optimization as a Model for Few-Shot Learning. 2017

Zintgraf, Shiarlis, Kurin, Hofmann, Whiteson. CAML: Fast Context Adaptation via Meta-Learning. 2018

Rusu, Rao, Sygnowski, Vinyals, Pascanu, Osindero, Hadsell. Meta-Learning with Latent Embedding Optimization. 2018

Triantafillou, Zhu, Dumoulin, Lamblin, Xu, Goroshin, Gelada, Swersky, Manzagol, Larochelle. Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. 2019

Bayesian Meta-Learners

Non-Deep Learning Approaches

Fei-Fei, Fergus, Perona. One-shot learning of object categories. 2006

Lake, Salakhutdinov, Gross, Tenenbaum. One shot learning of simple visual concepts. 2011

Salakhutdinov, Tenenbaum, Torralba. One-shot learning with a hierarchical nonparametric Bayesian model. 2012

Lake, Salakhutdinov, Tenenbaum. One-shot learning by inverting a compositional causal process. 2013

** Lake, Salakhutdinov, Tenenbaum. Human-level concept learning through probabilistic program induction. 2015


Modern Deep Learning Approaches

* Edwards, Storkey. Towards a Neural Statistician. 2017

* Gordon, Bronskill, Bauer, Nowozin, Turner. Meta-Learning Probabilistic Inference for Prediction. 2019

* Finn*, Xu*, Levine. Probabilistic Model-Agnostic Meta-Learning. 2018

Grant, Finn, Levine, Darrell, Griffiths. Recasting gradient-based meta-learning as hierarchical Bayes. 2018

Garnelo et al. Conditional Neural Processes. 2018

Ravi & Beatson. Amortized Bayesian Meta-Learning. 2018

Kim et al. Bayesian Model-Agnostic Meta-Learning. 2018


Black-Box Meta-Reinforcement Learning

** Wang, Kurth-Nelson, Tirumala, Soyer, Leibo, Munos, Blundell, Kumaran, Botvinick. Learning to Reinforcement Learn. 2016

** Duan, Schulman, Chen, Bartlett, Sutskever, Abbeel. RL²: Fast Reinforcement Learning via Slow Reinforcement Learning. 2016

Heess, Hunt, Lillicrap, Silver. Memory-based control with recurrent neural networks. 2015

Mishra, Rohaninejad, Chen, Abbeel. A Simple Neural Attentive Meta-Learner. 2017


Gradient-Based Meta-Reinforcement Learning

** Finn, Abbeel, Levine. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. 2017

Foerster, Farquhar, Al-Shedivat, Rocktaschel, Xing, Whiteson. DiCE: The Infinitely Differentiable Monte Carlo Estimator. 2018

Rothfuss, Lee, Clavera, Asfour, Abbeel. ProMP: Proximal Meta-Policy Search. 2018

Mendonca, Gupta, Kralev, Abbeel, Levine, Finn. Guided Meta-Policy Search. 2019


Probabilistic/POMDP-Based Meta-RL

** Rakelly*, Zhou*, Quillen, Finn, Levine. Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables. 2019

Zintgraf, Igl, Shiarlis, Mahajan, Hofmann, Whiteson. Variational Task Embeddings for Fast Adaptation in Deep Reinforcement Learning. 2019

Humplik, Galashov, Hasenclever, Ortega, Teh, Heess. Meta reinforcement learning as task inference. 2019


Learning to Explore

** Gupta, Mendonca, Liu, Abbeel, Levine. Meta-Reinforcement Learning of Structured Exploration Strategies. 2018

Stadie*, Yang*, Houthooft, Chen, Duan, Wu, Abbeel, Sutskever. Some Considerations on Learning to Explore via Meta-Reinforcement Learning. 2018


Evolutionary Algorithms & Meta-Learning

Houthooft, Chen, Isola, Stadie, Wolski, Ho, Abbeel. Evolved Policy Gradients. 2018

Fernando, Sygnowski, Osindero, Wang, Schaul, Teplyashin, Sprechmann, Pritzel, Rusu. Meta-Learning by the Baldwin Effect. 2018

Hüsken, Gayko, Sendhoff. Optimization for problem classes – Neural networks that learn to learn. 2000

Hüsken, Goerick. Fast learning for problem classes using knowledge based network initialization. 2000


Model-Based Meta-Reinforcement Learning

** Nagabandi*, Clavera*, Liu, Fearing, Abbeel, Levine, Finn. Learning to Adapt in Dynamic, Real-World Environments Through Meta-Reinforcement Learning. 2018

Saemundsson, Hofmann, Deisenroth. Meta-Reinforcement Learning with Latent Variable Gaussian Processes. 2018

Nagabandi, Finn, Levine. Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL. 2018


Meta-RL as a Model of Emergent Phenomena in the Brain

* Wang, Kurth-Nelson, Kumaran, Tirumala, Soyer, Leibo, Hassabis, Botvinick. Prefrontal Cortex as a Meta-Reinforcement Learning System. 2018

Ritter, Wang, Kurth-Nelson, Jayakumar, Blundell, Pascanu, Botvinick. Been There, Done That: Meta-Learning with Episodic Recall. 2018

Dasgupta, Wang, Chiappa, Mitrovic, Ortega, Raposo, Hughes, Battaglia, Botvinick, Kurth-Nelson. Causal Reasoning from Meta-Reinforcement Learning. 2019


Unsupervised Meta-Learning

** Hsu, Levine, Finn. Unsupervised Learning via Meta-Learning. ICLR 2019

* Gupta, Eysenbach, Finn, Levine. Unsupervised Meta-Learning forReinforcement Learning. 2018

Khodadadeh, Boloni, Shah. Unsupervised Meta-Learning for Few-Shot Image and Video Classification. 2019

Antoniou & Storkey. Assume, Augment and Learn: Unsupervised Few-Shot Meta-Learning via Random Labels and Data Augmentation. 2019


Learning Unsupervised and Semi-Supervised Learning

** Metz, Maheswaranathan, Cheung, Sohl-Dickstein. Meta-Learning Update Rules for Unsupervised Representation Learning. 2018

Ren, Triantafillou, Ravi, Snell, Swersky, Tenenbaum, Larochelle, Zemel. Meta-Learning for Semi-Supervised Few-Shot Classification. 2019


Lifelong and Online Meta-Learning

** Finn*, Rajeswaran*, Kakade, Levine. Online Meta-Learning. 2019

Nagabandi, Finn, Levine. Deep Online Learning via Meta-Learning: Continual Adaptation for Model-Based RL. 2018

Jerfel, Grant, Griffiths, Heller. Online gradient-based mixtures for transfer modulation in meta-learning. 2018




-END-
