Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, exploiting both the commonalities and the differences across tasks. Compared with training separate models, this can improve the learning efficiency and prediction accuracy of the task-specific models. Multi-task learning is a form of inductive transfer: it improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It is realized by learning tasks in parallel with a shared representation, so that what is learned for each task can help the other tasks be learned better.
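As a concrete illustration of a shared representation, below is a minimal sketch of hard parameter sharing, the most common neural-network form of MTL. It is written in PyTorch; the layer sizes and the two hypothetical tasks (a 10-class classification head and a regression head) are illustrative assumptions, not something specified in the text above.

```python
# Minimal hard-parameter-sharing sketch: one shared trunk, one head per task.
import torch
import torch.nn as nn

class SharedMultiTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, task_a_out=10, task_b_out=1):
        super().__init__()
        # Shared trunk: its parameters receive gradients from every task.
        self.trunk = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads: each task keeps its own output layer.
        self.head_a = nn.Linear(hidden, task_a_out)   # hypothetical classification task
        self.head_b = nn.Linear(hidden, task_b_out)   # hypothetical regression task

    def forward(self, x):
        z = self.trunk(x)                  # shared representation
        return self.head_a(z), self.head_b(z)

model = SharedMultiTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 32)                    # stand-in inputs shared by both tasks
y_a = torch.randint(0, 10, (16,))          # labels for task A
y_b = torch.randn(16, 1)                   # targets for task B

logits_a, pred_b = model(x)
# Joint loss: the training signals of both tasks shape the shared trunk.
loss = nn.functional.cross_entropy(logits_a, y_a) + nn.functional.mse_loss(pred_b, y_b)
opt.zero_grad()
loss.backward()
opt.step()
```

Gradients from both task losses flow into the shared trunk, which is how the training signal of one task acts as an inductive bias for the other.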

[Overview] This article presents Stanford's latest course, CS330: Deep Multi-Task and Meta-Learning. The instructor is Chelsea Finn, an assistant professor of Computer Science and Electrical Engineering at Stanford and a leading researcher in meta-learning.

Her PhD thesis, Learning to Learn with Gradients, is well worth reading: it systematically lays out meta-learning together with her MAML method and its subsequent improvements. Starting from the meta-learning problem, she develops the MAML framework and then explores a series of applications built on it.
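For readers new to MAML, here is a minimal sketch of its inner/outer loop on toy sine-wave regression, assuming PyTorch. The tiny two-layer network, the task distribution, and the learning rates are illustrative choices, not the exact setup used in the thesis.

```python
# Minimal second-order MAML sketch on sine-wave regression tasks.
import math
import torch

def forward(params, x):
    # Small MLP written in functional style so adapted parameters can be swapped in.
    w1, b1, w2, b2 = params
    h = torch.tanh(x @ w1 + b1)
    return h @ w2 + b2

def sample_task():
    # Each "task" is a sine wave with its own amplitude and phase.
    amp = torch.rand(1) * 4.0 + 0.1
    phase = torch.rand(1) * math.pi
    def data(n):
        x = torch.rand(n, 1) * 10.0 - 5.0
        return x, amp * torch.sin(x + phase)
    return data

# Meta-parameters: the shared initialization that MAML learns to adapt from.
params = [(torch.randn(1, 40) * 0.1).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.1).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    task = sample_task()
    x_s, y_s = task(10)    # support set, used for the inner update
    x_q, y_q = task(10)    # query set, used for the meta (outer) loss

    # Inner loop: one gradient step on the support set; create_graph=True keeps
    # the graph so the outer update can differentiate through the adaptation.
    loss_s = ((forward(params, x_s) - y_s) ** 2).mean()
    grads = torch.autograd.grad(loss_s, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]

    # Outer loop: evaluate the adapted parameters on the query set and update
    # the shared initialization.
    loss_q = ((forward(adapted, x_q) - y_q) ** 2).mean()
    meta_opt.zero_grad()
    loss_q.backward()
    meta_opt.step()
```

Full MAML averages the outer loss over a batch of tasks per meta-update; a single task per step is used here only to keep the sketch short.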

Introduction

Although deep learning has achieved remarkable success on supervised and reinforcement learning problems such as image classification, speech recognition, and game playing, these models are largely specialized to the single task they are trained on. This course covers settings in which multiple tasks must be solved, and studies how the structure arising from multiple tasks can be exploited to learn more efficiently.

**This includes:** goal-conditioned reinforcement learning techniques that exploit the structure of the provided goal space to learn many tasks quickly (a sketch follows this paragraph); meta-learning methods that aim to learn efficient learning algorithms capable of quickly learning new tasks; and curriculum and lifelong learning, where the problem requires learning a sequence of tasks and exploiting their shared structure to enable knowledge transfer.
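As a rough illustration of the first item, the sketch below shows a goal-conditioned policy in PyTorch: the goal is simply appended to the state as an extra input, so one set of parameters covers the entire goal space instead of a single task. The dimensions and the sparse goal-reaching reward are illustrative assumptions.

```python
# Minimal goal-conditioned policy sketch: the goal is part of the policy's input.
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    def __init__(self, state_dim=8, goal_dim=3, action_dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),   # continuous actions in [-1, 1]
        )

    def forward(self, state, goal):
        # Conditioning on the goal turns a family of tasks into one parameterized task.
        return self.net(torch.cat([state, goal], dim=-1))

def sparse_goal_reward(achieved, goal, tol=0.05):
    # 0 reward when within tol of the goal, -1 otherwise (a common goal-reaching convention).
    return ((achieved - goal).norm(dim=-1) < tol).float() - 1.0

policy = GoalConditionedPolicy()
state, goal = torch.randn(4, 8), torch.randn(4, 3)
action = policy(state, goal)                           # (4, 2): one action per (state, goal)
reward = sparse_goal_reward(torch.randn(4, 3), goal)   # stand-in "achieved" positions
```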

This is a graduate-level course. By the end of the course, students will be able to understand and implement state-of-the-art multi-task learning and meta-learning algorithms, and will be prepared to conduct research on these topics.

Course link: https://cs330.stanford.edu/

Course Schedule

01: Course introduction, problem definitions, applications
02: Supervised multi-task learning, black-box meta-learning
03: TensorFlow tutorial
04: Optimization-based meta-learning
05: Few-shot learning via metric learning (see the sketch after this schedule)
06: Bayesian meta-learning
07: Reinforcement learning primer, multi-task RL, goal-conditioned RL
08: Meta-RL, learning to explore
09: Model-based RL for multi-task learning, meta model-based RL
10: Lifelong learning: problem statement, forward & backward transfer
11: Frontiers: memorization, unsupervised meta-learning, open problems
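As a rough illustration of lecture 05 (few-shot learning via metric learning), here is a minimal, prototypical-networks-style sketch in PyTorch; the embedding network, the 5-way 1-shot episode, and the random stand-in data are illustrative placeholders rather than course material.

```python
# Metric-learning few-shot classification: compare query embeddings to class prototypes.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))

def prototypical_logits(support_x, support_y, query_x, n_way):
    """Classify queries by distance to the mean embedding (prototype) of each class."""
    z_support = embed(support_x)                         # (n_support, 32)
    z_query = embed(query_x)                             # (n_query, 32)
    protos = torch.stack([z_support[support_y == c].mean(0) for c in range(n_way)])
    # Negative squared Euclidean distance serves as the logit for each class.
    return -torch.cdist(z_query, protos) ** 2

# One 5-way, 1-shot episode with random stand-in data.
n_way, n_shot, n_query = 5, 1, 15
support_x = torch.randn(n_way * n_shot, 64)
support_y = torch.arange(n_way).repeat_interleave(n_shot)
query_x = torch.randn(n_way * n_query, 64)
logits = prototypical_logits(support_x, support_y, query_x, n_way)
pred = logits.argmax(dim=1)                              # predicted class per query
```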

Latest Papers

Genomics is the foundation of precision medicine, global food security and virus surveillance. Exact-match is one of the most essential operations, widely used in almost every step of genomics such as alignment, assembly, annotation, and compression. Modern genomics adopts the Ferragina-Manzini Index (FM-Index), which augments the space-efficient Burrows-Wheeler transform (BWT) with additional data structures to permit ultra-fast exact-match operations. However, the FM-Index is notorious for its poor spatial locality and random memory access pattern. Prior works create GPU-, FPGA-, ASIC- and even process-in-memory (PIM)-based accelerators to boost FM-Index search throughput. Although FM-Index PIMs achieve state-of-the-art search throughput, like all prior conventional accelerators they process only one DNA symbol after each DRAM row activation, and therefore suffer from poor memory bandwidth utilization. In this paper, we propose a hardware accelerator, EXMA, to enhance FM-Index search throughput. We first create a novel EXMA table with a multi-task-learning (MTL)-based index to process multiple DNA symbols with each DRAM row activation. We then build an accelerator to search over an EXMA table. We propose 2-stage scheduling to increase the cache hit rate of our accelerator. We introduce a dynamic page policy to improve the row-buffer hit rate of DRAM main memory. We also present CHAIN compression to reduce the data structure size of EXMA tables. Compared to state-of-the-art FM-Index PIMs, EXMA improves search throughput by $4.9\times$ and search throughput per Watt by $4.8\times$.
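For context, the exact-match primitive these accelerators target is FM-Index backward search, which can be sketched in a few lines of plain Python. The version below rebuilds the BWT from sorted rotations and uses naive O(n) rank queries, so it is only practical for tiny inputs; real indexes use sampled occurrence tables, and EXMA's key idea of processing multiple symbols per DRAM row activation is not modeled here.

```python
# Toy FM-Index backward search: count exact occurrences of a pattern in a text.
def bwt(text):
    """Burrows-Wheeler transform via sorted rotations (fine for small inputs)."""
    text += "$"                                    # unique sentinel, smallest symbol
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def fm_count(bwt_str, pattern):
    """Narrow a suffix-array interval [lo, hi) one pattern symbol at a time."""
    # C[c] = number of symbols in the text that are strictly smaller than c.
    counts = {}
    for ch in bwt_str:
        counts[ch] = counts.get(ch, 0) + 1
    C, total = {}, 0
    for ch in sorted(counts):
        C[ch] = total
        total += counts[ch]
    rank = lambda ch, i: bwt_str[:i].count(ch)     # naive occurrences of ch in bwt[0:i]

    lo, hi = 0, len(bwt_str)
    for ch in reversed(pattern):                   # one symbol processed per iteration
        if ch not in C:
            return 0
        lo = C[ch] + rank(ch, lo)
        hi = C[ch] + rank(ch, hi)
        if lo >= hi:
            return 0
    return hi - lo

b = bwt("ACGTACGTACG")
print(fm_count(b, "ACG"))                          # -> 3
```

Each loop iteration consumes one symbol and requires rank (occurrence-count) lookups, which is exactly the per-symbol memory access pattern the abstract describes as the bottleneck.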
