This tutorial is written for those who would like an introduction to reinforcement learning (RL). The aim is to provide an intuitive presentation of the ideas rather than concentrate on the deeper mathematics underlying the topic. RL is generally used to solve the so-called Markov decision problem (MDP). In other words, the problem that you are attempting to solve with RL should be an MDP or one of its variants. The theory of RL relies on dynamic programming (DP) and artificial intelligence (AI). We will begin with a quick description of MDPs, discuss what we mean by "complex" and "large-scale" MDPs, and then explain why RL is needed to solve complex and large-scale MDPs. The semi-Markov decision problem (SMDP) will also be covered.

The tutorial is meant to serve as an introduction to these topics and is based mostly on the book "Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning" [4]. The book discusses this topic in greater detail in the context of simulators. There are at least two other textbooks that I would recommend reading: (i) Neuro-Dynamic Programming [2] (lots of details on convergence analysis) and (ii) Reinforcement Learning: An Introduction [11] (lots of details on underlying AI concepts). A more recent tutorial on this topic is [8]. This tutorial has two sections:
• Section 2 discusses MDPs and SMDPs.
• Section 3 discusses RL.
By the end of this tutorial, you should be able to
• Identify problem structures that can be set up as MDPs / SMDPs.
• Use some RL algorithms.
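Since the theory rests on dynamic programming, a minimal sketch of value iteration on a toy two-state, two-action MDP may help fix ideas before we proceed; the transition probabilities and rewards below are illustrative assumptions, not taken from the tutorial.

```python
# Value iteration (a DP method) on a hypothetical two-state, two-action MDP.
# P[a, s, s2] is the transition probability and R[a, s, s2] the immediate
# reward; all numbers here are illustrative assumptions.
import numpy as np

gamma = 0.95                      # discount factor
P = np.array([[[0.7, 0.3],
               [0.4, 0.6]],
              [[0.9, 0.1],
               [0.2, 0.8]]])
R = np.array([[[6.0, -5.0],
               [7.0, 12.0]],
              [[10.0, 17.0],
               [-14.0, 13.0]]])

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality backup:
    # Q(a, s) = sum_s2 P[a, s, s2] * (R[a, s, s2] + gamma * V[s2])
    Q = (P * (R + gamma * V)).sum(axis=2)
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-8:
        break
    V = V_new

print("Optimal values:", V)
print("Optimal policy (action per state):", Q.argmax(axis=0))
```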

Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize cumulative reward. Alongside supervised learning and unsupervised learning, it is one of the three basic machine learning paradigms. Reinforcement learning differs from supervised learning in that labeled input/output pairs need not be presented and suboptimal actions need not be explicitly corrected; instead, the focus is on striking a balance between exploration (of uncharted territory) and exploitation (of current knowledge). The environment is typically stated in the form of a Markov decision process (MDP), because many reinforcement learning algorithms for this setting use dynamic programming techniques. The main difference between classical dynamic programming methods and reinforcement learning algorithms is that the latter do not assume an exact mathematical model of the MDP and target large MDPs where exact methods become infeasible.
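As a concrete illustration of the model-free point above, here is a hedged sketch of tabular Q-learning: it updates value estimates from sampled transitions alone, without using the MDP's transition model. The `env_step` interface and `start_state` are assumptions for illustration.

```python
# Tabular Q-learning: model-free, learns from sampled transitions only.
# env_step(s, a) -> (next_state, reward, done) is an assumed interface
# standing in for any simulator or real environment.
import random

def q_learning(env_step, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.99, epsilon=0.1, start_state=0):
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            # Epsilon-greedy choice: balance exploration and exploitation.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda a_: Q[s][a_])
            s2, r, done = env_step(s, a)
            # Update toward the sampled Bellman target; no model of the
            # transition probabilities or rewards is ever used.
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q
```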

The term "reinforcement" comes from studies of animal learning in experimental psychology, where it refers to the occurrence of an event, in a proper relation to a response, that tends to increase the probability that the response will occur again in the same situation. Although psychologists did not use the term "reinforcement learning", it has been widely adopted by theorists in artificial intelligence and engineering to refer to learning tasks and algorithms based on this principle of reinforcement. The simplest reinforcement learning methods use the commonsense idea that if an action is followed by a satisfactory state of affairs, or an improvement in the state of affairs, then the tendency to produce that action is strengthened. The concept of reinforcement learning has been present in engineering for decades (e.g., Mendel and McClaren 1970) and in artificial intelligence for decades as well (Minsky 1954, 1961; Samuel 1959; Turing 1950). Only recently, however, have the development and application of reinforcement learning methods occupied large numbers of researchers in these fields. Two fundamental challenges have sparked this interest: (1) designing autonomous robotic agents that can operate under uncertainty in complex dynamic environments, and (2) finding useful approximate solutions to very large-scale dynamic decision problems.

Meta-learning, or learning to learn, is the science of systematically observing how different machine learning approaches perform on a wide range of learning tasks, and then learning from this experience, or meta-data, to learn new tasks much faster than otherwise possible. Not only does this dramatically speed up and improve the design of machine learning pipelines or neural architectures, it also allows us to replace hand-engineered algorithms with novel approaches learned in a data-driven way. In this chapter, we provide an overview of the state of the art in this fascinating and continuously evolving field.

This paper presents the first two editions of the Visual Doom AI Competition, held in 2016 and 2017. The challenge was to create bots that compete in a multi-player deathmatch in a first-person shooter (FPS) game, Doom. The bots had to make their decisions based solely on visual information, i.e., a raw screen buffer. To play well, the bots needed to understand their surroundings, navigate, explore, and handle their opponents at the same time. These aspects, together with the competitive multi-agent aspect of the game, make the competition a unique platform for evaluating state-of-the-art reinforcement learning algorithms. The paper discusses the rules, solutions, results, and statistics that give insight into the agents' behaviors. The best-performing agents are described in more detail. The results of the competition lead to the conclusion that, although reinforcement learning can produce capable Doom bots, they are not yet able to successfully compete against humans in this game. The paper also revisits the ViZDoom environment, a flexible, easy-to-use, and efficient 3D platform for research on vision-based reinforcement learning, based on the well-recognized first-person perspective game Doom.
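For readers who want to try the platform, below is a minimal sketch of a random agent using the `vizdoom` Python package; the scenario config path and the three-button action list are assumptions based on the simple scenarios shipped with ViZDoom.

```python
# Minimal random agent in ViZDoom. Assumes the `vizdoom` package is installed
# and that "basic.cfg" (a scenario file distributed with ViZDoom) is available.
import random
import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("basic.cfg")
game.init()

# One-hot actions over the scenario's available buttons (assumed: 3 buttons).
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

for episode in range(3):
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()
        frame = state.screen_buffer          # the raw pixels agents act on
        game.make_action(random.choice(actions))
    print("Episode reward:", game.get_total_reward())

game.close()
```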

This manuscript surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It covers the general formulation, terminology, and typical experimental implementations of reinforcement learning and reviews competing solution paradigms. To compare the relative merits of various techniques, the survey presents a case study of the Linear Quadratic Regulator (LQR) with unknown dynamics, perhaps the simplest and best-studied problem in optimal control. The manuscript describes how merging techniques from learning theory and control can provide non-asymptotic characterizations of LQR performance and shows that these characterizations tend to match experimental behavior. In turn, when revisiting more complex applications, many of the phenomena observed in LQR persist. In particular, theory and experiment demonstrate the role and importance of models and the cost of generality in reinforcement learning algorithms. The survey concludes with a discussion of some of the challenges in designing learning systems that safely and reliably interact with complex and uncertain environments, and how tools from reinforcement learning and controls might be combined to approach these challenges.
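To sketch the flavor of the LQR-with-unknown-dynamics case study, the snippet below fits the dynamics matrices by least squares from a random-input rollout and then solves the Riccati equation for a certainty-equivalent controller. The system matrices, noise level, and horizon are illustrative assumptions, not the survey's setup.

```python
# Certainty-equivalent LQR for unknown dynamics: estimate (A, B) by least
# squares from one exploratory rollout, then solve the discrete-time
# algebraic Riccati equation. All numerical values are assumptions.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, m, T = 3, 1, 500                      # state dim, input dim, rollout length
A_true = np.array([[1.0, 0.1, 0.0],
                   [0.0, 1.0, 0.1],
                   [0.0, 0.0, 1.0]])
B_true = np.array([[0.0], [0.0], [0.1]])
Q, R = np.eye(n), np.eye(m)              # quadratic state and input costs

# Collect one rollout with exploratory Gaussian inputs and small process noise.
X = np.zeros((T + 1, n))
U = rng.normal(size=(T, m))
for t in range(T):
    X[t + 1] = A_true @ X[t] + B_true @ U[t] + 0.01 * rng.normal(size=n)

# Least-squares fit of [A B] from the regression x_{t+1} ~ [A B] [x_t; u_t].
Z = np.hstack([X[:-1], U])               # shape (T, n + m)
Theta, *_ = np.linalg.lstsq(Z, X[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]

# Solve the Riccati equation for the estimated model and form the gain
# K = (R + B'PB)^{-1} B'PA, giving the control law u = -K x.
P = solve_discrete_are(A_hat, B_hat, Q, R)
K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
print("Estimated gain K:\n", K)
```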
