【Richard S. Sutton】On "The Bitter Lesson" in AI Research

February 24, 2022  深度强化学习实验室 (Deep Reinforcement Learning Lab)

Website: http://www.neurondance.com/
Forum: http://deeprl.neurondance.com/
Editor: DeepRL



【English Original】

Rich Sutton

March 13, 2019


The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.  There were many examples of AI researchers' belated learning of this bitter lesson, and it is instructive to review some of the most prominent.
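To give a concrete sense of scale, here is a minimal Python sketch under the assumption of a Moore's-law-style doubling of computation-per-dollar roughly every two years (the exact period is an illustrative assumption; the essay's argument only needs the growth to be exponential):

```python
# Minimal sketch: growth in computation available per fixed budget,
# assuming a doubling roughly every two years. The two-year period is
# an illustrative assumption, not a figure from the essay.
DOUBLING_PERIOD_YEARS = 2.0

def compute_multiplier(years: float) -> float:
    """How many times more computation a fixed budget buys after `years`."""
    return 2.0 ** (years / DOUBLING_PERIOD_YEARS)

for years in (3, 10, 20, 70):
    print(f"after {years:2d} years: {compute_multiplier(years):>14,.0f}x")
```

Under this assumption a three-year project sees only about a 3x gain, but over the 70 years the essay surveys the multiplier is on the order of tens of billions, which is why methods that scale with computation eventually win out.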

In computer chess, the methods that defeated the world champion, Kasparov, in 1997, were based on massive, deep search. At the time, this was looked upon with dismay by the majority of computer-chess researchers who had pursued methods that leveraged human understanding of the special structure of chess. When a simpler, search-based approach with special hardware and software proved vastly more effective, these human-knowledge-based chess researchers were not good losers. They said that "brute force" search may have won this time, but it was not a general strategy, and anyway it was not how people played chess. These researchers wanted methods based on human input to win and were disappointed when they did not.
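The "massive, deep search" at issue is, at its core, game-tree search: minimax with alpha-beta pruning, driven as deep as the hardware allows. Below is a minimal, generic Python sketch of that idea; the `game` interface (`legal_moves`, `apply`, `evaluate`, `is_terminal`) is an assumed abstraction for illustration, not Deep Blue's actual code:

```python
# Minimal sketch of depth-limited minimax search with alpha-beta
# pruning. The `game` object is a hypothetical interface, assumed
# here purely for illustration.
def alphabeta(state, depth, alpha, beta, maximizing, game):
    """Return the minimax value of `state`, searching `depth` plies ahead."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)  # static evaluation at the search horizon
    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(state):
            child = game.apply(state, move)
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the opponent will never allow this line
        return value
    else:
        value = float("inf")
        for move in game.legal_moves(state):
            child = game.apply(state, move)
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cutoff
        return value
```

The strength of this approach scales directly with available computation: more compute means deeper search, with no additional chess knowledge required beyond the evaluation function.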

A similar pattern of research progress was seen in computer Go, only delayed by a further 20 years. Enormous initial efforts went into avoiding search by taking advantage of human knowledge, or of the special features of the game, but all those efforts proved irrelevant, or worse, once search was applied effectively at scale. Also important was the use of learning by self play to learn a value function (as it was in many other games and even in chess, although learning did not play a big role in the 1997 program that first beat a world champion). Learning by self play, and learning in general, is like search in that it enables massive computation to be brought to bear. Search and learning are the two most important classes of techniques for utilizing massive amounts of computation in AI research. In computer Go, as in computer chess, researchers' initial effort was directed towards utilizing human understanding (so that less search was needed) and only much later was much greater success had by embracing search and learning.
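In its simplest form, learning a value function by self-play is temporal-difference learning: each position's value estimate is nudged toward the (sign-flipped) estimate of the position that follows it. A minimal tabular sketch, assuming a hypothetical two-player `env` interface; real systems such as TD-Gammon or AlphaGo use neural-network function approximation rather than a table:

```python
import random
from collections import defaultdict

# Minimal sketch of tabular TD(0) value learning from self-play. The
# `env` interface (reset/legal_moves/apply/is_terminal/reward) is an
# assumed abstraction for illustration only.
def self_play_td(env, episodes=10_000, alpha=0.1, epsilon=0.1):
    V = defaultdict(float)  # state -> estimated value for the player to move
    for _ in range(episodes):
        state = env.reset()
        while not env.is_terminal(state):
            moves = env.legal_moves(state)
            if random.random() < epsilon:
                move = random.choice(moves)  # occasional random exploration
            else:
                # Greedy: pick the move leading to the worst position for
                # the opponent (hence the minus sign).
                move = max(moves, key=lambda m: -V[env.apply(state, m)])
            nxt = env.apply(state, move)
            # TD(0) target: terminal reward (assumed to be from the mover's
            # perspective), or the sign-flipped value of the successor.
            target = env.reward(nxt) if env.is_terminal(nxt) else -V[nxt]
            V[state] += alpha * (target - V[state])
            state = nxt
    return V
```

Like search, this procedure consumes as much computation as you give it: more episodes of self-play mean better value estimates, with no game-specific knowledge built in.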

In speech recognition, there was an early competition, sponsored by DARPA, in the 1970s. Entrants included a host of special methods that took advantage of human knowledge---knowledge of words, of phonemes, of the human vocal tract, etc. On the other side were newer methods that were more statistical in nature and did much more computation, based on hidden Markov models (HMMs). Again, the statistical methods won out over the human-knowledge-based methods. This led to a major change in all of natural language processing, gradually over decades, where statistics and computation came to dominate the field. The recent rise of deep learning in speech recognition is the most recent step in this consistent direction. Deep learning methods rely even less on human knowledge, and use even more computation, together with learning on huge training sets, to produce dramatically better speech recognition systems. As in the games, researchers always tried to make systems that worked the way the researchers thought their own minds worked---they tried to put that knowledge in their systems---but it proved ultimately counterproductive, and a colossal waste of researchers' time, when, through Moore's law, massive computation became available and a means was found to put it to good use.
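An HMM-based recognizer models speech as a sequence of hidden states (for example, phones) emitting acoustic observations, and scores competing hypotheses by the likelihood they assign to the audio. A minimal sketch of the forward algorithm, which computes that likelihood; the two-state, two-symbol model below is a toy, not taken from any real recognizer:

```python
# Minimal sketch of the HMM forward algorithm: the likelihood of an
# observation sequence, summed over all hidden state paths. The toy
# model below is purely illustrative.
def forward(observations, A, B, pi):
    """Return P(observations | model) for transition A, emission B, initial pi."""
    n = len(pi)
    # alpha[i] = P(observations so far, current hidden state == i)
    alpha = [pi[i] * B[i][observations[0]] for i in range(n)]
    for obs in observations[1:]:
        alpha = [sum(alpha[j] * A[j][i] for j in range(n)) * B[i][obs]
                 for i in range(n)]
    return sum(alpha)

A  = [[0.7, 0.3], [0.4, 0.6]]   # state transition probabilities
B  = [[0.9, 0.1], [0.2, 0.8]]   # emission probabilities (2 symbols)
pi = [0.5, 0.5]                 # initial state distribution
print(forward([0, 1, 0], A, B, pi))  # likelihood of the sequence 0, 1, 0
```

Everything such a system knows about words and phones enters only through the structure of A and B, whose parameters are estimated statistically from data rather than hand-coded.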

In computer vision, there has been a similar pattern. Early methods conceived of vision as searching for edges, or generalized cylinders, or in terms of SIFT features. But today all this is discarded. Modern deep-learning neural networks use only the notions of convolution and certain kinds of invariances, and perform much better.
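The "notion of convolution" here means a small filter whose weights are shared across every image location, which is what builds in translation invariance without hand-designed features such as edges or SIFT. A minimal NumPy sketch; the hand-set filter below is illustrative only, since in a real network the filter weights are learned:

```python
import numpy as np

# Minimal sketch of a single 2-D convolution (strictly, the
# cross-correlation used by deep-learning libraries): one small filter,
# applied with shared weights at every image position.
def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy example: a filter that responds to left-to-right brightness
# increases, applied to an image whose right half is bright.
image = np.zeros((5, 5)); image[:, 2:] = 1.0
kernel = np.array([[-1.0, 1.0]])
print(conv2d(image, kernel))  # output peaks at the vertical edge
```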

This is a big lesson. As a field, we still have not thoroughly learned it, as we are continuing to make the same kind of mistakes. To see this, and to effectively resist it, we have to understand the appeal of these mistakes. We have to learn the bitter lesson that building in how we think we think does not work in the long run. The bitter lesson is based on the historical observations that 1) AI researchers have often tried to build knowledge into their agents, 2) this always helps in the short term, and is personally satisfying to the researcher, but 3) in the long run it plateaus and even inhibits further progress, and 4) breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness, and often incompletely digested, because it is success over a favored, human-centric approach.

One thing that should be learned from the bitter lesson is the great power of general purpose methods, of methods that continue to scale with increased computation even as the available computation becomes very great. The two methods that seem to scale arbitrarily in this way are search and learning.

The second general point to be learned from the bitter lesson is that the actual contents of minds are tremendously, irredeemably complex; we should stop trying to find simple ways to think about the contents of minds, such as simple ways to think about space, objects, multiple agents, or symmetries. All these are part of the arbitrary, intrinsically-complex, outside world. They are not what should be built in, as their complexity is endless; instead we should build in only the meta-methods that can find and capture this arbitrary complexity. Essential to these methods is that they can find good approximations, but the search for them should be by our methods, not by us. We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

