The Tsetlin Machine – A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic

October 12, 2019 · CreateAMind


The Tsetlin Machine – A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic

Ole-Christoffer Granmo


Abstract

Although simple individually, artificial neurons provide state-of-the-art performance when interconnected in deep networks. Unknown to many, there exists an arguably even simpler and more versatile learning mechanism, namely, the Tsetlin Automaton. Merely by means of a single integer as memory, it learns the optimal action in stochastic environments through increment and decrement operations. In this paper, we introduce the Tsetlin Machine, which solves complex pattern recognition problems with easy-to-interpret propositional formulas, composed by a collective of Tsetlin Automata. To eliminate the longstanding problem of vanishing signal-to-noise ratio, the Tsetlin Machine orchestrates the automata using a novel game. Our theoretical analysis establishes that the Nash equilibria of the game align with the propositional formulas that provide optimal pattern recognition accuracy. This translates to learning without local optima, only global ones. We argue that the Tsetlin Machine finds the propositional formula that provides optimal accuracy, with probability arbitrarily close to unity. In five benchmarks, the Tsetlin Machine provides competitive accuracy compared with SVMs, Decision Trees, Random Forests, Naive Bayes Classifier, Logistic Regression, and Neural Networks. The Tsetlin Machine further has an inherent computational advantage since inputs, patterns, and outputs are all expressed as bits, while recognition and learning rely on bit manipulation. The combination of accuracy, interpretability, and computational simplicity makes the Tsetlin Machine a promising tool for a wide range of domains. Being the first of its kind, we believe the Tsetlin Machine will kick-start new paths of research, with a potentially significant impact on the AI field and the applications of AI.

Keywords: Bandit Problem, Game Theory, Interpretable Pattern Recognition, Propositional Logic, Tsetlin Automata Games, Online Learning
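As a reading aid, here is a minimal Python sketch of the two-action Tsetlin Automaton described in the abstract: a single integer state that is incremented or decremented on reward and penalty. This is our own illustration of the mechanism, not code from the paper; the state layout and environment are assumptions made purely for demonstration.

```python
import random

class TsetlinAutomaton:
    """Two-action Tsetlin Automaton with 2*n states, stored as one integer.

    States 1..n select action 0; states n+1..2n select action 1.
    Learning is nothing more than incrementing/decrementing the state.
    """
    def __init__(self, n=100):
        self.n = n
        self.state = random.choice([n, n + 1])  # start at the decision boundary

    def action(self):
        return 0 if self.state <= self.n else 1

    def reward(self):
        # Reinforce the current action: move deeper into its half of the state space.
        if self.action() == 0:
            self.state = max(1, self.state - 1)
        else:
            self.state = min(2 * self.n, self.state + 1)

    def penalize(self):
        # Weaken the current action: move toward the other action's half.
        if self.action() == 0:
            self.state += 1
        else:
            self.state -= 1

# Toy stochastic environment: action 1 is rewarded 90% of the time,
# action 0 only 40% of the time. The automaton converges to action 1.
ta = TsetlinAutomaton(n=100)
for _ in range(10000):
    a = ta.action()
    p_reward = 0.9 if a == 1 else 0.4
    if random.random() < p_reward:
        ta.reward()
    else:
        ta.penalize()
print(ta.action())  # almost certainly 1
```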




1.5 Paper Contributions

In this paper, we attack the limited pattern expression capability and vanishing signal-to-noise ratio of learning automata-based pattern recognition by introducing the Tsetlin Machine. The contributions of the paper can be summarized as follows:


  • We introduce the Tsetlin Machine, which solves complex pattern recognition problems with propositional formulas, composed by a collective of Tsetlin Automata.


  • We eliminate the longstanding vanishing signal-to-noise ratio problem with a unique decentralized learning scheme based on game theory [30, 2]. The game we have designed allows thousands of Tsetlin Automata to successfully cooperate.


  • The game orchestrated by the Tsetlin Machine is based on resource allocation principles [31], in inter-play with frequent itemset mining [16]. By allocating sparse pattern representation resources according to the frequency of the patterns, the Tsetlin Machine is able to capture intricate unlabelled sub-patterns, for instance addressing the so-called Noisy XOR-problem.


  • Our theoretical analysis establishes that the Nash equilibria of the game are aligned with the propositional formulas that provide optimal pattern recognition accuracy. This translates to learning without local optima, only global ones.


  • We further argue that the Tsetlin Machine finds a propositional formula that provides optimal pattern recognition accuracy, with probability arbitrarily close to unity.


  • The propositional formulas are represented as bit patterns. These bit patterns are relatively easy to interpret, compared to e.g. a neural network (see Table 1 for an example bit pattern). This facilitates human quality assurance and scrutiny, which for instance can be important in safety-critical domains such as medicine.


  • The Tsetlin Machine is a new approach to global construction of decision rules. We demonstrate that decision rules for large-scale pattern recognition can be learned online, under particularly noisy conditions. To establish a common reference point towards related work in interpretable pattern recognition, we include empirical results on decision trees.


  • The Tsetlin Machine is particularly suited for digital computers, being directly based on bit manipulation with AND, OR, and NOT operators. Input, hidden patterns, and output are all represented directly as bits, while recognition and learning rely on manipulating those bits (a minimal bit-level sketch follows this list).


  • In our empirical evaluation on five datasets, the Tsetlin Machine provides competitive performance in comparison with Multilayer Perceptron Networks, Support Vector Machines, Decision Trees, Random Forests, the Naive Bayes Classifier, and Logistic Regression.



  • It further turns out that the Tsetlin Machine requires less data than neural networks, outperforming even the Naive Bayes Classifier in data sparse environments.


  • Overfitting is inherently combated by leveraging frequent itemset mining [16]. Even while accuracy on the training data approaches 99.9%, mean accuracy on the test data continues to increase as well. This is quite different from the behaviour of back-propagation in neural networks, where, without proper regularization mechanisms, accuracy on test data starts to drop at some point.


  • We demonstrate how the Tsetlin Machine can be used as a building block to create more advanced architectures.

  • We believe that the combination of accuracy, interpretability, and computational simplicity makes the Tsetlin Machine a promising tool for a wide range of domains. Being the first of its kind, we further believe it will kick-start completely new paths of research, with a potentially significant impact on the field of AI and the applications of AI.
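To make the bit-level representation concrete, the sketch below (our own illustration, not the paper's reference implementation) evaluates one conjunctive clause over a bit-packed input using only AND and NOT. In the actual Tsetlin Machine, the include/exclude masks are decided by teams of Tsetlin Automata; here they are hard-coded purely for demonstration, and the 64-bit packing scheme is an assumption on our part.

```python
import numpy as np

def evaluate_clause(x_bits, include_pos, include_neg):
    """Evaluate one conjunctive clause over a bit-packed binary input.

    x_bits:      uint64 array, input features packed 64 per word (bit k = feature x_k).
    include_pos: uint64 array, bit set where the literal x_k is included.
    include_neg: uint64 array, bit set where the literal NOT x_k is included.
    The clause is true iff every included positive literal is 1 and every
    included negated literal is 0 -- pure AND/NOT bit manipulation.
    """
    pos_ok = np.all((x_bits & include_pos) == include_pos)
    neg_ok = np.all((~x_bits & include_neg) == include_neg)
    return bool(pos_ok and neg_ok)

# Illustration with 4 features packed into one 64-bit word: x = [1, 0, 1, 1].
x = np.array([0b1101], dtype=np.uint64)           # bits 0, 2, 3 set
clause_pos = np.array([0b0001], dtype=np.uint64)  # clause includes x_0
clause_neg = np.array([0b0010], dtype=np.uint64)  # clause includes NOT x_1
print(evaluate_clause(x, clause_pos, clause_neg))  # True: x_0 = 1 and x_1 = 0
```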




Paper code: https://github.com/cair/pyTsetlinMachine
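The repository above provides a Python implementation. Below is a minimal usage sketch on a Noisy XOR-style dataset, assuming the MultiClassTsetlinMachine interface with fit/predict as documented in the repository's README; exact class and parameter names may differ between versions, and the dataset generation here is our own illustration.

```python
import numpy as np
from pyTsetlinMachine.tm import MultiClassTsetlinMachine

# Noisy XOR-style data: the label is the XOR of the first two bits, with 10%
# label noise; the remaining bits are irrelevant distractors.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 12)).astype(np.uint32)
Y = np.logical_xor(X[:, 0], X[:, 1]).astype(np.uint32)
noise = rng.random(len(Y)) < 0.1
Y[noise] = 1 - Y[noise]

X_train, Y_train = X[:4000], Y[:4000]
X_test, Y_test = X[4000:], Y[4000:]

# 20 clauses, threshold T=15, specificity s=3.9 -- typical small-scale settings.
tm = MultiClassTsetlinMachine(20, 15, 3.9)
tm.fit(X_train, Y_train, epochs=200)
print("Test accuracy:", (tm.predict(X_test) == Y_test).mean())
```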




