GNN_Review

A reading report on GNN surveys. The report covers several papers on GNNs, together with a model example written in PyTorch following the paper "The Graph Neural Network Model"; the model is run and validated on synthetic data. The repository structure tree is:

|-/GNN_Review.md         # GNN survey report (Markdown)
|-/GNN_Review1.1.pdf     # GNN survey report (PDF)
|-/README.md             # README
|-/GNN示例代码/           # Example code
  |-images/              # Example images
  |-GNN实例.ipynb         # Notebook (can be run directly with Jupyter)
  |-node_dict.json       # Intermediate dictionary file
|-/pic/                  # Figures for the GNN survey
|-/PyG和Pytorch实现GNN模型 # Docs and code for GNN models implemented with PyG and PyTorch
  |-cora/                # Cora dataset
  |-pic/                 # Document figures
  |-data/                # Dataset folder
  |-Cora数据集.md         # Introduction to the Cora dataset
  |-GNN_Implement_with_Pytorch.ipynb  # GCN and linear GNN examples implemented with PyTorch
  |-GNN_Implemet_with_PyG.ipynb   # GCN example implemented with PyG
  |-GNN与子图匹配.ipynb    # Subgraph matching with a GNN
  |-GNN的Batch示例.ipynb  # Batched GNN training example
  |-PyG.md               # Reading report on the PyG framework
  • The structure of the GNN_Review report is as follows

A graph neural network (GNN) is a connectionist model that captures dependencies in a graph through message passing between its nodes. Unlike standard neural networks, a GNN maintains a state that can represent information from a node's neighborhood at arbitrary depth. In recent years, GNNs have been applied ever more widely in social networks, knowledge graphs, recommender systems, question answering, and even the life sciences.
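
As a rough illustration of this message-passing idea (a minimal sketch only, not the code in this repository; the class name, synthetic graph, and hyperparameters below are made up for illustration), each round lets a node fold in the states of its direct neighbors, so stacking several rounds exposes it to an arbitrarily deep neighborhood:

```python
import torch
import torch.nn as nn

class ToyMessagePassing(nn.Module):
    """One round of message passing: every node averages its neighbors'
    states and updates its own state with a small learnable transform."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def forward(self, h, adj):
        # h: (num_nodes, dim) node states, adj: (num_nodes, num_nodes) 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbor_mean = (adj @ h) / deg                      # aggregate neighbor states
        return self.update(torch.cat([h, neighbor_mean], dim=-1))

# Toy usage on a random synthetic graph
num_nodes, dim = 6, 8
adj = (torch.rand(num_nodes, num_nodes) < 0.3).float()
adj = ((adj + adj.t()) > 0).float().fill_diagonal_(0)       # symmetric, no self-loops
h = torch.randn(num_nodes, dim)

layer = ToyMessagePassing(dim)
for _ in range(3):    # more rounds -> information from deeper neighborhoods
    h = layer(h, adj)
print(h.shape)        # torch.Size([6, 8])
```

In the formulation of "The Graph Neural Network Model" (Scarselli et al.) the state update is iterated to a fixed point under a contraction constraint; the fixed number of rounds above is only a simplification for illustration.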

The Image Processing Handbook has long been regarded as the best overall introduction to computer image processing, covering two-dimensional (2D) and three-dimensional (3D) imaging techniques, image printing and storage methods, image processing algorithms, image and feature measurement, quantitative image analysis, and more.

  • Contains more computationally intensive algorithms than previous editions
  • Offers better organization, more quantitative results, and new material on recent developments
  • Includes completely rewritten and thoroughly revised chapters on 3D imaging and on statistical analysis
  • Contains more than 1,700 references to theory, methods, and applications across a wide range of disciplines
  • Presents more than 500 entirely new figures and images, over two-thirds of them in color

The seventh edition of The Image Processing Handbook provides an accessible and up-to-date treatment of image processing, offering broad coverage and comparisons of algorithms, approaches, and results.

[Overview] Emtiyaz Khan of the RIKEN research center in Tokyo gave a tutorial at SPCOM 2020 on deep learning with Bayesian principles, "Deep Learning with Bayesian Principles", with a 256-slide deck, and has written a recent paper describing how Bayesian methods and deep learning can be combined to learn new algorithms. The work proposes a learning rule based on Bayesian principles that lets us connect a wide variety of learning algorithms: applying the rule yields algorithms across probabilistic graphical models, continuous optimization, deep learning, reinforcement learning, online learning, and black-box optimization. Highly instructive and well worth a look!

Tutorial page: https://ece.iisc.ac.in/~spcom/2020/tutorials.html#Tut6

Deep Learning with Bayesian Principles

Deep learning and Bayesian learning are regarded as two entirely different fields and are typically used in complementary settings. Combining ideas from the two is clearly beneficial, but given their fundamental differences, how can we do so?

This tutorial introduces modern Bayesian principles to bridge this gap. From these principles, a range of learning algorithms can be derived as special cases, from classical algorithms such as linear regression and the forward-backward algorithm to modern deep learning algorithms such as SGD, RMSprop, and Adam. This view then offers new ways to improve various aspects of deep learning, such as uncertainty, robustness, and interpretation. It also enables the design of new methods for challenging problems, such as those arising in active learning, continual learning, and reinforcement learning.
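
As a hedged sketch of the underlying idea (paraphrased from the speaker's related papers on the "Bayesian learning rule"; the notation is approximate and not copied from the tutorial slides): one posits a candidate distribution q_λ(θ) over the parameters from an exponential family with natural parameter λ, and updates λ by a natural-gradient step on the expected loss regularized by the entropy of q:

```latex
% Approximate form of the Bayesian learning rule (paraphrased):
%   lambda : natural parameter of the candidate distribution q_lambda(theta)
%   mu     : corresponding expectation parameter (the gradient w.r.t. mu is the natural gradient w.r.t. lambda)
%   ell    : loss (possibly absorbing a regularizer / negative log-prior), rho_t : step size, H : entropy
\lambda_{t+1} \;=\; \lambda_t \;-\; \rho_t\,
  \widetilde{\nabla}_{\mu}\Big( \mathbb{E}_{q_{\lambda_t}}\!\big[\ell(\theta)\big] \;-\; \mathcal{H}\big(q_{\lambda_t}\big) \Big)
```

Different choices of q and of the approximation used for the expectation then recover different algorithms: a Gaussian with fixed covariance and a first-order (delta) approximation gives plain SGD on the mean, adapting a diagonal covariance gives RMSprop/Adam-like updates, and other exponential-family choices recover classical algorithms, which is how the list of special cases above arises.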

Overall, our goal is to bring Bayesian methods and deep learning closer together than ever before and to encourage them to work jointly, combining their strengths to solve challenging real-world problems.

Martin Grohe is a computer scientist known for his research on parameterized complexity, mathematical logic, finite model theory, logics of graphs, database theory, and descriptive complexity theory. He is a professor of computer science at RWTH Aachen University, where he holds the chair for Logic and Theory of Discrete Systems. In 1999 he received the Heinz Maier-Leibnitz Prize of the German Research Foundation. He was elected an ACM Fellow in 2017 for his "contributions to logic in computer science, database theory, algorithms, and computational complexity".

word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings of Structured Data

Vector representations of graphs and relational structures, whether hand-crafted feature vectors or learned representations, allow us to apply standard data analysis and machine learning techniques to such structures. Methods that produce such embeddings have been studied extensively in the machine learning and knowledge representation literature. From a theoretical perspective, however, vector embeddings have received comparatively little attention. Starting from a survey of embedding techniques already used in practice, this talk presents two theoretical approaches that we see as central to understanding the foundations of vector embeddings. We relate the various approaches to one another and propose directions for future research.

To compute on structured data, typical machine learning algorithms require the data, which is usually symbolic, to be represented as numerical vectors. Vector representations of data range from hand-designed features to learned representations, computed either by dedicated embedding algorithms or implicitly by learning architectures such as graph neural networks. The performance of machine learning methods depends critically on the quality of the vector representation, and a large body of research has therefore proposed a wide range of vector embedding methods for various applications. Most of this work is empirical and usually targets a specific application domain. Given the importance of the topic, there is surprisingly little theoretical work on vector embeddings, especially when they represent structural information beyond metric information (that is, distances in a graph).
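
To make this concrete, here is a tiny hypothetical example (not from the talk; the graph and features are invented) of the simplest kind of hand-designed embedding: mapping each node of a graph to a fixed-length numeric vector, after which any standard learner can consume the data. Learned embeddings (word2vec, node2vec, GNNs) replace exactly this hand-crafted step.

```python
# Toy hand-crafted node embedding: degree, sum of neighbor degrees, triangle count.
from itertools import combinations

edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
nodes = sorted({u for e in edges for u in e})
adj = {u: set() for u in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def embed(u):
    deg = len(adj[u])
    nbr_deg_sum = sum(len(adj[v]) for v in adj[u])
    triangles = sum(1 for v, w in combinations(adj[u], 2) if w in adj[v])
    return [deg, nbr_deg_sum, triangles]

vectors = {u: embed(u) for u in nodes}
print(vectors)   # e.g. node 2 -> [3, 6, 1]: degree 3, neighbor degrees 2+2+2, one triangle
```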

The purpose of this paper is to give an overview of the various embedding techniques for structured data used in practice and to introduce theoretical ideas with which these techniques can be understood and analyzed. The research landscape of vector embeddings is unwieldy: driven by different application areas (such as social network analysis, knowledge graphs, cheminformatics, and computational biology), several communities have studied related questions largely independently of one another. We therefore have to be selective and focus on the common ideas and connections that we see.

Vector embeddings can build a bridge between the "discrete" world of relational data and the "differentiable" world of machine learning, and they therefore hold great potential for database research. However, beyond the binary relations of knowledge graphs, relatively little work has been done on embeddings of relational data. Throughout the paper, I will try to point out potential directions for database-related research questions on vector embeddings.

A roadmap of many of the most important concepts in machine learning, how to learn them, and which tools to use to put them into practice.

Namely:

🤔 Machine learning problems: what does a machine learning problem look like? ♻️ The machine learning process: once you have found a problem, what steps do you take to solve it? 🛠 Machine learning tools: how do you use machine learning tools to build a solution? 🧮 Machine learning math: which parts of mathematics sit behind the machine learning code you write? 📚 Machine learning resources: okay, cool, how do I learn all of this?

Link:

https://github.com/mrdbourke/machine-learning-roadmap

Authors: Wang Dong, Li Jie, Xu Sha. Publisher: Tsinghua University Press. Publication date: October 2019. ISBN: 978-7-302-53187-6. Latest news: the companion slides for the book are freely available; see the link. Link:

http://cslt.riit.tsinghua.edu.cn/news.php?title=News-2020-02-09

Author's preface: Since 2016, almost everyone has been talking about artificial intelligence, from experts and tycoons down to ordinary people. Yet what exactly is artificial intelligence? How does it differ from and relate to traditional science? What are its history and its future directions? For many people these questions are still hazy. The only certainty is that AI technology will have a profound impact on our lives, an impact like that of the steam engine, electricity, and the computer, and will become part of our future everyday life.

I trained in computer science and have worked mainly on speech and language signal processing since 1998. This field is of course part of AI, but most of the time researchers rarely mention AI. There are many reasons; for me it is perhaps a stubbornness about classification: the scope of AI is simply too broad, and telling someone with a smile that "I work on AI" always comes with a twinge of guilt. I suspect this feeling lurks in the subconscious of many front-line researchers.

Despite this natural resistance to the AI label, we remain tied to this old yet young field, and so of course we hope more young people will join the ranks of AI research, and in particular will come to understand AI from a methodological point of view, avoiding conceptual hype and hollowness.

With this in mind, I spent nearly two years writing a set of study notes titled 《现代机器学习技术导论》 (An Introduction to Modern Machine Learning Techniques) [link], which happened to be seen by Li Jie. She suggested that the book should reach more young readers, but that the version at the time would not do; it needed a more accessible and intuitive presentation. Xu Sha of Chongqing Bashu Secondary School agreed, feeling there should be an accessible book that lets senior and even junior high school students understand artificial intelligence, satisfying their thirst for new knowledge without adding to their daily study load, establishing a sound conceptual framework and scientific foundation from the start, and laying the groundwork for future work in this area.

Hence this book. Our aim is simple: to use plain language to tell young readers what artificial intelligence is, including what its mainstream techniques are, where they came from, and where they are going. Most importantly, we hope to provide a series of small experiments so that students can implement some interesting AI systems with their own hands; if that cultivates an interest in the subject, so much the better.

Many teachers and students offered generous help during the writing of this book. Zhu Xiaoyan of Tsinghua University reviewed the whole manuscript, and Zhou Qiang and Liu Huaping reviewed Chapters 4 and 5, respectively. Dr. Cai Yunqi of Tsinghua University's speech and language laboratory took part in the proofreading, and interns Du Wenqiang, Zhang Yang, Wu Jiayao, Qi Zhaodi, Yu Jiawei, Jiang Xiuqi, Liu Yibo, Wang Yang, and others helped design the example experiments. Finally, Liu Hanpeng of Tsinghua University Press put a great deal of effort into the book's publication. Our thanks to them all!

The Natural Language Processing Lab at Northeastern University recently released on GitHub a collection of the latest survey papers on natural language processing and machine learning, 358 papers in all, covering a wide range of ML & NLP topics. It is an excellent guide!

Link: https://github.com/NiuTrans/ABigSurvey#architectures

A Survey of Surveys (NLP & ML)

Natural Language Processing Lab., School of Computer Science and Engineering, Northeastern University

NiuTrans Research

In this document, we survey hundreds of survey papers on Natural Language Processing (NLP) and Machine Learning (ML). We categorize these papers into popular topics and do simple counting for some interesting problems. In addition, we show the list of the papers with urls (358 papers).

Categorization

We follow the ACL and ICML submission guidelines of recent years, covering a broad range of areas in NLP and ML. The categorization is as follows:

To reduce class imbalance, we separate some of the hot sub-topics from the original categorization of ACL and ICML submissions. E.g., NER is a first-level area in our categorization because it is the focus of several surveys.

Statistics

We show the number of papers in each area in Figures 1-2.

Figure 1: # of papers in each NLP area.

Figure 2: # of papers in each ML area.

Also, we plot the number of papers as a function of publication year (see Figure 3).

Figure 3: # of papers vs publication year.

In addition, we generate word clouds to show hot topics in these surveys (see Figures 4-5).

Figure 4: The word cloud for NLP.

Figure 5: The word cloud for ML.

The NLP Paper List

Computational Social Science and Social Media

  1. Computational Sociolinguistics: A Survey. Computational Linguistics 2016 paper

    Dong Nguyen, A Seza Dogruoz, Carolyn Penstein Rose, Franciska De Jong

Dialogue and Interactive Systems

  1. A Comparative Survey of Recent Natural Language Interfaces for Databases. VLDB 2019 paper

    Katrin Affolter, Kurt Stockinger, Abraham Bernstein

  2. A Survey of Arabic Dialogues Understanding for Spontaneous Dialogues and Instant Message. arXiv 2015 paper

    AbdelRahim A. Elmadany, Sherif M. Abdou, Mervat Gheith

  3. A Survey of Available Corpora for Building Data-Driven Dialogue Systems. arXiv 2015 paper

    Iulian Vlad Serban, Ryan Lowe, Peter Henderson, Laurent Charlin, Joelle Pineau

  4. A Survey of Document Grounded Dialogue Systems. arXiv 2020 paper

    Longxuan Ma, Wei-Nan Zhang, Mingda Li, Ting Liu

  5. A Survey of Natural Language Generation Techniques with a Focus on Dialogue Systems - Past, Present and Future Directions. arXiv 2019 paper

    Sashank Santhanam, Samira Shaikh

  6. A Survey on Dialog Management: Recent Advances and Challenges. arXiv 2020 paper

    Yinpei Dai, Huihua Yu, Yixuan Jiang, Chengguang Tang, Yongbin Li, Jian Sun

  7. A Survey on Dialogue Systems: Recent Advances and New Frontiers. Sigkdd Explorations 2017 paper

    Hongshen Chen, Xiaorui Liu, Dawei Yin, Jiliang Tang

  8. Challenges in Building Intelligent Open-domain Dialog Systems. arXiv 2019 paper

    Minlie Huang, Xiaoyan Zhu, Jianfeng Gao

  9. Neural Approaches to Conversational AI. ACL 2018 paper

    Jianfeng Gao, Michel Galley, Lihong Li

  10. Recent Advances and Challenges in Task-oriented Dialog System. arXiv 2020 paper

    Zheng Zhang, Ryuichi Takanobu, Minlie Huang, Xiaoyan Zhu

Generation

  1. A bit of progress in language modeling. arXiv 2001 paper

    Joshua T. Goodman

  2. A Survey of Paraphrasing and Textual Entailment Methods. Journal of Artificial Intelligence Research 2010 paper

    Ion Androutsopoulos, Prodromos Malakasiotis

  3. A Survey on Neural Network Language Models. arXiv 2019 paper

    Kun Jing, Jungang Xu

  4. Neural Text Generation: Past, Present and Beyond. arXiv 2018 paper

    Sidi Lu, Yaoming Zhu, Weinan Zhang, Jun Wang, Yong Yu

  5. Pre-trained Models for Natural Language Processing : A Survey. arXiv 2020 paper

    Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang

  6. Recent Advances in Neural Question Generation. arXiv 2019 paper

    Liangming Pan, Wenqiang Lei, Tat-Seng Chua, Min-Yen Kan

  7. Recent Advances in SQL Query Generation: A Survey. arXiv 2020 paper

    Jovan Kalajdjieski, Martina Toshevska, Frosina Stojanovska

  8. Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research 2018 paper

    Albert Gatt, Emiel Krahmer

Information Extraction

  1. A Survey of Deep Learning Methods for Relation Extraction. arXiv 2017 paper

    Shantanu Kumar

  2. A Survey of Event Extraction From Text. IEEE 2019 paper

    Wei Xiang, Bang Wang

  3. A Survey of Neural Network Techniques for Feature Extraction from Text. arXiv 2017 paper

    Vineet John

  4. A Survey on Open Information Extraction. COLING 2018 paper

    Christina Niklaus, Matthias Cetto, André Freitas, Siegfried Handschuh

  5. A Survey on Temporal Reasoning for Temporal Information Extraction from Text (Extended Abstract). arXiv 2019 paper

    Artuur Leeuwenberg, Marie-Francine Moens

  6. Automatic Extraction of Causal Relations from Natural Language Texts: A Comprehensive Survey. arXiv 2016 paper

    Nabiha Asghar

  7. Content Selection in Data-to-Text Systems: A Survey. arXiv 2016 paper

    Dimitra Gkatzia

  8. Keyphrase Generation: A Multi-Aspect Survey. FRUCT 2019 paper

    Erion Cano, Ondrej Bojar

  9. More Data, More Relations, More Context and More Openness: A Review and Outlook for Relation Extraction. arXiv 2020 paper

    Xu Han, Tianyu Gao, Yankai Lin, Hao Peng, Yaoliang Yang, Chaojun Xiao, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou

  10. Relation Extraction : A Survey. arXiv 2017 paper

    Sachin Pawar, Girish K. Palshikar, Pushpak Bhattacharyya

  11. Short Text Topic Modeling Techniques, Applications, and Performance: A Survey. arXiv 2019 paper

    Jipeng Qiang, Zhenyu Qian, Yun Li, Yunhao Yuan, Xindong Wu

Information Retrieval and Text Mining

  1. A Brief Survey of Text Mining: Classification, Clustering and Extraction Techniques. arXiv 2017 paper

    Mehdi Allahyari, Seyed Amin Pouriyeh, Mehdi Assefi, Saied Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, Krys Kochut

  2. A survey of methods to ease the development of highly multilingual text mining applications. language resources and evaluation 2012 paper

    Ralf Steinberger

  3. Opinion Mining and Analysis: A survey. IJNLC 2013 paper

    Arti Buche, M. B. Chandak, Akshay Zadgaonkar

Interpretability and Analysis of Models for NLP

  1. Analysis Methods in Neural Language Processing: A Survey. NACCL 2018 paper

    Yonatan Belinkov, James R. Glass

  2. Analyzing and Interpreting Neural Networks for NLP: A Report on the First BlackboxNLP Workshop. EMNLP 2019 paper

    Afra Alishahi, Grzegorz Chrupala, Tal Linzen

  3. Beyond Leaderboards: A survey of methods for revealing weaknesses in Natural Language Inference data and models. arXiv 2020 paper

    Viktor Schlegel, Goran Nenadic, Riza Batista-Navarro

  4. Visualizing Natural Language Descriptions: A Survey. ACM 2016 paper

    Kaveh Hassani, Won-Sook Lee

  5. When do Word Embeddings Accurately Reflect Surveys on our Beliefs About People?. ACL 2020 paper

    Kenneth Joseph, Jonathan H. Morgan

Knowledge Graph

  1. A survey of techniques for constructing chinese knowledge graphs and their applications. mdpi 2018 paper

    Tianxing Wu, Guilin Qi, Cheng Li, Meng Wang

  2. A Survey on Knowledge Graphs: Representation, Acquisition and Applications. arXiv 2020 paper

    Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, Philip S. Yu

  3. Knowledge Graph Embedding for Link Prediction: A Comparative Analysis. arXiv 2016 paper

    Andrea Rossi, Donatella Firmani, Antonio Matinata, Paolo Merialdo, Denilson Barbosa

  4. Knowledge Graph Embedding: A Survey of Approaches and Applications. IEEE 2017 paper

    Quan Wang, Zhendong Mao, Bin Wang, Li Guo

  5. Knowledge Graphs. arXiv 2020 paper

    Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, José Emilio Labra Gayo, Sabrina Kirrane, Sebastian Neumaier, Axel Polleres, Roberto Navigli, Axel-Cyrille Ngonga Ngomo, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan F. Sequeda, Steffen Staab, Antoine Zimmermann

Language Grounding to Vision and Robotics and Beyond

  1. Emotionally-Aware Chatbots: A Survey. arXiv 2018 paper

    Endang Wahyu Pamungkas

  2. Trends in Integration of Vision and Language Research: A Survey of Tasks, Datasets, and Methods. arXiv 2019 paper

    Aditya Mogadala, Marimuthu Kalimuthu, Dietrich Klakow

Linguistic Theories and Cognitive Modeling and Psycholinguistics

  1. Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing. Comput. Linguistics 45(3) 2019 paper

    Edoardo Maria Ponti, Helen O'Horan, Yevgeni Berzak, Ivan Vulic, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, Anna Korhonen

  2. Survey on the Use of Typological Information in Natural Language Processing. COLING 2016 paper

    Helen O'Horan, Yevgeni Berzak, Ivan Vulic, Roi Reichart, Anna Korhonen

Machine Learning for NLP

  1. A comprehensive survey of mostly textual document segmentation algorithms since 2008. Pattern Recognition 2017 paper

    Sébastien Eskenazi, Petra Gomez-Kramer, Jean-Marc Ogier

  2. A Primer on Neural Network Models for Natural Language Processing. arXiv 2015 paper

    Yoav Goldberg

  3. A Survey Of Cross-lingual Word Embedding Models. Journal of Artificial Intelligence Research 2019 paper

    Sebastian Ruder, Ivan Vulic, Anders Sogaard

  4. A Survey of Neural Networks and Formal Languages. arXiv 2020 paper

    Joshua Ackerman, George Cybenko

  5. A Survey of the Usages of Deep Learning in Natural Language Processing. IEEE 2018 paper

    Daniel W. Otter, Julian R. Medina, Jugal K. Kalita

  6. A Survey on Contextual Embeddings. arXiv 2020 paper

    Qi Liu, Matt J. Kusner, Phil Blunsom

  7. Adversarial Attacks and Defense on Texts: A Survey. arXiv 2020 paper

    Aminul Huq, Mst. Tasnim Pervin

  8. Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey. arXiv 2019 paper

    Wei Emma Zhang, Quan Z Sheng, Ahoud Alhazmi, Chenliang Li

  9. An Introductory Survey on Attention Mechanisms in NLP Problems. IntelliSys 2019 paper

    Dichao Hu

  10. Attention in Natural Language Processing. arXiv 2019 paper

    Andrea Galassi, Marco Lippi, Paolo Torroni

  11. From static to dynamic word representations: a survey. ICMLC 2020 paper

    Yuxuan Wang, Yutai Hou, Wanxiang Che, Ting Liu

  12. From Word to Sense Embeddings: A Survey on Vector Representations of Meaning. Journal of Artificial Intelligence Research 2018 paper

    Jose Camachocollados, Mohammad Taher Pilehvar

  13. Natural Language Processing Advancements By Deep Learning: A Survey. arXiv 2020 paper

    Amirsina Torfi, Rouzbeh A. Shirvani, Yaser Keneshloo, Nader Tavvaf, Edward A. Fox

  14. Neural Network Models for Paraphrase Identification, Semantic Textual Similarity, Natural Language Inference, and Question Answering. COLING 2018 paper

    Wuwei Lan, Wei Xu

  15. Recent Trends in Deep Learning Based Natural Language Processing. IEEE 2018 paper

    Tom Young, Devamanyu Hazarika, Soujanya Poria, Erik Cambria

  16. Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey. arXiv 2017 paper

    Lorenzo Ferrone, Fabio Massimo Zanzotto

  17. Towards a Robust Deep Neural Network in Texts: A Survey. arXiv 2020 paper

    Wenqi Wang, Lina Wang, Run Wang, Zhibo Wang, Aoshuang Ye

  18. Word Embeddings: A Survey. arXiv 2019 paper

    Felipe Almeida, Geraldo Xexéo

Machine Translation

  1. A Brief Survey of Multilingual Neural Machine Translation. arXiv 2019 paper

    Raj Dabre, Chenhui Chu, Anoop Kunchukuttan

  2. A Comprehensive Survey of Multilingual Neural Machine Translation. arXiv 2020 paper

    Raj Dabre, Chenhui Chu, Anoop Kunchukuttan

  3. A Survey of Deep Learning Techniques for Neural Machine Translation. arXiv 2020 paper

    Shuoheng Yang, Yuxin Wang, Xiaowen Chu

  4. A Survey of Domain Adaptation for Neural Machine Translation. COLING 2018 paper

    Chenhui Chu, Rui Wang

  5. A Survey of Methods to Leverage Monolingual Data in Low-resource Neural Machine Translation. arXiv 2019 paper

    Ilshat Gibadullin, Aidar Valeev, Albina Khusainova, Adil Mehmood Khan

  6. A Survey of Multilingual Neural Machine Translation. arXiv 2020 paper

    Raj Dabre, Chenhui Chu, Anoop Kunchukuttan

  7. A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena. Comput Linguistics 2016 paper

    Arianna Bisazza, Marcello Federico

  8. A Survey on Document-level Machine Translation: Methods and Evaluation. arXiv 2019 paper

    Sameen Maruf, Fahimeh Saleh, Gholamreza Haffari

  9. Machine Translation Approaches and Survey for Indian Languages. arXiv 2017 paper

    Nadeem Jadoon Khan, Waqas Anwar, Nadir Durrani

  10. Machine Translation Evaluation Resources and Methods: A Survey. arXiv 2018 paper

    Lifeng Han

  11. Machine Translation using Semantic Web Technologies: A Survey. Journal of Web Semantics 2018 paper

    Diego Moussallem, Matthias Wauer, Axelcyrille Ngonga Ngomo

  12. Machine-Translation History and Evolution: Survey for Arabic-English Translations. arXiv 2017 paper

    Nabeel T. Alsohybe, Neama Abdulaziz Dahan, Fadl Mutaher Baalwi

  13. Neural Machine Translation and Sequence-to-Sequence Models: A Tutorial. arXiv 2017 paper

    Graham Neubig

  14. Neural Machine Translation: A Review. arXiv 2019 paper

    Felix Stahlberg

  15. Neural Machine Translation: Challenges, Progress and Future. arXiv 2020 paper

    Jiajun Zhang, Chengqing Zong

  16. The Query Translation Landscape: a Survey. arXiv 2019 paper

    Mohamed Nadjib Mami, Damien Graux, Harsh Thakkar, Simon Scerri, Soren Auer, Jens Lehmann

Natural Language Processing

  1. A Survey and Classification of Controlled Natural Languages. Comput. Linguistics 2014 paper

    Tobias Kuhn

  2. Jumping NLP curves: A review of natural language processing research. IEEE 2014 paper

    Erik Cambria, Bebo White

  3. Natural Language Processing - A Survey. arXiv 2012 paper

    Kevin Mote

  4. Natural Language Processing: State of The Art, Current Trends and Challenges. arXiv 2017 paper

    Diksha Khurana, Aditya Koli, Kiran Khatter, Sukhdev Singh

NER

  1. A survey of named entity recognition and classification. Lingvistic Investigationes 2007 paper

    David Nadeau, Satoshi Sekine

  2. A Survey of Named Entity Recognition in Assamese and other Indian Languages. arXiv 2014 paper

    Gitimoni Talukdar, Pranjal Protim Borah, Arup Baruah

  3. A Survey on Deep Learning for Named Entity Recognition. arXiv 2018 paper

    Jing Li, Aixin Sun, Jianglei Han, Chenliang Li

  4. A Survey on Recent Advances in Named Entity Recognition from Deep Learning models. COLING 2019 paper

    Vikas Yadav, Steven Bethard

  5. Design Challenges and Misconceptions in Neural Sequence Labeling. COLING 2018 paper

    Jie Yang, Shuailong Liang, Yue Zhang

  6. Neural Entity Linking: A Survey of Models based on Deep Learning. arXiv 2020 paper

    Ozge Sevgili, Artem Shelmanov, Mikhail Arkhipov, Alexander Panchenko, Chris Biemann

NLP Applications

  1. A Comprehensive Survey of Grammar Error Correction. arXiv 2020 paper

    Yu Wang, Yuelin Wang, Jie Liu, Zhuo Liu

  2. A Short Survey of Biomedical Relation Extraction Techniques. arXiv 2017 paper

    Elham Shahab

  3. A Survey on Natural Language Processing for Fake News Detection. LREC 2020 paper

    Ray Oshikawa, Jing Qian, William Yang Wang

  4. Automatic Language Identification in Texts: A Survey. J. Artif. Intell. Res. 65 2019 paper

    Tommi Jauhiainen

  5. Disinformation Detection: A review of linguistic feature selection and classification models in news veracity assessments. arXiv 2019 paper

    Jillian Tompkins

  6. Extraction and Analysis of Fictional Character Networks: A Survey. ACM 2019 paper

    Xavier Bost (LIA), Vincent Labatut (LIA)

  7. Fake News Detection using Stance Classification: A Survey. arXiv 2019 paper

    Anders Edelbo Lillie, Emil Refsgaard Middelboe

  8. Fake News: A Survey of Research, Detection Methods, and Opportunities. ACM 2018 paper

    Xinyi Zhou, Reza Zafarani

  9. Image Captioning based on Deep Learning Methods: A Survey. arXiv 2019 paper

    Yiyu Wang, Jungang Xu, Yingfei Sun, Ben He

  10. SECNLP: A Survey of Embeddings in Clinical Natural Language Processing. J. Biomed. Informatics 2019 paper

    Kalyan KS, S Sangeetha

  11. Survey of Text-based Epidemic Intelligence: A Computational Linguistic Perspective. ACM 2019 paper

    Aditya Joshi, Sarvnaz Karimi, Ross Sparks, Cecile Paris, C Raina MacIntyre

  12. Text Detection and Recognition in the Wild: A Review. arXiv 2020 paper

    Zobeir Raisi, Mohamed A. Naiel, Paul Fieguth, Steven Wardell, John Zelek

  13. Text Recognition in the Wild: A Survey. arXiv 2020 paper

    Xiaoxue Chen, Lianwen Jin, Yuanzhi Zhu, Canjie Luo, Tianwei Wang

Question Answering

  1. A survey on question answering technology from an information retrieval perspective. Information Sciences 2011 paper

    Oleksandr Kolomiyets, Marie-Francine Moens

  2. A Survey on Why-Type Question Answering Systems. arXiv 2019 paper

    Manvi Breja, Sanjay Kumar Jain

  3. Core techniques of question answering systems over knowledge bases: a survey. SpringerLink 2017 paper

    Dennis Diefenbach, Vanessa Lopez, Kamal Singh & Pierre Maret

  4. Introduction to Neural Network based Approaches for Question Answering over Knowledge Graphs. arXiv 2019 paper

    Nilesh Chakraborty, Denis Lukovnikov, Gaurav Maheshwari, Priyansh Trivedi, Jens Lehmann, Asja Fischer

  5. Survey of Visual Question Answering: Datasets and Techniques. arXiv 2017 paper

    Akshay Kumar Gupta

  6. Text-based Question Answering from Information Retrieval and Deep Neural Network Perspectives: A Survey. arXiv 2020 paper

    Zahra Abbasiyantaeb, Saeedeh Momtazi

  7. Tutorial on Answering Questions about Images with Deep Learning. arXiv 2016 paper

    Mateusz Malinowski, Mario Fritz

  8. Visual Question Answering using Deep Learning: A Survey and Performance Analysis. arXiv 2019 paper

    Yash Srivastava, Vaishnav Murali, Shiv Ram Dubey, Snehasis Mukherjee

Reading Comprehension

  1. A Survey on Machine Reading Comprehension Systems. arXiv 2020 paper

    Razieh Baradaran, Razieh Ghiasi, Hossein Amirkhani

  2. A Survey on Neural Machine Reading Comprehension. arXiv 2019 paper

    Boyu Qiu, Xu Chen, Jungang Xu, Yingfei Sun

  3. Machine Reading Comprehension: a Literature Review. arXiv 2019 paper

    Xin Zhang, An Yang, Sujian Li, Yizhong Wang

  4. Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond. arXiv 2020 paper

    Zhuosheng Zhang, Hai Zhao, Rui Wang

  5. Neural Machine Reading Comprehension: Methods and Trends. arXiv 2019 paper

    Shanshan Liu, Xin Zhang, Sheng Zhang, Hui Wang, Weiming Zhang

Recommender Systems

  1. A review on deep learning for recommender systems: challenges and remedies. SpringerLink 2019 paper

    Zeynep Batmaz, Ali Yurekli, Alper Bilge, Cihan Kaleli

  2. A Survey on Knowledge Graph-Based Recommender Systems. arXiv 2020 paper

    Qingyu Guo, Fuzhen Zhuang, Chuan Qin, Hengshu Zhu, Xing Xie, Hui Xiong, Qing He

  3. Adversarial Machine Learning in Recommender Systems: State of the art and Challenges. ACM 2020 paper

    Yashar Deldjoo, Tommaso Di Noia, Felice Antonio Merra

  4. Cross Domain Recommender Systems: A Systematic Literature Review. ACM 2017 paper

    Muhammad Murad Khan, Roliana Ibrahim, Imran Ghani

  5. Deep Learning based Recommender System: A Survey and New Perspectives. ACM 2019 paper

    Shuai Zhang, Lina Yao, Aixin Sun, Yi Tay

  6. Deep Learning on Knowledge Graph for Recommender System: A Survey. ACM 2020 paper

    Yang Gao, Yi-Fan Li, Yu Lin, Hang Gao, Latifur Khan

  7. Explainable Recommendation: A Survey and New Perspectives. arXiv 2020 paper

    Yongfeng Zhang, Xu Chen

  8. Sequence-Aware Recommender Systems. ACM 2018 paper

    Massimo Quadrana, Paolo Cremonesi, Dietmar Jannach

  9. Use of Deep Learning in Modern Recommendation System: A Summary of Recent Works. arXiv 2017 paper

    Ayush Singhal, Pradeep Sinha, Rakesh Pant

Resources and Evaluation

  1. A Short Survey on Sense-Annotated Corpora. LREC 2020 paper

    Tommaso Pasini, José Camacho-Collados

  2. A Survey of Current Datasets for Vision and Language Research. EMNLP 2015 paper

    Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao (Kenneth) Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, Margaret Mitchell

  3. A Survey of Word Embeddings Evaluation Methods. arXiv 2018 paper

    Amir Bakarov

  4. Critical Survey of the Freely Available Arabic Corpora. arXiv 2017 paper

    Wajdi Zaghouani

  5. Distributional Measures of Semantic Distance: A Survey. arXiv 2012 paper

    Saif Mohammad, Graeme Hirst

  6. Measuring Sentences Similarity: A Survey. Indian Journal of Science and Technology 2019 paper

    Mamdouh Farouk

  7. Recent Advances in Natural Language Inference: A Survey of Benchmarks, Resources, and Approaches. arXiv 2020 paper

    Shane Storks, Qiaozi Gao, Joyce Y. Chai

  8. Survey on Evaluation Methods for Dialogue Systems. arXiv 2019 paper

    Jan Deriu, Álvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, Mark Cieliebak

  9. Survey on Publicly Available Sinhala Natural Language Processing Tools and Research. arXiv 2019 paper

    Nisansa de Silva

Semantics

  1. Diachronic word embeddings and semantic shifts: a survey. COLING 2018 paper

    Andrey Kutuzov, Lilja Ovrelid, Terrence Szymanski, Erik Velldal

  2. Evolution of Semantic Similarity -- A Survey. arXiv 2020 paper

    Dhivya Chandrasekaran, Vijay Mago

  3. Semantic search on text and knowledge bases. Foundations and trends in information retrieval 2016 paper

    Hannah Bast , Bjorn Buchhold, Elmar Haussmann

  4. Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature. arXiv 2014 paper

    Yarin Gal

  5. Survey of Computational Approaches to Lexical Semantic Change. arXiv 2019 paper

    Nina Tahmasebi, Lars Borin, Adam Jatowt

  6. Word sense disambiguation: a survey. ACM 2015 paper

    Alok Ranjan Pal, Diganta Saha

Sentiment Analysis and Stylistic Analysis and Argument Mining

  1. A Comprehensive Survey on Aspect Based Sentiment Analysis. arXiv 2020 paper

    Kaustubh Yadav

  2. A Survey on Sentiment and Emotion Analysis for Computational Literary Studies. arXiv 2018 paper

    Evgeny Kim, Roman Klinger

  3. Beneath the Tip of the Iceberg: Current Challenges and New Directions in Sentiment Analysis Research. arXiv 2020 paper

    Soujanya Poria, Devamanyu Hazarika, Navonil Majumder, Rada Mihalcea

  4. Deep Learning for Aspect-Level Sentiment Classification: Survey, Vision, and Challenges. IEEE 2019 paper

    Jie Zhou, Jimmy Xiangji Huang, Qin Chen, Qinmin Vivian Hu, Tingting Wang, Liang He

  5. Deep Learning for Sentiment Analysis : A Survey. Wiley Interdisciplinary Reviews-Data Mining and Knowledge Discovery 2018 paper

    Lei Zhang, Shuai Wang, Bing Liu

  6. Sentiment analysis for Arabic language: A brief survey of approaches and techniques. arXiv 2018 paper

    Mo'ath Alrefai, Hossam Faris, Ibrahim Aljarah

  7. Sentiment Analysis of Czech Texts: An Algorithmic Survey. ICAART 2019 paper

    Erion Cano, Ondrej Bojar

  8. Sentiment Analysis of Twitter Data: A Survey of Techniques. arXiv 2016 paper

    Vishal A. Kharde, Sheetal Sonawane

  9. Sentiment Analysis on YouTube: A Brief Survey. arXiv 2015 paper

    Muhammad Zubair Asghar, Shakeel Ahmad, Afsana Marwat, Fazal Masud Kundi

  10. Sentiment/Subjectivity Analysis Survey for Languages other than English. Social Netw. Analys. Mining 2016 paper

    Mohammed Korayem, Khalifeh Aljadda, David Crandall

  11. Word Embeddings for Sentiment Analysis: A Comprehensive Empirical Survey. arXiv 2019 paper

    Erion Cano, Maurizio Morisio

Speech and Multimodality

  1. A Comprehensive Survey on Cross-modal Retrieval. arXiv 2016 paper

    Kaiye Wang

  2. A Survey and Taxonomy of Adversarial Neural Networks for Text-to-Image Synthesis. arXiv 2019 paper

    Jorge Agnese, Jonathan Herrera, Haicheng Tao, Xingquan Zhu

  3. A Survey of Code-switched Speech and Language Processing. arXiv 2019 paper

    Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Krishna Rallabandi, Alan W. Black

  4. A Survey of Recent DNN Architectures on the TIMIT Phone Recognition Task. TSD 2018 paper

    Josef Michálek, Jan Vanek

  5. A Survey of Voice Translation Methodologies - Acoustic Dialect Decoder. arXiv 2016 paper

    Hans Krupakar, Keerthika Rajvel, Bharathi B, Angel Deborah S, Vallidevi Krishnamurthy

  6. Automatic Description Generation from Images: A Survey of Models, Datasets, and Evaluation Measures. IJCAI 2017 paper

    Raffaella Bernardi, Ruket Cakici, Desmond Elliott, Aykut Erdem, Erkut Erdem, Nazli Ikizler-Cinbis, Frank Keller, Adrian Muscat, Barbara Plank

  7. Informed Machine Learning -- A Taxonomy and Survey of Integrating Knowledge into Learning Systems. arXiv 2019 paper

    Laura von Rueden, Sebastian Mayer, Katharina Beckh, Bogdan Georgiev, Sven Giesselbach, Raoul Heese, Birgit Kirsch, Julius Pfrommer, Annika Pick, Rajkumar Ramamurthy, Michal Walczak, Jochen Garcke, Christian Bauckhage, Jannis Schuecker

  8. Multimodal Machine Learning: A Survey and Taxonomy. IEEE 2019 paper

    Tadas Baltrusaitis, Chaitanya Ahuja, Louis-Philippe Morency

  9. Speech and Language Processing. Stanford University 2019 paper

    Dan Jurafsky and James H. Martin

Summarization

  1. A Survey on Neural Network-Based Summarization Methods. arXiv 2018 paper

    Yue Dong

  2. Abstractive Summarization: A Survey of the State of the Art. AAAI 2019 paper

    Hui Lin, Vincent Ng

  3. Automated text summarisation and evidence-based medicine: A survey of two domains. arXiv 2017 paper

    Abeed Sarker, Diego Mollá Aliod, Cécile Paris

  4. Automatic Keyword Extraction for Text Summarization: A Survey. arXiv 2017 paper

    Santosh Kumar Bharti, Korra Sathya Babu

  5. From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information. arXiv 2020 paper

    Shen Gao, Xiuying Chen, Zhaochun Ren, Dongyan Zhao, Rui Yan

  6. Neural Abstractive Text Summarization with Sequence-to-Sequence Models: A Survey. arXiv 2018 paper

    Tian Shi, Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy

  7. Recent automatic text summarization techniques: a survey. Artificial Intelligence Review 2016 paper

    Mahak Gambhir, Vishal Gupta

  8. Text Summarization Techniques: A Brief Survey. IJCAI 2017 paper

    Mehdi Allahyari, Seyedamin Pouriyeh, Mehdi Assefi, Saeid Safaei, Elizabeth D. Trippe, Juan B. Gutierrez, Krys Kochut

Tagging Chunking Syntax and Parsing

  1. A Neural Entity Coreference Resolution Review. arXiv 2019 paper

    Nikolaos Stylianou, Ioannis Vlahavas

  2. A survey of cross-lingual features for zero-shot cross-lingual semantic parsing. arXiv 2019 paper

    Jingfeng Yang, Federico Fancellu, Bonnie L. Webber

  3. A Survey on Semantic Parsing. AKBC 2019 paper

    Aishwarya Kamath, Rajarshi Das

  4. The Gap of Semantic Parsing: A Survey on Automatic Math Word Problem Solvers. arXiv 2018 paper

    Dongxiang Zhang, Lei Wang, Nuo Xu, Bing Tian Dai, Heng Tao Shen

Text Classification

  1. A Survey of Naive Bayes Machine Learning approach in Text Document Classification. IJCSIS 2010 paper

    K. A. Vidhya, G. Aghila

  2. A survey on phrase structure learning methods for text classification. IJNLC 2014 paper

    Reshma Prasad, Mary Priya Sebastian

  3. Deep Learning Based Text Classification: A Comprehensive Review. arXiv 2020 paper

    Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, Jianfeng Gao

  4. Text Classification Algorithms: A Survey. arXiv 2019 paper

    Kamran Kowsari, Kiana Jafari Meimandi, Mojtaba Heidarysafa, Sanjana Mendu, Laura E. Barnes, Donald E. Brown

The ML Paper List

Architectures

  1. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. arXiv 2020 paper

    Zewen Li, Wenjie Yang, Shouheng Peng, Fan Liu

  2. A Survey of End-to-End Driving: Architectures and Training Methods. arXiv 2020 paper

    Ardi Tampuu, Maksym Semikin, Naveed Muhammad, Dmytro Fishman, Tambet Matiisen

  3. A Survey on Latent Tree Models and Applications. Journal of Artificial Intelligence Research 2013 paper

    Raphael Mourad, Christine Sinoquet, Nevin L. Zhang, Tengfei Liu, Philippe Leray

  4. An Attentive Survey of Attention Models. arXiv 2019 paper

    Sneha Chaudhari, Gungor Polatkan, Rohan Ramanath, Varun Mithal

  5. Binary Neural Networks: A Survey. Pattern Recognition 2020 paper

    Haotong Qin, Ruihao Gong, Xianglong Liu, Xiao Bai, Jingkuan Song, Nicu Sebe

  6. Deep Echo State Network (DeepESN): A Brief Survey. arXiv 2017 paper

    Claudio Gallicchio, Alessio Micheli

  7. Recent Advances in Convolutional Neural Networks. Pattern Recognition 2018 paper

    Jiuxiang Gu, Zhenhua Wang, Jason Kuen, Lianyang Ma, Amir Shahroudy, Bing Shuai, Ting Liu, Xingxing Wang, Gang Wang, Jianfei Cai, Tsuhan Chen

  8. Sum-product networks: A survey. arXiv 2020 paper

    Iago París, Raquel Sánchez-Cauce, Francisco Javier Díez

  9. Survey on the attention based RNN model and its applications in computer vision. arXiv 2016 paper

    Feng Wang, David M. J. Tax

  10. Understanding LSTM -- a tutorial into Long Short-Term Memory Recurrent Neural Networks. arXiv 2019 paper

    Ralf C. Staudemeyer, Eric Rothstein Morris

AutoML

  1. A Comprehensive Survey of Neural Architecture Search: Challenges and Solutions. arXiv 2020 paper

    Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Xiaojiang Chen, Xin Wang

  2. A Survey on Neural Architecture Search. arXiv 2019 paper

    Martin Wistuba, Ambrish Rawat, Tejaswini Pedapati

  3. AutoML: A Survey of the State-of-the-Art. arXiv 2019 paper

    Xin He, Kaiyong Zhao, Xiaowen Chu

  4. Benchmark and Survey of Automated Machine Learning Frameworks. arXiv 2020 paper

    Marc-André Zoller, Marco F. Huber

  5. Neural Architecture Search: A Survey. Journal of Machine Learning Research 2019 paper

    Thomas Elsken, Jan Hendrik Metzen, Frank Hutter

Bayesian Methods

  1. A survey of non-exchangeable priors for Bayesian nonparametric models. IEEE 2015 paper

    Nicholas J. Foti, Sinead Williamson

  2. Bayesian Nonparametric Space Partitions: A Survey. arXiv 2020 paper

    Xuhui Fan, Bin Li, Ling Luo, Scott A. Sisson

  3. Towards Bayesian Deep Learning: A Survey. arXiv 2016 paper

    Hao Wang, Dityan Yeung

Classification Clustering and Regression

  1. A Survey of Classification Techniques in the Area of Big Data. arXiv 2015 paper

    Praful Koturwar, Sheetal Girase, Debajyoti Mukhopadhyay

  2. A Survey of Constrained Gaussian Process Regression: Approaches and Implementation Challenges. arXiv 2020 paper

    Laura P. Swiler, Mamikon Gulian, Ari Frankel, Cosmin Safta, John D. Jakeman

  3. A Survey on Multi-View Clustering. arXiv 2017 paper

    Guoqing Chao, Shiliang Sun, Jinbo Bi

  4. Deep learning for time series classification: a review. arXiv 2019 paper

    Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, Pierre-Alain Muller

  5. How Complex is your classification problem? A survey on measuring classification complexity. ACM 2019 paper

    Ana Carolina Lorena, Luis P F Garcia, Jens Lehmann, Marcilio C P Souto, Tin K Ho

Curriculum Learning

  1. Automatic Curriculum Learning For Deep RL: A Short Survey. arXiv 2020 paper

    Rémy Portelas, Cédric Colas, Lilian Weng, Katja Hofmann, Pierre-Yves Oudeyer

  2. Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey. arXiv 2020 paper

    Sanmit Narvekar, Bei Peng, Matteo Leonetti, Jivko Sinapov, Matthew E. Taylor, Peter Stone

Data Augmentation

  1. A survey on Image Data Augmentation for Deep Learning. Journal of Big Data 2019 paper

    Connor Shorten

  2. Time Series Data Augmentation for Deep Learning: A Survey. arXiv 2020 paper

    Qingsong Wen, Liang Sun, Xiaomin Song, Jingkun Gao, Xue Wang, Huan Xu

Deep Learning - General Methods

  1. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv 2017 paper

    Catherine D. Schuman, Thomas E. Potok, Robert M. Patton, J. Douglas Birdwell, Mark E. Dean, Garrett S. Rose, James S. Plank

  2. A Survey on Deep Hashing Methods. arXiv 2020 paper

    Xiao Luo, Chong Chen, Huasong Zhong, Hao Zhang, Minghua Deng, Jianqiang Huang, Xiansheng Hua

  3. A survey on modern trainable activation functions. arXiv 2020 paper

    Andrea Apicella, Francesco Donnarumma, Francesco Isgrò, Roberto Prevete

  4. Convergence of Edge Computing and Deep Learning: A Comprehensive Survey. IEEE 2020 paper

    Xiaofei Wang, Yiwen Han, Victor C.M. Leung, Dusit Niyato, Xueqiang Yan, Xu Chen

  5. Deep learning. nature 2015 paper

    Yann LeCun

  6. Deep Learning on Graphs: A Survey. IEEE 2018 paper

    Ziwei Zhang, Peng Cui, Wenwu Zhu

  7. Deep Learning Theory Review: An Optimal Control and Dynamical Systems Perspective. arXiv 2019 paper

    Guan-Horng Liu, Evangelos A. Theodorou

  8. Geometric Deep Learning: Going beyond Euclidean data. IEEE 2016 paper

    Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, Pierre Vandergheynst

  9. Improving Deep Learning Models via Constraint-Based Domain Knowledge: a Brief Survey. arXiv 2020 paper

    Andrea Borghesi, Federico Baldo, Michela Milano

  10. Review: Ordinary Differential Equations For Deep Learning. arXiv 2019 paper

    Xinshi Chen

  11. Survey of Dropout Methods for Deep Neural Networks. arXiv 2019 paper

    Alex Labach, Hojjat Salehinejad, Shahrokh Valaee

  12. Survey of Expressivity in Deep Neural Networks. arXiv 2016 paper

    Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, Jascha Sohldickstein

  13. Survey of reasoning using Neural networks. arXiv 2017 paper

    Amit Sahu

  14. The Deep Learning Compiler: A Comprehensive Survey. arXiv 2020 paper

    Mingzhen Li, Yi Liu, Xiaoyan Liu, Qingxiao Sun, Xin You, Hailong Yang, Zhongzhi Luan, Depei Qian

  15. The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches. arXiv 2018 paper

    Zahangir Alom, Tarek M Taha, Christopher Yakopcic, Stefan Westberg, Paheding Sidike, Mst Shamima Nasrin, Brian C Van Esesn, Abdul A S Awwal, Vijayan K Asari

  16. Time Series Forecasting With Deep Learning: A Survey. arXiv 2020 paper

    Bryan Lim, Stefan Zohren

Deep Reinforcement Learning

  1. A Brief Survey of Deep Reinforcement Learning. arXiv 2017 paper

    Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, Anil A Bharath

  2. A Short Survey On Memory Based Reinforcement Learning. arXiv 2019 paper

    Dhruv Ramani

  3. A Short Survey on Probabilistic Reinforcement Learning. arXiv 2019 paper

    Reazul Hasan Russel

  4. A Survey of Inverse Reinforcement Learning: Challenges, Methods and Progress. arXiv 2018 paper

    Saurabh Arora, Prashant Doshi

  5. A Survey of Reinforcement Learning Algorithms for Dynamically Varying Environments. arXiv 2020 paper

    Sindhu Padakandla

  6. A Survey of Reinforcement Learning Informed by Natural Language. IJCAI 2019 paper

    Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob N. Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, Tim Rocktaschel

  7. A Survey of Reinforcement Learning Techniques: Strategies, Recent Development, and Future Directions. arXiv 2020 paper

    Amit Kumar Mondal

  8. A survey on intrinsic motivation in reinforcement learning. arXiv 2019 paper

    Arthur Aubret, Laetitia Matignon, Salima Hassas

  9. A Survey on Reproducibility by Evaluating Deep Reinforcement Learning Algorithms on Real-World Robots. arXiv 2019 paper

    Nicolai A. Lynnerup, Laura Nolling, Rasmus Hasle, John Hallam

  10. Deep Reinforcement Learning: An Overview. arXiv 2017 paper

    Yuxi Li

  11. Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and Some New Implementations. IEEE 2019 paper

    Dimitri P. Bertsekas

Federated Learning

  1. A Survey towards Federated Semi-supervised Learning. FRUCT 2020 paper

    Yilun Jin, Xiguang Wei, Yang Liu, Qiang Yang

  2. Advances and Open Problems in Federated Learning. arXiv 2019 paper

    Peter Kairouz, H Brendan Mcmahan, Brendan Avent, Aurelien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Keith Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, Rafael G L Doliveira, Salim El Rouayheb, David Evans, Josh Gardner, Zachary A Garrett, Adria Gascon, Badih Ghazi, Phillip B Gibbons, Marco Gruteser, Zaid Harchaoui, Chaoyang He, Lie He, Zhouyuan Huo, Ben Hutchinson, Justin Hsu, Martin Jaggi, Tara Javidi, Gauri Joshi, Mikhail Khodak, Jakub Konecny, Aleksandra Korolova, Farinaz Koushanfar, Sanmi Koyejo, Tancrede Lepoint, Yang Liu, Prateek Mittal, Mehryar Mohri, Richard Nock, Ayfer Ozgur, Rasmus Pagh, Mariana Raykova, Hang Qi, Daniel Ramage, Ramesh Raskar, Dawn Song, Weikang Song, Sebastian U Stich, Ziteng Sun, Ananda Theertha Suresh, Florian Tramer, Praneeth Vepakomma, Jianyu Wang, Li Xiong, Zheng Xu, Qiang Yang, Felix X Yu, Han Yu, Sen Zhao

  3. Threats to Federated Learning: A Survey. CoRL 2019 2020 paper

    Lingjuan Lyu, Han Yu, Qiang Yang

  4. Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective. arXiv 2020 paper

    Yilun Jin, Xiguang Wei, Yang Liu, Qiang Yang

Few-Shot and Zero-Shot Learning

  1. A Survey of Zero-Shot Learning: Settings, Methods, and Applications. ACM 2019 paper

    Wei Wang, Vincent W. Zheng, Han Yu, Chunyan Miao

  2. Few-shot Learning: A Survey. arXiv 2019 paper

    Yaqing Wang, Quanming Yao

  3. Generalizing from a Few Examples: A Survey on Few-Shot Learning. ACM 2019 paper

    Yaqing Wang, Quanming Yao, James Kwok, Lionel M. Ni

General Machine Learning

  1. A survey of dimensionality reduction techniques. arXiv 2014 paper

    C.O.S. Sorzano, J. Vargas, A. Pascual Montano

  2. A Survey of Predictive Modelling under Imbalanced Distributions. arXiv 2015 paper

    Paula Branco, Luis Torgo, Rita Ribeiro

  3. A Survey on Activation Functions and their relation with Xavier and He Normal Initialization. arXiv 2020 paper

    Leonid Datta

  4. A Survey on Data Collection for Machine Learning: a Big Data -- AI Integration Perspective. arXiv 2018 paper

    Yuji Roh, Geon Heo, Steven Euijong Whang

  5. A survey on feature weighting based K-Means algorithms. Journal of Classification 2016 paper

    Renato Cordeiro de Amorim

  6. A Survey on Graph Kernels. Applied Network Science 2020 paper

    Nils M. Kriege, Fredrik D. Johansson, Christopher Morris

  7. A Survey on Multi-output Learning. IEEE 2019 paper

    Donna Xu, Yaxin Shi, Ivor W. Tsang, Yew-Soon Ong, Chen Gong, Xiaobo Shen

  8. A Survey on Resilient Machine Learning. arXiv 2017 paper

    Atul Kumar, Sameep Mehta

  9. A Survey on Surrogate Approaches to Non-negative Matrix Factorization. Vietnam journal of mathematics 2018 paper

    Pascal Fernsel, Peter Maass

  10. A Tutorial on Network Embeddings. arXiv 2018 paper

    Haochen Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena

  11. Adversarial Examples in Modern Machine Learning: A Review. arXiv 2019 paper

    Rey Reza Wiyatno, Anqi Xu, Ousmane Dia, Archy de Berker

  12. Algorithms Inspired by Nature: A Survey. arXiv 2019 paper

    Pranshu Gupta

  13. Deep Tree Transductions - A Short Survey. INNSBDDL 2019 paper

    Davide Bacciu, Antonio Bruno

  14. Graph Representation Learning: A Survey. APSIPA Transactions on Signal and Information Processing 2019 paper

    Fenxiao Chen, Yuncheng Wang, Bin Wang, C.-C. Jay Kuo

  15. Heuristic design of fuzzy inference systems: A review of three decades of research. Engineering Applications of Artificial Intelligence 2019 paper

    Varun Ojha, Ajith Abraham, Vaclav Snasel

  16. Hierarchical Mixtures-of-Experts for Exponential Family Regression Models with Generalized Linear Mean Functions: A Survey of Approximation and Consistency Results. arXiv 2013 paper

    Wenxin Jiang, Martin A. Tanner

  17. Hyperbox based machine learning algorithms: A comprehensive survey. arXiv 2019 paper

    Thanh Tung Khuat, Dymitr Ruta, Bogdan Gabrys

  18. Imbalance Problems in Object Detection: A Review. IEEE 2019 paper

    Kemal Oksuz, Baris Can Cam, Sinan Kalkan, Emre Akbas

  19. Learning Representations of Graph Data -- A Survey. arXiv 2019 paper

    Mital Kinderkhedia

  20. Machine Learning at the Network Edge: A Survey. arXiv 2020 paper

    M.G. Sarwar Murshed, Christopher Murphy, Daqing Hou, Nazar Khan, Ganesh Ananthanarayanan, Faraz Hussain

  21. Machine Learning for Spatiotemporal Sequence Forecasting: A Survey. arXiv 2018 paper

    Xingjian Shi, Dit-Yan Yeung

  22. Machine Learning in Network Centrality Measures: Tutorial and Outlook. Association for Computing Machinery 2018 paper

    Felipe Grando, Lisandro Zambenedetti Granville, Luís C. Lamb

  23. Machine Learning Testing: Survey, Landscapes and Horizons. arXiv 2019 paper

    Jie M. Zhang, Mark Harman, Lei Ma, Yang Liu

  24. Machine Learning with World Knowledge: The Position and Survey. arXiv 2017 paper

    Yangqiu Song, Dan Roth

  25. Mean-Field Learning: a Survey. arXiv 2012 paper

    Hamidou Tembine, Raúl Tempone, Pedro Vilanova

  26. Multi-Objective Multi-Agent Decision Making: A Utility-based Analysis and Survey. Autonomous Agents and Multi-Agent Systems 2020 paper

    Roxana Radulescu, Patrick Mannion, Diederik M. Roijers, Ann Nowé

  27. Narrative Science Systems: A Review. International Journal of Research in Computer Science 2015 paper

    Paramjot Kaur Sarao, Puneet Mittal, Rupinder Kaur

  28. Network Representation Learning: A Survey. IEEE 2020 paper

    Daokun Zhang, Jie Yin, Xingquan Zhu, Chengqi Zhang

  29. Relational inductive biases, deep learning, and graph networks. arXiv 2018 paper

    Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinícius Flores Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gülçehre, H. Francis Song, Andrew J. Ballard, Justin Gilmer, George E. Dahl, Ashish Vaswani, Kelsey R. Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matthew Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu

  30. Relational Representation Learning for Dynamic (Knowledge) Graphs: A Survey. JMLR 2019 paper

    Seyed Mehran Kazemi, Rishab Goel, Kshitij Jain, Ivan Kobyzev, Akshay Sethi, Peter Forsyth, Pascal Poupart

  31. Statistical Queries and Statistical Algorithms: Foundations and Applications. arXiv 2020 paper

    Lev Reyzin

  32. Structure Learning of Probabilistic Graphical Models: A Comprehensive Survey. arXiv 2011 paper

    Yang Zhou

  33. Survey on Feature Selection. arXiv 2015 paper

    Tarek Amr Abdallah, Beatriz de La Iglesia

  34. Survey on Five Tribes of Machine Learning and the Main Algorithms. Software Guide 2019 paper

    LI Xu-ran, DING Xiao-hong

  35. Survey: Machine Learning in Production Rendering. arXiv 2020 paper

    Shilin Zhu

  36. The Benefits of Population Diversity in Evolutionary Algorithms: A Survey of Rigorous Runtime Analyses. arXiv 2018 paper

    Dirk Sudholt

  37. Tutorial on Variational Autoencoders. arXiv 2016 paper

    Carl Doersch

  38. Unsupervised Cross-Lingual Representation Learning. ACL 2019 paper

    Sebastian Ruder, Anders Sogaard, Ivan Vulic

  39. Verification for Machine Learning, Autonomy, and Neural Networks Survey. arXiv 2018 paper

    Weiming Xiang, Patrick Musau, Ayana A. Wild, Diego Manzanas Lopez, Nathaniel Hamilton, Xiaodong Yang, Joel Rosenfeld, Taylor T. Johnson

Generative Adversarial Networks

  1. A Review on Generative Adversarial Networks: Algorithms, Theory, and Applications. arXiv 2020 paper

    Jie Gui, Zhenan Sun, Yonggang Wen, Dacheng Tao, Jieping Ye

  2. A Survey on Generative Adversarial Networks: Variants, Applications, and Training. arXiv 2020 paper

    Abdul Jabbar, Xi Li, Bourahla Omar

  3. Generative Adversarial Networks: A Survey and Taxonomy. arXiv 2019 paper

    Zhengwei Wang, Qi She, Tomas E Ward

  4. Generative Adversarial Networks: An Overview. IEEE 2018 paper

    Antonia Creswell, Tom White, Vincent Dumoulin, Kai Arulkumaran, Biswa Sengupta, Anil A Bharath

  5. How Generative Adversarial Nets and its variants Work: An Overview of GAN. arXiv 2017 paper

    Yongjun Hong, Uiwon Hwang, Jaeyoon Yoo, Sungroh Yoon

  6. Stabilizing Generative Adversarial Network Training: A Survey. arXiv 2020 paper

    Maciej Wiatrak, Stefano V. Albrecht, Andrew Nystrom

  7. Stabilizing Generative Adversarial Networks: A Survey. arXiv 2019 paper

    Maciej Wiatrak, Stefano V. Albrecht, Andrew Nystrom

Graph Neural Networks

  1. A Comprehensive Survey on Graph Neural Networks. IEEE 2019 paper

    Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S. Yu

  2. A Survey on The Expressive Power of Graph Neural Networks. arXiv 2020 paper

    Ryoma Sato

  3. Adversarial Attack and Defense on Graph Data: A Survey. arXiv 2018 paper

    Lichao Sun, Ji Wang, Philip S. Yu, Bo Li

  4. Bridging the Gap between Spatial and Spectral Domains: A Survey on Graph Neural Networks. arXiv 2020 paper

    Zhiqian Chen, Fanglan Chen, Lei Zhang, Taoran Ji, Kaiqun Fu, Liang Zhao, Feng Chen, Chang-Tien Lu

  5. Foundations and modelling of dynamic networks using Dynamic Graph Neural Networks: A survey. arXiv 2020 paper

    Joakim Skarding, Bogdan Gabrys, Katarzyna Musial

  6. Graph embedding techniques, applications, and performance: A survey. Knowledge Based Systems 2017 paper

    Palash Goyal, Emilio Ferrara

  7. Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective. arXiv 2020 paper

    Luis C. Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, Moshe Vardi

  8. Graph Neural Networks: A Review of Methods and Applications. arXiv 2018 paper

    Maosong Sun, Zhengyan Zhang, Ganqu Cui, Cheng Yang, Jie Zhou, Zhiyuan Liu

  9. Introduction to Graph Neural Networks. IEEE 2020 paper

    Zhiyuan Liu, Jie Zhou

  10. Tackling Graphical NLP problems with Graph Recurrent Networks. arXiv 2019 paper

    Linfeng Song

Interpretability and Analysis

  1. A Survey Of Methods For Explaining Black Box Models. ACM 2018 paper

    Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, Dino Pedreschi

  2. A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability. arXiv 2018 paper

    Xiaowei Huang

  3. Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation. Sigkdd Explorations 2020 paper

    Raha Moraffah, Mansooreh Karami, Ruocheng Guo, Adrienne Raglin, Huan Liu

  4. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion 2020 paper

    Alejandro Barredo Arrieta, Natalia Diazrodriguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gillopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera

  5. Explainable Reinforcement Learning: A Survey. CD-MAKE 2020 2020 paper

    Erika Puiutta, Eric M. S. P. Veith

  6. Foundations of Explainable Knowledge-Enabled Systems. Knowledge Graphs for eXplainable Artificial Intelligence: Foundations, Applications and Challenges/arXiv 2020 paper

    Shruthi Chari

  7. How Generative Adversarial Networks and Their Variants Work: An Overview. ACM 2017 paper

    Yongjun Hong, Uiwon Hwang, Jaeyoon Yoo, Sungroh Yoon

  8. Language (Technology) is Power: A Critical Survey of "Bias" in NLP. Association for Computational Linguistics 2020 paper

    Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach

  9. Survey & Experiment: Towards the Learning Accuracy. arXiv 2010 paper

    Zeyuan Allen Zhu

  10. Understanding Neural Networks via Feature Visualization: A survey. arXiv 2019 paper

    Anh Nguyen, Jason Yosinski, Jeff Clune

  11. Visual interpretability for deep learning: a survey. Journal of Zhejiang University Science C 2018 paper

    Quanshi Zhang, Songchun Zhu

  12. Visualisation of Pareto Front Approximation: A Short Survey and Empirical Comparisons. CEC 2019 paper

    Huiru Gao, Haifeng Nie, Ke Li

Meta Learning

  1. A Comprehensive Overview and Survey of Recent Advances in Meta-Learning. arXiv 2020 paper

    Huimin Peng

  2. Meta-Learning in Neural Networks: A Survey. arXiv 2020 paper

    Timothy M. Hospedales, Antreas Antoniou, Paul Micaelli, Amos J. Storkey

  3. Meta-Learning: A Survey. arXiv 2018 paper

    Joaquin Vanschoren

Metric Learning

  1. A Survey on Metric Learning for Feature Vectors and Structured Data. arXiv 2013 paper

    Aurélien Bellet, Amaury Habrard, Marc Sebban

  2. A Tutorial on Distance Metric Learning: Mathematical Foundations, Algorithms and Experiments. arXiv 2018 paper

    Juan Luis Suárez, Salvador García, Francisco Herrera

ML Applications

  1. A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications. Neural Networks 2019 paper

    Leonardo Enzo Brito da Silva, Islam Elnabarawy, Donald C. Wunsch II

  2. A Survey of Machine Learning Methods and Challenges for Windows Malware Classification. arXiv 2020 paper

    Edward Raff, Charles Nicholas

  3. A survey on deep hashing for image retrieval. arXiv 2020 paper

    Xiaopeng Zhang

  4. A Survey on Deep Learning based Brain-Computer Interface: Recent Advances and New Frontiers. arXiv 2019 paper

    Xiang Zhang, Lina Yao, Xianzhi Wang, Jessica J M Monaghan, David Mcalpine, Yu Zhang

  5. A Survey on Deep Learning in Medical Image Analysis. Medical Image Analysis 2017 paper

    Geert J S Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud A A Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A W M Van Der Laak, Bram Van Ginneken, Clara I Sanchez

  6. Artificial Neural Networks-Based Machine Learning for Wireless Networks: A Tutorial. IEEE 2019 paper

    Mingzhe Chen, Ursula Challita, Walid Saad, Changchuan Yin, Mérouane Debbah

  7. How Developers Iterate on Machine Learning Workflows -- A Survey of the Applied Machine Learning Literature. arXiv 2018 paper

    Doris Xin, Litian Ma, Shuchen Song, Aditya G. Parameswaran

  8. Machine Learning Aided Static Malware Analysis: A Survey and Tutorial. arXiv 2018 paper

    Andrii Shalaginov, Sergii Banin, Ali Dehghantanha, Katrin Franke

  9. Machine Learning for Survival Analysis: A Survey. arXiv 2017 paper

    Ping Wang, Yan Li, Chandan K. Reddy

  10. The Creation and Detection of Deepfakes: A Survey. arXiv 2020 paper

    Yisroel Mirsky, Wenke Lee

Model Compression and Acceleration

  1. A Survey of Model Compression and Acceleration for Deep Neural Networks. arXiv 2017 paper

    Yu Cheng, Duo Wang, Pan Zhou, Tao Zhang

  2. A Survey on Methods and Theories of Quantized Neural Networks. arXiv 2018 paper

    Yunhui Guo

  3. An Overview of Neural Network Compression. arXiv 2020 paper

    J. O'Neill

  4. Knowledge Distillation: A Survey. arXiv 2020 paper

    Jianping Gou, Baosheng Yu, Stephen John Maybank, Dacheng Tao

  5. Pruning Algorithms to Accelerate Convolutional Neural Networks for Edge Applications: A Survey. arXiv 2020 paper

    Jiayi Liu, Samarth Tripathi, Unmesh Kurup, Mohak Shah

Multi-Task and Multi-View Learning

  1. A Brief Review on Multi-Task Learning. Multimedia Tools and Applications 2018 paper

    Kimhan Thung, Chong Yaw Wee

  2. A Survey on Multi-Task Learning. arXiv 2017 paper

    Yu Zhang, Qiang Yang

  3. A Survey on Multi-view Learning. arXiv 2013 paper

    Chang Xu, Dacheng Tao, Chao Xu

  4. An overview of multi-task learning. National Science Review 2018 paper

    Yu Zhang, Qiang Yang

  5. An Overview of Multi-Task Learning in Deep Neural Networks. arXiv 2017 paper

    Sebastian Ruder

  6. Revisiting Multi-Task Learning in the Deep Learning Era. arXiv 2020 paper

    Simon Vandenhende, Stamatios Georgoulis, Marc Proesmans, Dengxin Dai, Luc Van Gool

Online Learning

  1. A Survey of Algorithms and Analysis for Adaptive Online Learning. Journal of Machine Learning Research 2017 paper

    H. Brendan McMahan

  2. Online Learning: A Comprehensive Survey. arXiv 2018 paper

    Steven C.H. Hoi, Doyen Sahoo, Jing Lu, Peilin Zhao

  3. Preference-based Online Learning with Dueling Bandits: A Survey. arXiv 2018 paper

    Robert Busa-Fekete, Eyke Hüllermeier, Adil El Mesaoudi-Paul

Optimization

  1. A Survey of Optimization Methods from a Machine Learning Perspective. arXiv 2019 paper

    Shiliang Sun, Zehui Cao, Han Zhu, Jing Zhao

  2. A Systematic and Meta-analysis Survey of Whale Optimization Algorithm. Comput. Intell. Neurosci. 2019 paper

    Hardi M. Mohammed, Shahla U. Umar, Tarik A. Rashid

  3. An overview of gradient descent optimization algorithms. arXiv 2017 paper

    Sebastian Ruder

  4. Convex Optimization Overview. IEEE 2008 paper

    Nikos Komodakis

  5. Gradient Boosting Machine: A Survey. arXiv 2019 paper

    Zhiyuan He, Danchen Lin, Thomas Lau, Mike Wu

  6. Optimization for deep learning: theory and algorithms. arXiv 2019 paper

    Ruoyu Sun

  7. Optimization Models for Machine Learning: A Survey. arXiv 2019 paper

    Claudio Gambella, Bissan Ghaddar, Joe Naoum-Sawaya

  8. Particle Swarm Optimization: A survey of historical and recent developments with hybridization perspectives. Machine Learning and Knowledge Extraction 2019 paper

    Saptarshi Sengupta, Sanchita Basak, Richard Alan Peters II

Semi-Supervised and Unsupervised Learning

  1. A brief introduction to weakly supervised learning. arXiv 2018 paper

    Zhihua Zhou

  2. A Survey on Semi-Supervised Learning Techniques. arXiv 2014 paper

    V. Jothi Prakash, Dr. L.M. Nithya

  3. Improvability Through Semi-Supervised Learning: A Survey of Theoretical Results. arXiv 2019 paper

    Alexander Mey, Marco Loog

  4. Learning from positive and unlabeled data: a survey. Machine Learning 2020 paper

    Jessa Bekker, Jesse Davis

Transfer Learning

  1. A Comprehensive Survey on Transfer Learning. arXiv 2019 paper

    Fuzhen Zhuang, Zhiyuan Qi, Keyu Duan, Dongbo Xi, Yongchun Zhu, Hengshu Zhu, Hui Xiong, Qing He

  2. A Survey of Unsupervised Deep Domain Adaptation. arXiv 2020 paper

    Garrett Wilson, Diane J. Cook

  3. A Survey on Deep Transfer Learning. ICANN 2018 paper

    Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, Chunfang Liu

  4. A survey on domain adaptation theory: learning bounds and theoretical guarantees. arXiv 2020 paper

    Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, Younès Bennani

  5. Evolution of transfer learning in natural language processing. arXiv 2019 paper

    Aditya Malte, Pratik Ratadiya

  6. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv 2019 paper

    Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu

  7. Neural Unsupervised Domain Adaptation in NLP---A Survey. arXiv 2020 paper

    Alan Ramponi, Barbara Plank

  8. Transfer Adaptation Learning: A Decade Survey. arXiv 2019 paper

    Lei Zhang

Trustworthy Machine Learning

  1. A Survey on Bias and Fairness in Machine Learning. arXiv 2019 paper

    Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan

  2. Differential Privacy and Machine Learning: a Survey and Review. arXiv 2014 paper

    Zhanglong Ji, Zachary C. Lipton, Charles Elkan

  3. Tutorial: Safe and Reliable Machine Learning. arXiv 2019 paper

    Suchi Saria, Adarsh Subbaswamy

Team Members

Ziyang Wang, Nuo Xu, Bei Li, Yinqiao Li, Quan Du, Tong Xiao, and Jingbo Zhu

Please feel free to contact us if you have any questions (wangziyang [at] stumail.neu.edu.cn or libei_neu [at] outlook.com).

We would like to thank the people who have contributed to this project. They are

Xin Zeng, Laohu Wang, Chenglong Wang, Xiaoqian Liu, Xuanjun Zhou, Jingnan Zhang, Yongyu Mu, Zefan Zhou, Yanhong Jiang, Xinyang Zhu, Xingyu Liu, Dong Bi, Ping Xu, Zijian Li, Fengning Tian, Hui Liu, Kai Feng, Yuhao Zhang, Chi Hu, Di Yang, Lei Zheng, Hexuan Chen, Zeyang Wang, Tengbo Liu, Xia Meng, Weiqiao Shan, Shuhan Zhou, Tao Zhou, Runzhe Cao, Yingfeng Luo, Binghao Wei, Wandi Xu, Yan Zhang, Yichao Wang, Mengyu Ma, Zihao Liu


There is a wealth of material on machine learning today: Andrew Ng's machine learning course on Coursera, Bishop's Pattern Recognition and Machine Learning, and Zhou Zhihua's Machine Learning are all excellent introductory textbooks; Deep Learning by Goodfellow et al. is the first choice for studying deep learning techniques; the open courses of MIT, Stanford, and other leading universities are also very valuable; and tutorials and keynotes from the major conferences can all be found online. However, in training students I have found that, while these materials are highly professional, they are not easy for beginners. One reason may be the language barrier; a more important one is that machine learning covers a broad area with many research directions and a constant stream of new methods, so beginners are often intimidated by the complex terminology and the endless list of algorithms and give up halfway.

The main body of this book is based on the summary material produced by that seminar. Given the author's research background, it is hard to call this a professional monograph on machine learning; it is rather a set of study notes, a summary of machine learning knowledge from the perspective of a user of machine learning techniques, together with some of our experience and findings from research in this area. It is less a textbook than a popular-science reader that opens the magical door of machine learning for beginners in a relaxed, accessible style. Once that door is open, we discover what an exciting field this is: new knowledge, new ideas, and new methods appear every day, along with inspiring results. We hope this book will draw more students, engineers, and researchers in related fields to machine learning, and help them find their own seashell in this dazzling sea.

Strongly recommended for anyone starting out in machine learning. It includes: the PDF of the book, lecture videos, lecture slides, various further readings, slides from MIT and other world-class universities, and students' study notes.


For decades, researchers have been developing algorithms to manipulate and analyze images. As a result, a common set of image processing tools has emerged across many high-level programming languages. While libraries for image analysis are converging toward a common toolkit, the language of image analysis has stagnated: a textual description of an analysis protocol often takes more space than the computer code needed to execute it, and textual explanations are sometimes ambiguous or incomplete. This book provides a precise mathematical language for the field of image processing. The operators it defines correspond directly to standard library routines, greatly easing the translation between mathematical descriptions and computer scripts. Examples are given in Python 3.

  • The book provides a unified language for image processing
  • It pairs the theory with accompanying Python scripts that precisely describe the steps of image processing applications
  • Operators link the scripts to the theory
  • Every chapter contains theory, the equivalent operators, examples, Python code, and exercises

https://www.routledge.com/Image-Operators-Image-Processing-in-Python/Kinser/p/book/9781498796187
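To illustrate the operator-to-script correspondence the book advocates, here is a minimal sketch (my own, not taken from the book) of two elementary operators, an intensity flip and a threshold, written directly as NumPy expressions; the synthetic image stands in for data loaded from disk.

```python
import numpy as np

# Synthetic 8-bit grayscale "image" standing in for data loaded from disk.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

# Intensity flip operator: b = L - a, with L the maximum intensity level.
flipped = 255 - a

# Threshold operator: binary mask of pixels brighter than t.
t = 128
mask = a > t

print(flipped.dtype, mask.mean())  # dtype preserved; fraction of pixels above t
```

Each line of code corresponds one-to-one to an operator expression, which is exactly the kind of mapping the book formalizes.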


In recent years, with the emergence of massive data, graph-structured data, which can represent complex relationships between objects, has attracted increasing attention and poses great challenges to existing algorithms. Graph neural networks, as models that can reveal deep topological information, have begun to be widely applied in many fields such as communications, the life sciences, and economics and finance. This paper surveys the graph neural network models and applications proposed in recent years, grouped into the following categories: GNN models based on spatial methods, GNN models based on spectral methods, and GNN models based on generative methods. It also raises open problems for future research.

http://engine.scichina.com/publisher/scp/journal/SSM/50/3/10.1360/N012019-00133?slug=fulltext

A graph is a concise, abstract, and intuitive mathematical representation of objects and the relationships between them. Graph-structured data, that is, data with mutual relations, is ubiquitous and widely used in many fields. With the emergence of large amounts of data, traditional graph algorithms show serious limitations on deep, important problems such as node classification and link prediction. Graph neural network models account for the scale and heterogeneity of the input data as well as deep topological information, and they have shown convincing, reliable performance in mining effective topological information, extracting key complex features, and processing massive data quickly, for example in predicting the properties of chemical molecules [1], relation extraction from text [2,3], structural reasoning over graphics and images [4,5], link prediction and node clustering in social networks [6], completing networks with missing information [7], and predicting drug interactions [8].

The concept of the graph neural network was first proposed in 2005 by Gori et al. [9], who drew on results from neural network research to design a model for processing graph-structured data. In 2009, Scarselli et al. [10] elaborated on this model in detail. Since then, new graph neural network models and application studies have appeared steadily. In recent years, with growing interest in graph-structured data, the number of GNN research papers has risen rapidly, and both the research directions and the application areas of GNNs have expanded greatly.

Several surveys of graph neural networks already exist. Reference [11] surveys deep learning methods for graph-structured and manifold data, focusing on placing the various methods within a unified framework called geometric deep learning. Reference [12] divides GNN methods into three classes (semi-supervised learning, unsupervised learning, and recent advances) and introduces, analyzes, and compares them according to their development history. Reference [13] presents the original GNN model, its variants, and general frameworks, and divides GNN applications into structural scenarios, non-structural scenarios, and other scenarios. Reference [14] proposes a new taxonomy of GNNs, focuses on graph convolutional networks, and summarizes open-source code and benchmarks of GNN methods for different learning tasks.

This paper surveys the theory and applications of graph neural network models and discusses future directions and challenges. It differs from other surveys in that we give a new classification criterion and present the rich application results of GNNs. The paper is organized as follows: we first introduce three main classes of GNN models, namely GNNs based on spatial methods, on spectral methods, and on generative methods; we then introduce applications of these models to node classification, link prediction, graph generation, and related tasks; finally, we suggest directions for future research.


Graph neural network tutorial: Graph Convolutional Networks, Graph Sampling Methods, Applications, and a PyTorch Implementation


Andrew Gordon Wilson is an assistant professor at the Courant Institute of Mathematical Sciences and the Center for Data Science, New York University. He has served as area chair/SPC for AAAI 2018, AISTATS 2018, UAI 2018, NeurIPS 2018, AISTATS 2019, ICML 2019, UAI 2019, NeurIPS 2019, AAAI 2020, and ICLR 2020, and as Expo chair for ICML 2019 and 2020. Homepage: https://cims.nyu.edu/~andrewgw/

Bayesian Deep Learning and Probabilistic Model Construction

The key distinguishing property of the Bayesian approach is marginalization rather than relying on a single setting of the weights. Bayesian marginalization can especially improve the accuracy and calibration of modern deep neural networks, which are typically underspecified and can represent many compelling yet different solutions. Deep ensembles are shown to provide an effective mechanism for approximate Bayesian marginalization, and a related approach is proposed that further improves the predictive distribution, without significant overhead, by marginalizing within basins of attraction. The talk also studies the prior over functions implied by vague distributions over neural network weights, explaining the generalization properties of these models from a probabilistic perspective. From this perspective, it explains results that seem mysterious and at odds with neural network generalization, such as the ability to fit images with random labels, and shows that these results can be reproduced with Gaussian processes. It also shows that Bayesian model averaging mitigates double descent, so that greater flexibility yields monotonically improved performance. Finally, it offers a Bayesian perspective on tempering for calibrating predictive distributions.

Video: https://www.youtube.com/watch?v=E1qhGw8QxqY
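As a rough illustration of the deep-ensemble view of approximate Bayesian marginalization mentioned above, the sketch below (my own minimal example, not code from the talk) trains a few small networks from different random initializations on toy data and averages their predictive distributions; the architecture, data, and training schedule are arbitrary placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 10)            # toy inputs (placeholder for real data)
y = torch.randint(0, 3, (256,))     # toy labels, 3 classes

def make_model():
    # Small MLP; each ensemble member starts from a different random init.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

ensemble = []
for _ in range(5):
    model = make_model()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):                       # short training loop
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(X), y)
        loss.backward()
        opt.step()
    ensemble.append(model)

# Approximate Bayesian marginalization: average the members' predictive
# distributions, p(y|x) ~= (1/M) * sum_m p(y|x, theta_m).
with torch.no_grad():
    probs = torch.stack([m(X).softmax(dim=-1) for m in ensemble]).mean(dim=0)
print(probs[:2])
```

Each member can be read as one sample from the posterior over weights, so averaging their softmax outputs is a crude Monte Carlo approximation of the Bayesian predictive distribution.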


The fifth edition of this book continues to show how probability theory can be used to gain insight into real, everyday statistical problems. It is written for an introductory course in statistics and probability for students of engineering, computer science, mathematics, statistics, and the natural sciences, and therefore assumes a basic knowledge of calculus.

Chapter 1 gives a brief introduction to statistics, presenting its two branches, descriptive and inferential statistics, along with a short history of the subject and some of the people whose early work laid the foundation for today's work.

Chapter 2 covers descriptive statistics, showing the charts and tables used to describe a data set as well as the quantities used to summarize certain key properties of the data.

To draw conclusions from data, it is necessary to understand where the data came from. For example, it is often assumed that the data constitute a "random sample" from some population. To understand exactly what this means, and what its consequences are for relating properties of the sample to properties of the whole population, some understanding of probability is needed, and that is the subject of Chapter 3. The chapter introduces the idea of a probability experiment, explains the concept of the probability of an event, and presents the axioms of probability.

The study of probability continues in Chapter 4, which deals with the important concepts of random variables and expectation, and in Chapter 5, which considers some special types of random variables that often arise in applications: the binomial, Poisson, hypergeometric, normal, uniform, gamma, chi-square, t, and F random variables.
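For reference, the two definitions at the heart of Chapter 4, stated here for a discrete random variable X (my own summary, not an excerpt from the book):

```latex
E[X] = \sum_{x} x \, p(x), \qquad
\operatorname{Var}(X) = E\!\left[(X - E[X])^2\right] = E[X^2] - \left(E[X]\right)^2
```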


This book was written by Luciano Ramalho, who has spent nearly 20 years developing in Python, with Python heavyweights such as Victor Stinner and Alex Martelli serving as technical reviewers. It dissects programming details from the standpoint of language design, covers both Python 3 and Python 2, explains the causes of and remedies for the language pitfalls in Python that cannot be understood without hands-on practice, and teaches you to write idiomatic Python code.

● The Python data model: understand why special methods are the key to consistent object behavior (a minimal illustration follows).
● Data structures: take full advantage of the built-in types and understand the duality of Unicode text and bytes.
● Functions as objects: treat Python functions as first-class objects and see how this affects popular design patterns.
● Object-oriented idioms: learn about references, mutability, interfaces, operator overloading, and multiple inheritance by building classes.
● Control flow: learn to use context managers, generators, and coroutines, as well as concurrency via the concurrent.futures and asyncio packages.
● Metaprogramming: understand how properties, descriptors, class decorators, and metaclasses work.
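A minimal sketch of the data-model point in the first bullet (my own example in the spirit of the book's opening chapter, not an excerpt): implementing `__len__` and `__getitem__` is enough to make a user-defined class work with `len()`, indexing, slicing, and iteration.

```python
class Deck:
    """A toy card deck that plugs into Python's data model."""
    ranks = [str(n) for n in range(2, 11)] + list("JQKA")
    suits = "spades hearts diamonds clubs".split()

    def __init__(self):
        self._cards = [(r, s) for s in self.suits for r in self.ranks]

    def __len__(self):                # enables len(deck)
        return len(self._cards)

    def __getitem__(self, position):  # enables deck[0], slicing, iteration
        return self._cards[position]

deck = Deck()
print(len(deck))    # 52
print(deck[0])      # ('2', 'spades')
print(deck[-3:])    # last three cards, via slicing
```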


A key reason for the success of deep learning in many AI application areas is that rich knowledge is learned from massive data through complex deep network models. However, the high internal complexity of these models often makes it hard to understand their decisions, and this lack of interpretability limits practical deployment. It is therefore urgent to improve the interpretability of deep learning models and make them transparent in order to advance research in artificial intelligence. This paper systematically surveys progress on the interpretability of deep learning models, classifies existing methods according to the principle behind their interpretability, analyzes open problems in interpretability research in light of practical applications of interpretability methods in AI, and discusses trends in the field, offering a new perspective for grasping the state of the art and future research directions.


Few-shot learning is a current research hotspot. This paper summarizes few-shot meta-learning papers from 2016 to 2020 and divides them into four classes: data-augmentation-based, metric-learning-based, meta-optimization-based, and semantics-based. Well worth a look!

Abstract:

Deep neural networks have surpassed humans in tasks such as image recognition and classification. However, as new categories keep appearing, it remains a challenge to continually extend the learning capacity of such networks from a limited number of samples. Techniques such as meta-learning and few-shot learning have shown promising results: they can learn or generalize to a new category or task from prior knowledge. In this paper we survey the methods and evaluation metrics of existing few-shot meta-learning techniques in computer vision. We provide a taxonomy that classifies them into data augmentation, embedding, optimization, and semantics-based learning for few-shot, one-shot, and zero-shot settings. We then describe the important work in each category and discuss how it addresses the problem of learning from few samples. Finally, we compare these techniques on the commonly used benchmark datasets Omniglot and MiniImagenet, and discuss future directions for improving their performance toward the ultimate goal of surpassing humans.

Link: https://www.zhuanzhi.ai/paper/8d29a5f14fcd0cc9a1aa508d072fb328

Overview:

AI-based systems are becoming an important part of human life, both personal and professional. We are surrounded by AI-based machines and applications that make our lives easier, for example automatic mail filtering (spam detection), recommendations on shopping sites, and social networking features on smartphones [1,2,3,4]. This impressive progress has been made possible by breakthrough successes of machine and deep learning models [5], which make up a large part of the field of AI. Deep learning models are built on multi-layer perceptrons trained with gradient-based optimization techniques. Their two most common application areas are computer vision (CV), whose goal is to teach machines to see and perceive things the way humans do, and natural language processing (NLP) and natural language understanding (NLU), whose goal is to analyze and understand large amounts of natural language data. These models have achieved great success in image recognition [6,7,8], speech recognition [9,10,11,12,13], natural language processing and understanding [14,15,16,17,18], video analysis [19,20,21,22,23], cybersecurity [24,25,26,27,28,29,30], and other areas. The most common approach is supervised learning, in which a large number of data samples for a specific application are collected together with their labels to form a dataset that is split into three parts: training, validation, and test. During training, the training and validation data and their labels are fed to the model, which is fit, through backpropagation and optimization, into a hypothesis. During testing, test data are fed to the model, and based on the learned hypothesis the model predicts the output class of each test sample.

Thanks to the power of modern computers and systems [31,32], the ability to process large amounts of data has become remarkable. With advances in algorithms and models, deep learning has been able to catch up with, and in some cases surpass, humans. AlphaGo [33], an AI agent trained without any human guidance, defeated the human world champion at Go, an ancient board game considered ten times more complex than chess [34]; in DOTA, a complex multi-player strategy game, AI agents have beaten human players [35]; and for image recognition and classification, models such as ResNet [6] and Inception [36,37,38] achieve better-than-human performance on the popular ImageNet dataset, which contains over 14 million images in more than 1000 categories [39].

One of the ultimate goals of AI is to match or surpass humans on any given task. Achieving this requires minimizing the dependence on large, balanced, labeled datasets. Current models succeed on tasks with abundant labeled data, but on tasks with only a few labeled samples their performance drops significantly. Expecting a large, balanced dataset for every task is unrealistic: given the sheer variety of categories, it is nearly impossible to keep up with the labeling, and building labeled datasets costs time, manpower, and money. Humans, by contrast, can learn new classes quickly; shown a picture of an unfamiliar animal, a person can easily pick that animal out of a photo containing many kinds of animals. Humans can also learn new concepts or classes on the fly, whereas machines must go through expensive offline training and retraining of the entire model to learn new classes, and only when labeled data are available. Researchers and developers are motivated to bridge this gap between humans and machines. As potential solutions, there is a growing body of work on meta-learning [40-50], few-shot learning [51-54], low-resource learning [55-58], zero-shot learning [59-65], and related areas, all aiming to make models generalize better to new tasks that contain only a few labeled samples.

What is few-shot meta-learning?

In few-shot, low-shot, or n-shot learning (with n typically between 1 and 5), the basic idea is to train the model on many classes with a large number of samples per class; at test time, the model is given new classes (also called the novel set), each with only a few samples, and the number of classes is generally limited to five. In meta-learning, the goal is to generalize, or learn the learning process itself: the model is trained on specific tasks, and the resulting learner is applied to new sets of tasks. The objective is to find the hyperparameters and model weights that let the model adapt easily to new tasks without overfitting them. Two optimizations therefore run simultaneously: one learns the new task, the other trains the learner. Few-shot learning and meta-learning have attracted great interest in recent years.

Early work in meta-learning was done by Yoshua and Samy Bengio [67] and by Fei-Fei Li on few-shot learning [68]. Metric learning is one of the older techniques used; its goal is to learn an embedding space in which images are mapped to embedding vectors such that images of the same category cluster together while images of different categories lie far apart. Another popular approach is data augmentation, which produces more samples from the limited samples available. Semantics-based methods, where classification is based only on the name of the category and its attributes, are now widely studied; they were inspired by zero-shot learning applications.
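A minimal sketch of the metric-learning idea described above, in the spirit of prototypical networks (a toy example of my own with random stand-in "embeddings", not code from any surveyed paper): each class in the support set is summarized by the mean of its embeddings, and a query is assigned to the nearest prototype.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-way 5-shot episode: pretend these are embeddings from a trained encoder.
n_way, k_shot, dim = 5, 5, 64
support = rng.normal(size=(n_way, k_shot, dim)) + np.arange(n_way)[:, None, None]
query = support[2].mean(axis=0) + 0.1 * rng.normal(size=dim)  # a query near class 2

# Class prototypes: mean embedding of each class's support samples.
prototypes = support.mean(axis=1)                  # shape (n_way, dim)

# Nearest-prototype classification by Euclidean distance.
dists = np.linalg.norm(prototypes - query, axis=1)
print("predicted class:", int(np.argmin(dists)))   # expected: 2
```

In a real system the encoder producing the embeddings is what gets meta-trained; the classification step stays this simple.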

Transfer learning and self-supervised learning

The overall goal of transfer learning is to learn knowledge or experience from one set of tasks and transfer it to tasks in a similar domain [95]. The tasks used to acquire the knowledge have a large number of labeled samples, whereas the transfer task (the fine-tuning stage) has relatively little labeled data, not enough for the model to train and converge on its own. The performance of transfer learning depends on how related the two tasks are. During transfer, the classification layer is trained for the new task while the weights of the earlier layers of the model are kept fixed [96]. For each new transfer task, the learning rate and the number of layers to freeze must be chosen by hand. Meta-learning techniques, in contrast, can adapt to new tasks automatically and fairly quickly.
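A minimal sketch of the "freeze the earlier layers, retrain the classification layer" recipe described above (my own illustration; the ImageNet-pretrained ResNet-18, the 10-class head, and the dummy batch are arbitrary choices, and pretrained weights are downloaded on first use):

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on a data-rich source task (here ImageNet).
model = models.resnet18(pretrained=True)

# Freeze all pretrained weights ...
for p in model.parameters():
    p.requires_grad = False

# ... and replace the classification head for the new, data-poor target task.
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are updated during fine-tuning.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)

x = torch.randn(4, 3, 224, 224)   # dummy batch standing in for real images
logits = model(x)
print(logits.shape)               # torch.Size([4, 10])
```

A real workflow would add a data loader and training loop; how many layers to unfreeze and which learning rate to use are exactly the hand-tuned choices the paragraph above refers to.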

Self-supervised learning has attracted much attention in recent years [97,98,99]. Self-supervised learning (SSL) proceeds in two steps: first, the model is trained on a predefined pretext task using a large number of unlabeled samples; second, the learned model parameters are used to train or fine-tune a model for the main downstream task. The idea behind meta-learning and few-shot learning is very similar to self-supervised learning: both exploit prior knowledge to recognize or fine-tune on a new task. Studies have shown that self-supervised learning can be combined with few-shot learning to improve performance on new categories [100,101].

Organization of the taxonomy:

The main goal of techniques such as meta-learning, few-shot learning, low-resource learning, one-shot learning, and zero-shot learning is to enable deep learning models, through iterative training based on prior knowledge or experience, to generalize to new categories from only a few samples. Prior knowledge is obtained by training on labeled datasets with many samples and is then used to recognize new tasks with limited samples; we therefore group all these techniques under the few-shot umbrella. Since no predefined taxonomy exists for these methods, we divide them into four broad categories (see Figure 1): data-augmentation-based, metric-learning (embedding)-based, meta-optimization-based, and semantics-based. Data-augmentation-based techniques are very popular; the idea is to expand the prior knowledge by augmenting the few available samples and generating more diverse samples to train the model. In embedding-based techniques, data samples are mapped to a lower-dimensional space and classified according to the distances between their embeddings. In optimization-based techniques, a meta-optimizer is used during initial training to generalize the model better, so that it can adapt to new tasks more accurately. Semantics-based techniques use the semantics of the data together with the model's prior knowledge to learn or optimize for new categories.


[Introduction] Graph convolutional networks (GCNs), a generalized neural network architecture for graph-structured data that has emerged in recent years, have attracted attention and research from both academia and industry because of their distinctive computational capability. Traditional deep learning models such as LSTMs and CNNs perform well on data in Euclidean space but cannot be applied directly to non-Euclidean data. Researchers therefore use the abstract notion of a "graph" from graph theory to represent structured data in non-Euclidean space and use graph convolutional networks to exploit the graph's topology and mine the deep information hidden in graph-structured data. This article works through the derivations behind GCNs in detail to help readers understand them in depth.
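To make those derivations concrete, here is the standard GCN propagation rule of Kipf and Welling, H_next = ReLU(D_hat^{-1/2} (A + I) D_hat^{-1/2} H W), as a minimal NumPy sketch; the toy graph and random weights are my own placeholders, not code from the tutorial series.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph with 4 nodes and 3-dimensional node features.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))          # input node features H^(0)
W = rng.normal(size=(3, 8))          # layer weights W^(0)

# Renormalization trick: add self-loops, then symmetrically normalize.
A_hat = A + np.eye(4)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

# One GCN layer: aggregate neighbor features, transform, apply ReLU.
H_next = np.maximum(A_norm @ H @ W, 0.0)
print(H_next.shape)                  # (4, 8)
```

Stacking a few such layers and adding a softmax classifier gives the node-classification GCN; frameworks such as PyG expose this same rule as a ready-made layer (GCNConv).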

The tutorial series GNN-algorithms

This article is part of the tutorial series GNN-algorithms, which covers the theoretical foundations of GNNs in depth and also explains in detail how to implement various GNN models (GCN, GAT, GIN, SAGPool, etc.) with the TensorFlow GNN framework tf_geometric. The author of the series, wangyouze (https://github.com/wangyouze), is also a contributor to the tf_geometric framework.

GitHub link for the tutorial series GNN-algorithms: https://github.com/wangyouze/GNN-algorithms
GitHub link for the TensorFlow GNN framework tf_geometric: https://github.com/CrawlScript/tf_geometric


Study notes for Lecture 10 of Stanford's machine learning course, "Advice for applying machine learning". The lecture consists of seven parts (a minimal sketch illustrating parts 2-6 follows the list):

  1. Deciding what to try next
  2. Evaluating a hypothesis
  3. Model selection and training/validation/test sets
  4. Diagnosing bias vs. variance
  5. Regularization and bias/variance
  6. Learning curves
  7. Deciding what to try next (revisited)
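A minimal sketch of the workflow behind parts 2-6, using scikit-learn on synthetic data (my own example, not code from the course): split the data into training/validation/test sets, select a model by validation error, and compare training and validation error to diagnose high bias (both errors high) versus high variance (a large gap between them).

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X).ravel() + 0.3 * rng.normal(size=300)

# 60/20/20 split into training, validation, and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Model selection: pick the polynomial degree with the lowest validation error.
best = None
for degree in (1, 3, 5, 9):
    model = make_pipeline(PolynomialFeatures(degree), Ridge(alpha=1.0))
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    val_err = mean_squared_error(y_val, model.predict(X_val))
    # High train_err and val_err -> high bias; small train_err with much
    # larger val_err -> high variance.
    print(f"degree={degree}  train MSE={train_err:.3f}  val MSE={val_err:.3f}")
    if best is None or val_err < best[0]:
        best = (val_err, model)

# The test set is used only once, for the final unbiased estimate.
print("test MSE:", mean_squared_error(y_test, best[1].predict(X_test)))
```

Plotting these errors against training-set size gives the learning curves of part 6.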

At the ICML 2020 workshop on graph representation learning, Xavier Bresson, associate professor at NUS, gave a highly informative talk on "Benchmarking Graph Neural Networks".

Paper: Benchmarking Graph Neural Networks

Authors: Vijay Prakash Dwivedi, Chaitanya K. Joshi, Yoshua Bengio, et al.

Paper link: https://arxiv.org/pdf/2003.00982.pdf

Abstract: A large body of recent work has demonstrated the great potential of graph neural network (GNN) models, and many research teams keep improving and building on their basic modules. However, most studies use small datasets such as Cora and TU, on which even non-graph neural networks achieve respectable performance. Only in further comparisons on medium-sized datasets do the advantages of graph neural networks become apparent.

After Stanford GNN researcher Jure Leskovec and colleagues released the Open Graph Benchmark, another study aiming to build an "ImageNet for graph neural networks" has appeared. A paper from Nanyang Technological University, Loyola Marymount University, Université de Montréal, and MILA was recently posted to the preprint server. The authors introduce six medium-sized benchmark datasets (12k-70k graphs, 8-500 nodes) and evaluate several representative graph neural networks on them. Besides baselines that use only node features, the graph neural networks are split into two broad classes: with and without attention over edges. The GNN research community has long sought a common benchmark for evaluating new models, and this suite may make that goal attainable.


Although computer vision has made great progress in recent years, current models are still far from large-scale, practical deployment when it comes to perceiving and understanding complex visual scenes. To make full use of the massive amount of visual media data in everyday life, the perception and understanding of complex visual scenes has gradually become a research hotspot in computer vision.

This thesis studies the recognition, detection, and reasoning of visual content in complex visual scenes across four levels of visual scene understanding: object-level recognition, scene-level recognition, scene-level understanding, and scene-level reasoning. The technical route focuses on concrete scene-understanding tasks such as zero-shot object classification, image scene graph generation, image captioning, video moment retrieval, and visual question answering. Along this route, the main research contributions are as follows:

1) To address the semantic-loss problem common in zero-shot object classification models, the thesis proposes a new zero-shot learning network. It is the first to introduce two independent mapping branches that separate the originally conflicting tasks of image classification and image reconstruction, and it uses adversarial learning to transfer attributes between the reconstruction branch and the classification branch.

2) To address the fact that the optimization objective of scene graph generation models usually ignores the varying importance of different objects, the thesis proposes a new training framework that, for the first time, casts scene graph generation as a multi-agent cooperative decision problem, so that the quality of the entire scene graph can serve directly as the optimization objective. A counterfactual baseline model is also proposed to compute the local contribution of each object's class prediction to the overall scene graph quality.

3) Drawing on existing spatial attention mechanisms, the thesis proposes a channel attention mechanism for the first time. By fully exploiting the relationships among three dimensions of a CNN's feature maps (spatial, channel, and layer), it proposes a new spatial-and-channel attention network. On image captioning, the network not only greatly improves the quality of the generated captions but also helps people understand how the feature maps change during sentence generation (a simplified channel-attention sketch follows this list of contributions).

4) To address design flaws in the two mainstream frameworks for video moment retrieval (top-down and sparse bottom-up), the thesis proposes a new dense bottom-up framework. Decomposing moment-boundary localization into two sub-problems, relevance prediction and boundary regression, significantly reduces the difficulty of localizing action boundaries. A feature pyramid layer based on graph convolution is also proposed to further strengthen the encoding capability of the backbone network.

5) To address two important properties that current visual question answering models ignore (visual interpretability and question sensitivity), the thesis proposes a general mechanism for generating counterfactual samples: new counterfactual training samples are synthesized by masking important regions in the image or important words in the question and changing the ground-truth answer. Training with both the original and the counterfactual samples forces the VQA model to attend to the masked important content, improving its visual interpretability and question sensitivity.
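A minimal sketch of a generic channel-attention gate of the kind described in contribution 3 (my own simplified illustration in the squeeze-and-excite style, not the thesis's exact spatial-and-channel formulation): channel weights are computed from a globally pooled descriptor and used to rescale the channels of a feature map.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze channels to a global descriptor, then gate each channel."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # global average pool -> (b, c)
        return x * w.view(b, c, 1, 1)     # reweight each channel

feat = torch.randn(2, 64, 14, 14)          # dummy CNN feature map
print(ChannelAttention(64)(feat).shape)    # torch.Size([2, 64, 14, 14])
```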

Link:

https://zjuchenlong.github.io/
