1. Jiawei Han is a professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign, a Fellow of IEEE and ACM, and director of the Information Network Academic Research Center in the US. He has served as program committee chair of leading international conferences such as KDD, SDM, and ICDM, founded the ACM TKDD journal, and served as its editor-in-chief. He has published more than 600 papers in the areas of data mining, databases, and information networks. Jiawei Han's homepage · Jiawei Han's dblp
Title: Octet: Online Catalog Taxonomy Enrichment with Self-Supervision
Abstract:
Taxonomies are widely used across domains, especially for online item categorization, browsing, and search. Despite the prevalence of online catalog taxonomies, most of them are in practice maintained by humans, which is labor-intensive and difficult to scale. While taxonomy construction from scratch has been extensively studied in the literature, how to effectively enrich an existing incomplete taxonomy remains an open and important research question. Taxonomy enrichment requires not only robustness to emerging terms but also consistency between the existing taxonomy structure and the attachment of new terms. In this paper, we present Octet, a self-supervised end-to-end framework for online catalog taxonomy enrichment. Octet leverages heterogeneous information unique to online catalog taxonomies, such as user queries, items, and their relations to taxonomy nodes, while requiring no supervision other than the existing taxonomy. We propose a sequence-labeling model for term extraction and employ graph neural networks (GNNs) to capture the taxonomy structure as well as the query-item-taxonomy interactions for term attachment. Extensive experiments in different online domains demonstrate, through both automatic and human evaluation, that Octet outperforms state-of-the-art methods. Notably, Octet enriches an online catalog taxonomy in production to two times its original size in the open-world evaluation.
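To make the term-attachment step more concrete, the following is a minimal sketch (an assumption for illustration, not the authors' released code) of how a graph neural network over the existing taxonomy could score candidate parent nodes for a newly extracted term; all class names, dimensions, and toy data are hypothetical.

# Hypothetical sketch, not the authors' code: score candidate parent nodes for a new
# term using simple mean-aggregation message passing over the existing taxonomy.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaxonomyGNNLayer(nn.Module):
    """One round of mean-aggregation message passing over taxonomy edges."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(2 * dim, dim)

    def forward(self, node_feats, adj):
        # adj: (N, N) 0/1 adjacency matrix; average each node's neighbor features.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = (adj @ node_feats) / deg
        return F.relu(self.linear(torch.cat([node_feats, neigh], dim=-1)))

class AttachmentScorer(nn.Module):
    """Scores how plausible it is to attach a new term under each candidate parent."""
    def __init__(self, dim):
        super().__init__()
        self.gnn = TaxonomyGNNLayer(dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, node_feats, adj, term_feat, candidate_idx):
        h = self.gnn(node_feats, adj)               # structure-aware node embeddings
        parents = h[candidate_idx]                  # candidate parent representations
        pairs = torch.cat([parents, term_feat.expand_as(parents)], dim=-1)
        return self.mlp(pairs).squeeze(-1)          # higher score = more likely parent

# Toy usage: 5 existing taxonomy nodes, score 3 candidate parents for one new term.
num_nodes, dim = 5, 16
node_feats = torch.randn(num_nodes, dim)            # e.g. averaged term/query embeddings
adj = torch.eye(num_nodes)                          # placeholder taxonomy edges
scorer = AttachmentScorer(dim)
scores = scorer(node_feats, adj, torch.randn(dim), torch.tensor([0, 2, 4]))
print(scores)                                       # tensor of 3 attachment scores

In this toy setup the GNN layer mixes each taxonomy node with its neighbors before an MLP compares the candidate parent against the new term's embedding; Octet additionally exploits query-item-taxonomy interactions, which are omitted here for brevity.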
The questions "can machines think?" and "can machines do what humans do?" have long driven the development of artificial intelligence. Although recent artificial intelligence succeeds in many data-intensive applications, it still lacks the ability to learn from a limited number of examples and to generalize quickly to new tasks. To tackle this problem, one has to turn to machine learning, which supports the scientific study of artificial intelligence. In particular, a machine learning problem called Few-Shot Learning (FSL) targets this case. By drawing on prior knowledge, FSL can rapidly generalize to new tasks with only limited supervised experience, mimicking humans' ability to acquire knowledge from a few examples through generalization and analogy. It has been seen as a test-bed for true artificial intelligence, a way to reduce laborious data gathering and computationally costly training, and an antidote to learning from rare cases. With extensive work on FSL emerging, we give a comprehensive survey of it. We first give a formal definition of FSL. Then we point out the core issues of FSL, which turns the problem from "how to solve FSL" into "how to deal with the core issues". Accordingly, existing works, from the birth of FSL up to the most recently published ones, are categorized in a unified taxonomy, with a thorough discussion of the pros and cons of the different categories. Finally, we envision possible future directions for FSL in terms of problem setup, techniques, applications, and theory, hoping to provide insights to both beginners and experienced researchers.
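To ground the term "few-shot", the standard N-way K-shot episode setup can be sketched as follows (an assumption for illustration only; the function names and dummy dataset are hypothetical, and the survey covers many formulations beyond this one).

# Hypothetical sketch of the common N-way K-shot episode construction used in FSL.
import random
from collections import defaultdict

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """dataset: list of (example, label) pairs. Returns a support set and a query set."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = random.sample(list(by_class), n_way)        # pick N novel classes
    support, query = [], []
    for cls in classes:
        examples = random.sample(by_class[cls], k_shot + n_query)
        support += [(x, cls) for x in examples[:k_shot]]  # K labeled shots per class
        query += [(x, cls) for x in examples[k_shot:]]    # held-out evaluation examples
    return support, query

# Toy usage: 10 classes with 30 dummy examples each.
data = [(f"item_{c}_{i}", c) for c in range(10) for i in range(30)]
support, query = sample_episode(data)
print(len(support), len(query))  # 5 75

Each episode thus contains only K labeled examples per class in the support set, and the learner must classify the query examples by generalizing from those few shots plus prior knowledge.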
Along with the great success of deep neural networks, there is also growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers in the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
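As a concrete, purely illustrative example of a passive, local interpretation approach from this taxonomy, input-gradient saliency attributes a single prediction back to its input features; the toy model and input below are assumptions, not taken from any specific paper.

# Hypothetical sketch of one passive, local interpretation method: input-gradient saliency.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(1, 10, requires_grad=True)

logits = model(x)
predicted = logits.argmax(dim=1).item()     # explain the predicted class
logits[0, predicted].backward()             # gradient of that class score w.r.t. the input
saliency = x.grad.abs().squeeze(0)          # per-feature importance for this single input
print(saliency)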