Neural Structured Learning (NSL) is an open-source framework from Google for training deep neural networks with structured signals. It enables Neural Graph Learning, letting developers train neural networks using graphs. These graphs can come from many sources, such as knowledge graphs, medical records, genomic data, or multimodal relations (e.g., image-text pairs). NSL also extends to adversarial learning, where the structure among input examples is constructed dynamically via adversarial perturbations.

GitHub: https://github.com/tensorflow/neural-structured-learning

PDF: Neural Structured Learning in TensorFlow.pdf
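As a quick illustration of the adversarial-learning mode described above, here is a minimal sketch following the library's documented Keras workflow; the architecture, the `feature` input name, and all hyperparameter values are illustrative choices rather than anything NSL prescribes:

```python
import tensorflow as tf
import neural_structured_learning as nsl

# A plain Keras base model; the input is named so NSL can find it in the
# feature dictionary passed to fit().
inputs = tf.keras.Input(shape=(28, 28), name='feature')
x = tf.keras.layers.Flatten()(inputs)
x = tf.keras.layers.Dense(128, activation='relu')(x)
outputs = tf.keras.layers.Dense(10, activation='softmax')(x)
base_model = tf.keras.Model(inputs, outputs)

# Wrap the model with adversarial regularization: perturbed neighbors are
# generated on the fly and their loss is added with weight `multiplier`.
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(
    base_model, label_keys=['label'], adv_config=adv_config)

adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# NSL expects features and labels together in one dictionary, e.g.:
# adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)
```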

Related Content


Title: Convolutional Kernel Networks for Graph-Structured Data

Abstract:

This paper introduces a family of multilayer graph kernels and establishes new links between graph convolutional neural networks and kernel methods. The approach generalizes convolutional kernel networks to graph-structured data by representing graphs as a sequence of kernel feature maps, where each node carries information about local graph substructures. On the one hand, the kernel point of view offers an unsupervised, expressive, and easy-to-regularize data representation, which is useful when limited samples are available. On the other hand, the model can also be trained end-to-end on large-scale data, yielding new types of graph convolutional neural networks. The method is shown to be competitive on several graph classification benchmarks while offering simple model interpretation.
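To make the idea of a kernel feature map over local substructures concrete, here is a toy, single-layer, unsupervised sketch in NumPy: each edge-path leaving a node is compared against a set of fixed anchor points with a Gaussian kernel, and the similarities are summed. All names are hypothetical; the paper's actual construction learns the anchors via a Nystrom approximation and stacks several such layers:

```python
import numpy as np

def node_feature_map(adj, feats, anchors, sigma=0.5):
    """Toy one-layer kernel feature map over paths of length 1 (edges).

    adj     -- (n, n) binary adjacency matrix
    feats   -- (n, d) node attributes
    anchors -- (m, 2*d) anchor points standing in for learned landmarks

    Returns an (n, m) representation: node i's coordinates are Gaussian
    similarities between its outgoing edge-paths and each anchor, summed
    over paths, mimicking a kernel feature map.
    """
    n, _ = feats.shape
    out = np.zeros((n, anchors.shape[0]))
    for i in range(n):
        for j in np.nonzero(adj[i])[0]:
            path = np.concatenate([feats[i], feats[j]])  # attributes along the path
            dist2 = ((anchors - path) ** 2).sum(axis=1)
            out[i] += np.exp(-dist2 / (2 * sigma ** 2))  # Gaussian kernel vs. anchors
    return out
```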


Overview:

Learn how to solve challenging machine learning problems with TensorFlow, Google's revolutionary new software library for deep learning. If you have some background in basic linear algebra and calculus, this practical book introduces the fundamentals of machine learning by showing you how to design systems that detect objects in images, understand text, analyze video, and predict the properties of potential medicines.

TensorFlow for Deep Learning teaches concepts through practical examples and helps you build a foundation in deep learning from the ground up. It is ideal for practicing developers with experience designing software systems, and useful for scientists and other professionals familiar with scripting but not necessarily with designing learning algorithms.

Table of Contents:

  • Introduction to Deep Learning
  • Introduction to TensorFlow Primitives
  • Writing Better Functions and Classes
  • Linear and Logistic Regression with TensorFlow
  • Fully Connected Deep Networks
  • Hyperparameter Optimization
  • Convolutional Neural Networks
  • Recurrent Neural Networks
  • Reinforcement Learning
  • Training Large Deep Networks
  • The Future of Deep Learning

Author:

Reza Bosagh Zadeh is the founder and CEO of Matroid and an adjunct professor at Stanford University. His work focuses on machine learning, distributed computing, and discrete applied mathematics. He has served on the Technical Advisory Board of Databricks and has worked on AI research since joining Google's AI research team in 2005. His awards include a KDD Best Paper Award and Stanford's Gene Golub Outstanding Thesis Award. Homepage: http://stanford.edu/~rezab/


Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance in numerous graph-related tasks. However, existing GNN models mainly focus on designing graph convolution operations; the graph pooling (or downsampling) operations, which play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of the graph's topological information, we further introduce a structure learning mechanism to learn a refined graph structure for the pooled graph at each layer. By combining the HGP-SL operator with graph neural networks, we perform graph-level representation learning with a focus on the graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of our proposed model.
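The pooling step itself (score the nodes, keep the top fraction, induce the subgraph) can be sketched in a few lines of NumPy. Note that this uses a generic learnable-projection scoring as a stand-in for HGP-SL's node-information criterion and omits the paper's structure-learning refinement; all names are hypothetical:

```python
import numpy as np

def topk_pool(adj, feats, ratio=0.5, seed=0):
    """Generic top-k graph pooling: score nodes with a projection vector,
    keep the highest-scoring fraction, and take the induced subgraph."""
    n, d = feats.shape
    p = np.random.default_rng(seed).normal(size=d)  # stands in for a learned vector
    scores = feats @ p / np.linalg.norm(p)          # one scalar score per node
    k = max(1, int(np.ceil(ratio * n)))
    keep = np.sort(np.argsort(scores)[-k:])         # indices of retained nodes
    # Gate kept features by their scores (so the projection would receive
    # gradient in a learned setting), then induce the subgraph.
    sub_feats = feats[keep] * np.tanh(scores[keep])[:, None]
    sub_adj = adj[np.ix_(keep, keep)]
    return sub_adj, sub_feats, keep
```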


In this paper, we investigate the challenges of using reinforcement learning agents for question-answering over knowledge graphs for real-world applications. We examine the performance metrics used by state-of-the-art systems and determine that they are inadequate for such settings. More specifically, they do not evaluate the systems correctly in situations where no answer is available, and thus agents optimized for these metrics are poor at modeling confidence. We introduce a simple new performance metric for evaluating question-answering agents that is more representative of practical usage conditions, and optimize for this metric by extending the binary reward structure used in prior work to a ternary reward structure which also rewards an agent for not answering a question rather than giving an incorrect answer. We show that this can drastically improve the precision of answered questions while declining to answer only a limited number of questions that were previously answered correctly. Employing a supervised learning strategy using depth-first-search paths to bootstrap the reinforcement learning algorithm further improves performance.
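The core of the proposal is the reward shape rather than any particular architecture; a minimal sketch of such a ternary reward follows. The three values are illustrative placeholders, not the paper's settings:

```python
def ternary_reward(answered: bool, correct: bool,
                   r_correct: float = 1.0,
                   r_abstain: float = 0.0,
                   r_wrong: float = -1.0) -> float:
    """Ternary reward: abstaining earns more than answering incorrectly,
    so an agent optimizing expected reward learns to decline questions it
    is not confident about instead of guessing."""
    if not answered:
        return r_abstain
    return r_correct if correct else r_wrong
```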


Deep structured models are widely used for tasks like semantic segmentation, where explicit correlations between variables provide important prior information which generally helps to reduce the data needs of deep nets. However, current deep structured models are restricted by an oftentimes very local neighborhood structure, which cannot be increased for computational-complexity reasons, and by the fact that the output configuration, or a representation thereof, cannot be transformed further. Very recent approaches that address these issues include graphical model inference inside deep nets so as to permit subsequent non-linear output space transformations. However, optimization of those formulations is challenging and not well understood. Here, we develop a novel model which generalizes existing approaches, such as structured prediction energy networks, and discuss a formulation which maintains applicability of existing inference techniques.
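For intuition about the kind of structured model being generalized, here is a toy chain energy with unary and pairwise terms and brute-force MAP inference. Everything here is a hypothetical illustration: real systems use message passing or dynamic programming, and the paper's point is to allow further nonlinear transformations of (a representation of) the inferred configuration:

```python
import itertools
import numpy as np

def chain_energy(y, unary, pairwise):
    """Energy of labeling y under unary scores (n, L) and a shared pairwise
    table (L, L) on a chain, i.e. the 'very local neighborhood structure'
    referred to above."""
    e = sum(unary[i, y[i]] for i in range(len(y)))
    e += sum(pairwise[y[i], y[i + 1]] for i in range(len(y) - 1))
    return e

def map_inference(unary, pairwise):
    """Exhaustive minimum-energy labeling over all L**n configurations;
    feasible only for tiny chains, but enough to show the object that
    deeper layers would transform further."""
    n, num_labels = unary.shape
    return min(itertools.product(range(num_labels), repeat=n),
               key=lambda y: chain_energy(y, unary, pairwise))
```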


Most previous work on neural text generation from graph-structured data relies on standard sequence-to-sequence methods. These approaches linearise the input graph to be fed to a recurrent neural network. In this paper, we propose an alternative encoder based on graph convolutional networks that directly exploits the input structure. We report results on two graph-to-sequence datasets that empirically show the benefits of explicitly encoding the input graph structure.
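A single graph-convolution step of the kind such an encoder stacks can be sketched as follows. This is a hypothetical NumPy illustration in the style of Kipf and Welling's GCN, not the authors' code:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph convolution: symmetrically normalized adjacency (with
    self-loops) times node features times a weight matrix, then ReLU.
    An encoder applies a few of these directly to the input graph instead
    of linearising it for a recurrent network."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # degree^(-1/2), > 0 with self-loops
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, a_norm @ feats @ weight)
```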


Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59x speedup without any loss of perplexity on a Penn TreeBank language modeling task. It is also successfully evaluated through a compact model with only 2.69M weights for machine question answering on the SQuAD dataset. Our approach is successfully extended to non-LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is publicly available at https://github.com/wenwei202/iss-rnns
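The key idea, grouping one hidden dimension's weights across all basic structures so that sparsifying a group shrinks every structure consistently, can be sketched as a group-lasso penalty. This is a simplified, hypothetical rendering; the paper's ISS groups also span structures this sketch leaves out, such as the output projection:

```python
import numpy as np

def iss_group_lasso(w_x, w_h, lam=1e-4):
    """Group lasso over simplified Intrinsic Sparse Structures.

    w_x -- (d_in, 4*h) input-to-gate weights for the 4 LSTM gates
    w_h -- (h, 4*h) recurrent weights for the 4 LSTM gates

    Component k groups column k of every gate block together with row k of
    the recurrent matrix, so driving one group to zero removes hidden unit
    k from all structures at once and keeps dimensions consistent.
    """
    h = w_h.shape[0]
    penalty = 0.0
    for k in range(h):
        parts = [w_x[:, g * h + k] for g in range(4)]   # incoming input weights
        parts += [w_h[:, g * h + k] for g in range(4)]  # incoming recurrent weights
        parts.append(w_h[k, :])                         # outgoing recurrent row
        penalty += np.linalg.norm(np.concatenate(parts))
    return lam * penalty
```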
