Graph convolution operators bring the advantages of deep learning to a variety of graph and mesh processing tasks previously deemed out of reach. With their continued success comes the desire to design more powerful architectures, often by adapting existing deep learning techniques to non-Euclidean data. In this paper, we argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning. We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs). We conjecture that, like RBFs, graph convolution layers would benefit from the addition of simple functions to the powerful convolution kernels. We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator. We experimentally demonstrate the effectiveness of our technique and show the improved performance is the consequence of more than the increased number of parameters. Operators equipped with the affine skip connection markedly outperform their base versions on every task we evaluated, namely shape reconstruction, dense shape correspondence, and graph classification. We hope our simple and effective approach will serve as a solid baseline and help ease future research in graph neural networks.
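To make the construction concrete, the following is a minimal sketch of an affine skip connection in PyTorch. It assumes the fully connected layer acts on each vertex's input features and that its output is summed with that of the wrapped graph convolution; the class and argument names are hypothetical, not taken from any published implementation.

```python
import torch.nn as nn


class AffineSkipConnection(nn.Module):
    """Combine any graph convolution operator with a parallel affine branch.

    A sketch under the assumption that the affine (fully connected) layer
    is applied per vertex and its output is added to the convolution output.
    """

    def __init__(self, conv, in_channels, out_channels):
        super().__init__()
        self.conv = conv  # any graph convolution operator, e.g. a message-passing layer
        self.affine = nn.Linear(in_channels, out_channels)  # per-vertex affine branch

    def forward(self, x, *conv_args, **conv_kwargs):
        # Convolution over the graph plus an affine transform of the raw input features.
        return self.conv(x, *conv_args, **conv_kwargs) + self.affine(x)
```

Wrapping an existing operator then only requires passing the graph arguments through, e.g. `AffineSkipConnection(GCNConv(16, 32), 16, 32)(x, edge_index)` with a PyTorch Geometric `GCNConv` (shown purely for illustration).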