Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data. Such an ability has strong implications in a wide variety of fields whose data is inherently relational and for which conventional neural networks do not perform well. Indeed, as recent reviews attest, research in the area of GNNs has grown rapidly and has led to the development of a variety of GNN algorithm variants as well as to the exploration of groundbreaking applications in chemistry, neurology, electronics, and communication networks, among others. At the current stage of research, however, the efficient processing of GNNs remains an open challenge for several reasons. Besides their novelty, GNNs are hard to compute due to their dependence on the input graph, their combination of dense and very sparse operations, and the need to scale to huge graphs in some applications. In this context, this paper aims to make two main contributions. On the one hand, a review of the field of GNNs is presented from the perspective of computing. This includes a brief tutorial on GNN fundamentals, an overview of the evolution of the field over the last decade, and a summary of the operations carried out in the multiple phases of different GNN algorithm variants. On the other hand, an in-depth analysis of current software and hardware acceleration schemes is provided, from which a hardware-software, graph-aware, and communication-centric vision for GNN accelerators is distilled.
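The "combination of dense and very sparse operations" mentioned above can be made concrete with a minimal sketch of one message-passing layer: neighbor aggregation is a sparse matrix product driven by the input graph's adjacency structure, while the feature update is a dense matrix multiply. The function and graph below are illustrative assumptions, not a specific algorithm from this survey.

```python
import numpy as np
from scipy.sparse import csr_matrix

def gnn_layer(adj, feats, weight):
    """One simplified message-passing layer: a very sparse neighbor
    aggregation (SpMM) followed by a dense feature transform (GEMM)."""
    agg = adj @ feats                     # sparse: gather/sum over neighbors
    return np.maximum(agg @ weight, 0.0)  # dense: GEMM + ReLU

# Toy 4-node chain graph (edges 0-1, 1-2, 2-3) with self-loops added.
edges = [(0, 1), (1, 2), (2, 3)]
rows = [i for e in edges for i in e] + list(range(4))
cols = [j for e in edges for j in reversed(e)] + list(range(4))
adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(4, 4))

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))    # per-node input features
weight = rng.standard_normal((8, 16))  # dense layer weights
out = gnn_layer(adj, feats, weight)
print(out.shape)  # (4, 16): one updated feature vector per node
```

In real workloads the adjacency matrix can have billions of rows with only a handful of nonzeros each, which is why the sparse aggregation step, not the dense transform, typically dominates and motivates the graph-aware acceleration schemes surveyed here.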