Graph Neural Networks (GNNs) present a fundamental hardware challenge by fusing irregular, memory-bound graph traversals with regular, compute-intensive dense matrix operations. While frameworks such as PyTorch Geometric (PyG) and Deep Graph Library (DGL) prioritize high-level usability, they do not address these divergent execution characteristics. As a result, they fall back on generic kernels that suffer from poor cache locality, excessive memory movement, and substantial intermediate allocations. To bridge this gap, we present Morphling, a domain-specific code synthesizer that compiles high-level GNN specifications into portable, backend-specialized implementations targeting OpenMP, CUDA, and MPI. It achieves this by instantiating a library of optimized, architecture-aware primitives tailored to each execution environment. Morphling also incorporates a runtime sparsity-aware execution engine that dynamically selects dense or sparse execution paths using input feature statistics, reducing unnecessary computation on zero-valued entries. We evaluate Morphling on eleven real-world datasets spanning diverse graph structures, feature dimensionalities, and sparsity regimes. Morphling improves per-epoch training throughput by an average of 20X on CPUs, 19X on GPUs, and 6X in distributed settings over PyG and DGL, with peak speedups reaching 66X. Morphling's memory-efficient layouts further reduce peak memory consumption by up to 15X, enabling large-scale GNN training on commodity hardware. These findings demonstrate that specialized, architecture-aware code synthesis provides an effective and scalable path toward high-performance GNN execution across diverse parallel and distributed platforms.
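The runtime density-based dispatch described above can be illustrated with a minimal sketch. The `aggregate` function, the adjacency-list representation, and the `density_threshold` parameter below are illustrative assumptions for exposition, not Morphling's actual API: the idea is simply to measure the fraction of nonzero feature entries and route neighbor aggregation down a dense or a sparse path accordingly.

```python
import numpy as np

def aggregate(features, adj_rows, density_threshold=0.25):
    """Sum-aggregate neighbor features, choosing a dense or sparse path.

    features: (num_nodes, dim) array of node features.
    adj_rows: list where adj_rows[i] holds the neighbor indices of node i.
    density_threshold: illustrative cutoff on the nonzero fraction.
    """
    density = np.count_nonzero(features) / features.size
    if density >= density_threshold:
        # Dense path: sum full neighbor rows with vectorized indexing.
        return np.array([features[nbrs].sum(axis=0) for nbrs in adj_rows])
    # Sparse path: touch only the nonzero entries of each neighbor row,
    # skipping computation on zero-valued features.
    out = np.zeros((len(adj_rows), features.shape[1]))
    for i, nbrs in enumerate(adj_rows):
        for j in nbrs:
            row = features[j]
            nz = row.nonzero()[0]
            out[i, nz] += row[nz]
    return out
```

Both paths compute the same aggregation; the dispatch only trades vectorized throughput on dense inputs against skipped work on sparse ones, which is the trade-off the engine exploits at runtime.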