The AI hardware boom has led modern data centers to adopt HPC-style architectures centered on distributed, GPU-centric computation. Large GPU clusters interconnected by fast RDMA networks and backed by high-bandwidth NVMe storage enable scalable computation and rapid access to storage-resident data. Tensor computation runtimes (TCRs), such as PyTorch, originally designed for AI workloads, have recently been shown to accelerate analytical workloads. However, prior work has primarily considered settings where the data fits in aggregated GPU memory. In this paper, we systematically study how TCRs can support scalable, distributed query processing for large-scale, storage-resident OLAP workloads. Although TCRs provide abstractions for network and storage I/O, naive use often underutilizes GPU and I/O bandwidth due to insufficient overlap between computation and data movement. As a core contribution, we present PystachIO, a PyTorch-based distributed OLAP engine that combines fast network and storage I/O with key optimizations to maximize GPU, network, and storage utilization. Our evaluation shows up to 3x end-to-end speedups over existing distributed GPU-based query processing approaches.
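To make the overlap issue concrete, the following is a minimal sketch (not PystachIO's actual implementation) of how computation and data movement can be pipelined in PyTorch so that the copy engine and the compute engine stay busy simultaneously. The function names (`pipelined_filter`, `launch_copy`), the chunking scheme, and the toy selection predicate are illustrative assumptions; they only demonstrate the general technique of double-buffered, stream-based overlap that naive TCR usage lacks.

```python
import torch

def pipelined_filter(host_chunks, predicate):
    """Filter CPU-resident chunks on the GPU, overlapping H2D copies with compute."""
    copy_stream = torch.cuda.Stream()              # dedicated stream for host-to-device copies
    compute_stream = torch.cuda.current_stream()
    in_flight = None                               # (pinned, device_tensor, copy_done_event)
    results = []

    def launch_copy(chunk):
        pinned = chunk.pin_memory()                # pinned memory enables truly async copies
        with torch.cuda.stream(copy_stream):
            dev = pinned.to("cuda", non_blocking=True)
            done = torch.cuda.Event()
            done.record(copy_stream)
        return pinned, dev, done

    def consume(entry):
        _, dev, done = entry
        compute_stream.wait_event(done)            # compute waits only for its own chunk's copy
        dev.record_stream(compute_stream)          # inform the allocator of cross-stream use
        results.append(dev[predicate(dev)])        # stand-in for a real selection kernel

    for chunk in host_chunks:
        nxt = launch_copy(chunk)                   # copy chunk i while chunk i-1 is computed
        if in_flight is not None:
            consume(in_flight)
        in_flight = nxt
    if in_flight is not None:
        consume(in_flight)                         # drain the final chunk
    return torch.cat(results)

# Usage: scan chunks of a column and keep positive values.
chunks = [torch.randn(1_000_000) for _ in range(8)]
out = pipelined_filter(chunks, lambda t: t > 0)
```

The key design point the sketch illustrates is that the transfer of chunk *i* is issued before the computation on chunk *i−1*, so the PCIe/NVLink copy and the GPU kernel execute concurrently rather than serially.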