As large language models (LLMs) continue to grow in size, distributed inference has become increasingly important. Model-parallel strategies must now scale efficiently not only across multiple GPUs but also across multiple nodes. In this work, we present a detailed performance study of multi-node distributed inference using LLMs on GPU-based supercomputers. We conduct experiments with several state-of-the-art inference engines alongside YALIS, a research-oriented prototype engine designed for controlled experimentation. We analyze the strong-scaling behavior of different model-parallel schemes and identify key bottlenecks. Because all-reduce operations are a common performance bottleneck, we develop NVRAR, a hierarchical all-reduce algorithm based on recursive doubling with NVSHMEM. NVRAR achieves 1.9x-3.6x lower latency than NCCL for message sizes between 128 KB and 2 MB on HPE Slingshot and InfiniBand interconnects. Integrated into YALIS, NVRAR reduces end-to-end batch latency by up to 1.72x for the Llama 3.1 405B model in multi-node decode-heavy workloads using tensor parallelism.
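For readers unfamiliar with the communication pattern the abstract names, the sketch below is a minimal pure-Python simulation of a recursive-doubling all-reduce (sum), the textbook pattern that NVRAR builds on. It is not the paper's hierarchical NVSHMEM implementation; the function name and structure here are our own illustrative assumptions.

```python
# Illustrative sketch only: the classic recursive-doubling all-reduce
# pattern underlying NVRAR, simulated in pure Python. This is NOT the
# paper's NVSHMEM implementation.

def recursive_doubling_allreduce(values):
    """Simulate a recursive-doubling all-reduce (sum) over len(values) ranks.

    In each of the log2(p) rounds, rank r is paired with partner r XOR 2^k;
    partners exchange their current partial sums, so every rank holds the
    full global sum after the final round. p must be a power of two.
    """
    p = len(values)
    assert p & (p - 1) == 0 and p > 0, "rank count must be a power of two"
    partial = list(values)  # partial[r] is rank r's current accumulator
    step = 1
    while step < p:
        # One communication round: all pairwise exchanges in a round happen
        # concurrently on real hardware, so we snapshot before updating.
        snapshot = list(partial)
        for rank in range(p):
            partner = rank ^ step  # XOR pairing yields disjoint pairs
            partial[rank] = snapshot[rank] + snapshot[partner]
        step <<= 1
    return partial

if __name__ == "__main__":
    result = recursive_doubling_allreduce([1.0, 2.0, 3.0, 4.0])
    print(result)  # every rank ends with the global sum: [10.0, 10.0, 10.0, 10.0]
```

Because the pattern completes in log2(p) rounds rather than the 2(p-1) steps of a ring all-reduce, it tends to favor latency-bound message sizes, consistent with the 128 KB to 2 MB range where the abstract reports NVRAR's gains over NCCL.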