Serving foundation model inference is a pivotal component of contemporary AI applications; such a service is usually hosted in a centralized data center on a cluster of homogeneous high-performance GPUs. In this paper, we explore how to deploy such a service in an environment that is heterogeneous in both compute capability and network connectivity, as an alternative that reduces the high cost of inference. We propose HexGen, a distributed inference engine that supports asymmetric partitioning of the inference computation along both tensor model parallelism and pipeline parallelism. HexGen can be deployed over a set of different GPUs connected by a fully heterogeneous network; its key technical contribution is a scheduling algorithm that allocates the asymmetric inference tasklets among these GPUs across the different network links. We formulate scheduling as a constrained optimization problem and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct an extensive empirical study to evaluate the efficiency of HexGen by serving the state-of-the-art Llama-2 (70B) model. The experimental results suggest that, given the same budget, HexGen can achieve up to 2.3 times lower latency deadlines or tolerate up to 4 times higher request rates compared with the homogeneous baseline. Our implementation is available at https://github.com/Relaxed-System-Lab/HexGen.
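To illustrate the kind of search the abstract describes, the sketch below evolves an assignment of heterogeneous GPUs to pipeline stages, scoring each candidate by a simple cost model in which the slowest stage bounds pipeline throughput and GPUs within a stage pool their throughput via tensor parallelism. The device list, relative throughput numbers, and cost model are illustrative assumptions, not HexGen's actual scheduler or measurements from the paper.

```python
import random

# Hypothetical device pool: (name, relative throughput). Illustrative values only.
GPUS = [("A100", 4.0), ("A100", 4.0), ("3090", 1.5), ("3090", 1.5),
        ("3090", 1.5), ("T4", 1.0), ("T4", 1.0), ("T4", 1.0)]

NUM_STAGES = 4  # number of pipeline stages; each GPU is assigned to one stage

def cost(assignment):
    """Lower is better: pipeline latency is bounded by the slowest stage;
    GPUs in the same stage pool throughput via tensor parallelism."""
    stage_tp = [0.0] * NUM_STAGES
    for gpu_idx, stage in enumerate(assignment):
        stage_tp[stage] += GPUS[gpu_idx][1]
    if min(stage_tp) == 0:          # an empty pipeline stage is infeasible
        return float("inf")
    return 1.0 / min(stage_tp)      # slowest stage dominates latency

def evolve(pop_size=30, generations=200, seed=0):
    """Simple evolutionary search: truncation selection, one-point
    crossover, and random mutation over stage assignments."""
    rng = random.Random(seed)
    pop = [[rng.randrange(NUM_STAGES) for _ in GPUS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]        # keep the fitter half
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(GPUS))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # occasional mutation
                child[rng.randrange(len(GPUS))] = rng.randrange(NUM_STAGES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve()
print("assignment:", best, "cost:", cost(best))
```

HexGen's real formulation additionally accounts for heterogeneous network links between GPUs and for memory constraints; this sketch only conveys the evolutionary-search structure over asymmetric parallel configurations.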