Deploying large language models (LLMs) on embedded devices remains a significant research challenge due to the high computational and memory demands of LLMs and the limited hardware resources available in such environments. While embedded FPGAs have demonstrated performance and energy efficiency in traditional deep neural networks, their potential for LLM inference remains largely unexplored. Recent efforts to deploy LLMs on FPGAs have primarily relied on large, expensive cloud-grade hardware and have shown promising results only on relatively small LLMs, limiting their real-world applicability. In this work, we present Hummingbird, a novel FPGA accelerator designed specifically for LLM inference on embedded FPGAs. Hummingbird is smaller, targeting embedded FPGAs such as the KV260 and ZCU104 with 67% LUT, 39% DSP, and 42% power savings over existing research. Hummingbird is stronger, targeting LLaMA3-8B and supporting longer contexts, overcoming the typical 4GB memory constraint of embedded FPGAs through offloading strategies. Finally, Hummingbird is faster, achieving 4.8 tokens/s and 8.6 tokens/s for LLaMA3-8B on the KV260 and ZCU104, respectively, with 93-94% model bandwidth utilization, outperforming the prior state of the art, which achieved 4.9 tokens/s for LLaMA2-7B at 84% bandwidth utilization. We further demonstrate the viability of industrial applications by deploying Hummingbird on a cost-optimized Spartan UltraScale FPGA, paving the way for affordable LLM solutions at the edge.