For Human Action Recognition (HAR) tasks, 3D Convolutional Neural Networks (3D CNNs) have proven highly effective, achieving state-of-the-art results. This study introduces a novel streaming-architecture-based toolflow for mapping such models onto FPGAs, taking into account both the model's inherent characteristics and the features of the targeted FPGA device. The HARFLOW3D toolflow takes as input a 3D CNN in ONNX format and a description of the FPGA characteristics, and generates a design that minimizes the latency of the computation. The toolflow comprises several parts: i) a 3D CNN parser, ii) a performance and resource model, iii) a scheduling algorithm for executing 3D models on the generated hardware, iv) a resource-aware optimization engine tailored to 3D models, and v) an automated mapping to synthesizable code for FPGAs. The toolflow's ability to support a broad range of models and devices is demonstrated through experiments on various 3D CNN and FPGA system pairs. Furthermore, the toolflow has produced high-performing results for 3D CNN models that have not previously been mapped to FPGAs, demonstrating the potential of FPGA-based systems in this space. Overall, HARFLOW3D delivers latency competitive with a range of state-of-the-art hand-tuned approaches, achieving up to 5$\times$ better performance than some existing works.
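To make the five-stage flow above concrete, the following is a minimal, purely illustrative Python sketch of such a pipeline. All class, function, and parameter names (e.g., `parse_onnx`, `FPGASpec`, the 200 MHz clock, the device resource figures) are hypothetical assumptions for illustration and do not reflect HARFLOW3D's actual API, models, or results, which are not described in the abstract.

```python
# Illustrative sketch only: names and numbers are assumptions, not HARFLOW3D's API.
from dataclasses import dataclass, field


@dataclass
class FPGASpec:
    """Coarse description of the target device's resources (hypothetical fields)."""
    dsp: int
    bram_kb: int
    luts: int


@dataclass
class LayerNode:
    """A single 3D-CNN layer as produced by the parser stage."""
    name: str
    op_type: str   # e.g. "Conv3D", "Pool3D"
    macs: int      # multiply-accumulate count for this layer


@dataclass
class Design:
    """Result of the flow: a layer schedule plus an estimated latency."""
    schedule: list = field(default_factory=list)
    latency_ms: float = 0.0


def parse_onnx(model_path: str) -> list:
    """(i) Parser: would read the ONNX graph; stubbed here with a fixed layer list."""
    return [LayerNode("conv1", "Conv3D", 2_000_000),
            LayerNode("pool1", "Pool3D", 50_000)]


def estimate_latency(layer: LayerNode, fpga: FPGASpec) -> float:
    """(ii) Performance/resource model: toy estimate of one MAC per DSP per cycle at 200 MHz."""
    cycles = layer.macs / max(fpga.dsp, 1)
    return cycles / 200e3  # milliseconds at an assumed 200 MHz clock


def schedule_and_optimize(layers: list, fpga: FPGASpec) -> Design:
    """(iii)-(iv) Scheduling and resource-aware optimization, collapsed into one greedy pass."""
    design = Design()
    for layer in layers:
        design.schedule.append(layer.name)
        design.latency_ms += estimate_latency(layer, fpga)
    return design


def emit_hls(design: Design) -> str:
    """(v) Mapping to synthesizable code: here just a placeholder string."""
    return f"// generated pipeline for {len(design.schedule)} layers"


if __name__ == "__main__":
    fpga = FPGASpec(dsp=2520, bram_kb=38_000, luts=274_000)  # assumed device figures
    layers = parse_onnx("model.onnx")
    design = schedule_and_optimize(layers, fpga)
    print(emit_hls(design), f"estimated latency: {design.latency_ms:.3f} ms")
```

In practice each stage is far richer (the scheduler and optimizer in particular explore the design space rather than making a single greedy pass); the sketch only mirrors the stage boundaries named in the abstract.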