Large language models (LLMs) deliver impressive performance but incur prohibitive memory and compute costs at deployment. Model pruning is an effective way to reduce these overheads, yet existing approaches face a tradeoff: unstructured sparsity, where nonzeros can appear anywhere, preserves accuracy but yields irregular access patterns that preclude practical GPU acceleration, while semi-structured 2:4 sparsity is hardware-friendly but enforces a rigid 50% pattern that degrades model quality. To bridge this gap, we introduce PATCH, a hybrid sparsity framework that supports any effective sparsity ratio between 0% and 50%. PATCH partitions weight matrices into tiles and assigns each tile to be either dense or 2:4 sparse via a learnable mask selection mechanism. This design provides fine-grained control over the accuracy-acceleration tradeoff and supports non-uniform sparsity across layers, leading to superior overall quality. Across models from 0.5B to 8B parameters, PATCH consistently narrows the gap to dense accuracy while delivering practical speedups. For instance, on LLaMA-2 7B with an A6000 GPU, PATCH achieves 1.18x-1.38x end-to-end speedup over dense baselines while improving accuracy by 0.37%-2.96% compared to the state-of-the-art 2:4 pruning method, MaskLLM.
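To make the tile-wise hybrid scheme concrete, the sketch below partitions a weight matrix into square tiles and applies a 2:4 mask (keep the 2 largest-magnitude weights in every group of 4 along the input dimension) to a chosen fraction of tiles, leaving the rest dense. The tile size, the magnitude-based tile scoring, and the function names are illustrative assumptions; PATCH itself selects per-tile masks through a learnable mechanism rather than this heuristic.

```python
# Minimal sketch of tile-wise hybrid dense / 2:4 sparsity (assumption-laden,
# not the PATCH algorithm: tile scoring here is a simple magnitude heuristic).
import torch


def two_four_mask(tile: torch.Tensor) -> torch.Tensor:
    """2:4 mask: keep the 2 largest-magnitude weights in every group of 4
    consecutive elements along the last (input) dimension."""
    rows, cols = tile.shape  # assumes cols is a multiple of 4
    groups = tile.abs().reshape(rows, cols // 4, 4)
    topk = groups.topk(2, dim=-1).indices          # top-2 per group of 4
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, topk, True)
    return mask.reshape(rows, cols)


def hybrid_prune(weight: torch.Tensor, tile: int = 128,
                 sparse_frac: float = 0.5) -> torch.Tensor:
    """Split `weight` into tile x tile blocks (dims assumed divisible by `tile`)
    and apply 2:4 sparsity to the `sparse_frac` fraction of tiles that lose the
    least weight magnitude, keeping the remaining tiles dense.

    Overall sparsity = 0.5 * sparse_frac, i.e. anywhere between 0% and 50%.
    """
    out = weight.clone()
    blocks = [(r, c) for r in range(0, weight.shape[0], tile)
                     for c in range(0, weight.shape[1], tile)]
    # Score each tile by the total magnitude a 2:4 mask would zero out.
    scores = []
    for r, c in blocks:
        blk = weight[r:r + tile, c:c + tile]
        scores.append((blk.abs() * ~two_four_mask(blk)).sum().item())
    order = sorted(range(len(blocks)), key=lambda i: scores[i])
    n_sparse = int(sparse_frac * len(blocks))
    for i in order[:n_sparse]:
        r, c = blocks[i]
        blk = out[r:r + tile, c:c + tile]
        blk *= two_four_mask(blk)  # in-place: sparsify this tile to 2:4
    return out
```

Setting `sparse_frac` between 0 and 1 interpolates between a fully dense and a fully 2:4-sparse matrix, which is the accuracy-acceleration knob the abstract describes; in the actual method the dense/sparse assignment is learned per tile and may be non-uniform across layers.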