The presence of outliers in the weights and activations of Large Language Models (LLMs) makes them difficult to quantize. Recent work has leveraged rotations to mitigate these outliers. In this work, we propose methods that learn fusible rotations by minimizing principled and cheap proxy objectives for the weight quantization error. We primarily focus on GPTQ as the quantization method. Our main method, OptRot, reduces weight outliers simply by minimizing the element-wise fourth power of the rotated weights. We show that OptRot outperforms both Hadamard rotations and more expensive, data-dependent methods such as SpinQuant and OSTQuant for weight quantization. It also improves activation quantization in the W4A8 setting. We further propose a data-dependent variant, OptRot$^{+}$, which improves performance by incorporating information about the activation covariance. In the W4A4 setting, both OptRot and OptRot$^{+}$ perform worse, highlighting a trade-off between weight and activation quantization.
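As a concrete illustration of the proxy objective described above, the following is a minimal sketch (not the authors' implementation) of learning an orthogonal rotation that minimizes the element-wise fourth power of the rotated weights. It assumes PyTorch; the matrix shapes, learning rate, and step count are illustrative placeholders.

```python
import torch
import torch.nn as nn
from torch.nn.utils import parametrizations

# Illustrative stand-in for a layer's weight matrix (shapes are assumptions).
torch.manual_seed(0)
d_in, d = 256, 64
W = torch.randn(d_in, d)

# Parametrize a d x d linear map so its weight stays orthogonal during training.
rot = nn.Linear(d, d, bias=False)
parametrizations.orthogonal(rot, "weight")

opt = torch.optim.Adam(rot.parameters(), lr=1e-2)
for _ in range(200):
    R = rot.weight                 # current orthogonal rotation
    loss = (W @ R).pow(4).sum()    # proxy objective: element-wise fourth power
    opt.zero_grad()
    loss.backward()
    opt.step()

# Since R is orthogonal, it can be fused into the weights (W <- W R) before
# applying a weight quantizer such as GPTQ, with the inverse rotation absorbed
# by the adjacent layer.
```

This sketch only conveys the shape of the objective; the paper's full pipeline (where the rotations are placed in the network, how they are fused, and how GPTQ is applied afterwards) is described in the sections that follow.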