We introduce Image-LoRA, a lightweight parameter-efficient fine-tuning (PEFT) recipe for transformer-based vision-language models (VLMs). Image-LoRA applies low-rank adaptation only to the value path of attention layers within the visual-token span, reducing adapter-only training FLOPs roughly in proportion to the visual-token fraction. We further adapt only a subset of attention heads, selected by head-influence scores estimated with a rank-1 Image-LoRA, and stabilize per-layer updates via selection-size normalization. Across screen-centric grounding and referring benchmarks spanning text-heavy to image-heavy regimes, Image-LoRA matches or closely approaches standard LoRA accuracy while using fewer trainable parameters and lower adapter-only training FLOPs. The method also preserves the VLM's pure-text reasoning performance from before to after fine-tuning, as further shown on GSM8K.
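Below is a minimal sketch (not the authors' implementation) of the core idea: a low-rank adapter attached only to the value projection of an attention layer and applied only to positions inside the visual-token span. Names such as `ImageLoRAValue`, `rank`, `alpha`, and `visual_mask` are illustrative assumptions; head selection and selection-size normalization are omitted.

```python
import torch
import torch.nn as nn


class ImageLoRAValue(nn.Module):
    """Frozen value projection plus a low-rank update applied only to visual tokens."""

    def __init__(self, v_proj: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.v_proj = v_proj
        for p in self.v_proj.parameters():                # keep the base weights frozen
            p.requires_grad = False
        d_in, d_out = v_proj.in_features, v_proj.out_features
        self.lora_A = nn.Linear(d_in, rank, bias=False)   # down-projection
        self.lora_B = nn.Linear(rank, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.lora_B.weight)                # update starts at zero, as in standard LoRA
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor, visual_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_in); visual_mask: (batch, seq_len) bool, True on image tokens
        out = self.v_proj(x)
        x_vis = x[visual_mask]                            # (n_visual_tokens, d_in)
        delta = self.lora_B(self.lora_A(x_vis)) * self.scaling
        # The low-rank update touches only the visual-token span, so the adapter's
        # extra compute scales with the visual-token fraction of the sequence.
        out[visual_mask] = out[visual_mask] + delta.to(out.dtype)
        return out


# Toy usage: first 16 of 32 positions are visual tokens.
layer = ImageLoRAValue(nn.Linear(1024, 1024), rank=8)
x = torch.randn(2, 32, 1024)
mask = torch.zeros(2, 32, dtype=torch.bool)
mask[:, :16] = True
values = layer(x, mask)    # shape (2, 32, 1024)
```

Because the low-rank update is computed only on the gathered visual positions, the adapter's extra FLOPs grow with the number of visual tokens rather than the full sequence length, which is the source of the FLOP reduction described above.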