Low-rank adaptation (LoRA) has achieved remarkable success in fine-tuning pre-trained vision transformers for various downstream tasks. Existing studies mainly focus on exploring more parameter-efficient strategies or more effective representation learning schemes. However, these methods either sacrifice fine-tuning performance or introduce excessive trainable parameters, failing to balance learning performance and parameter efficiency. To address this problem, we propose a novel tuning method named collaborative low-rank adaptation (CLoRA) in this paper. CLoRA consists of two components: base-space sharing and sample-agnostic diversity enhancement (SADE). To maintain parameter efficiency while expanding the learning capacity of low-rank modules (LRMs), base-space sharing allows all LRMs to share a set of down/up-projection spaces, and the low-rank matrices obtained from these shared spaces collaboratively construct each LRM. Since the representations extracted by these matrices may contain redundant information, SADE regularizes the similarities among them to encourage diverse representations during training. We conduct extensive experiments on widely used image and point cloud datasets to evaluate CLoRA. Experimental results demonstrate that, compared with state-of-the-art methods, CLoRA strikes a better balance between learning performance and parameter efficiency while requiring the fewest GFLOPs for point cloud analysis.
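To make the two components more concrete, the following PyTorch sketch illustrates one possible reading of the abstract: a single pool of down/up-projection bases shared by every low-rank module, per-module mixing weights that combine the resulting low-rank matrices into each LRM, and a sample-agnostic penalty on the pairwise similarity of those matrices. All class names, shapes, the mixing rule, and the exact form of the penalty are illustrative assumptions, not the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedBases(nn.Module):
    """One shared pool of down-/up-projection bases used by all LRMs
    (hypothetical shapes and initialization)."""
    def __init__(self, dim: int, rank: int, num_bases: int):
        super().__init__()
        self.down = nn.Parameter(torch.randn(num_bases, dim, rank) * 0.02)
        self.up = nn.Parameter(torch.zeros(num_bases, rank, dim))  # zero init, as in standard LoRA


class CollaborativeLRM(nn.Module):
    """One low-rank module built collaboratively from the shared bases;
    only the per-module mixing weights are module-specific."""
    def __init__(self, bases: SharedBases):
        super().__init__()
        self.bases = bases
        self.mix = nn.Parameter(torch.ones(bases.down.shape[0]) / bases.down.shape[0])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each shared basis pair contributes one low-rank component;
        # the components are mixed into this module's update.
        comps = torch.stack([x @ d @ u for d, u in zip(self.bases.down, self.bases.up)])
        return (self.mix.view(-1, 1, 1, 1) * comps).sum(dim=0)


def diversity_penalty(bases: SharedBases) -> torch.Tensor:
    """Sample-agnostic diversity term (assumed form): penalize pairwise cosine
    similarity among the composed low-rank matrices so that the shared bases
    do not extract redundant features."""
    mats = torch.stack([d @ u for d, u in zip(bases.down, bases.up)])
    flat = F.normalize(mats.flatten(start_dim=1), dim=-1)
    sim = flat @ flat.T
    return (sim - torch.diag(torch.diag(sim))).abs().mean()


if __name__ == "__main__":
    bases = SharedBases(dim=768, rank=4, num_bases=8)  # one pool shared across all LRMs
    lrm = CollaborativeLRM(bases)                      # one LRM per adapted layer
    x = torch.randn(2, 16, 768)                        # (batch, tokens, dim)
    delta = lrm(x)                                     # low-rank update added to the frozen branch
    loss = delta.pow(2).mean() + 0.1 * diversity_penalty(bases)  # placeholder task loss + SADE-style term
    loss.backward()
```

Under this reading, only the shared bases and the small per-module mixing vectors are trainable, which is how sharing one base space can enlarge each LRM's capacity without a proportional growth in parameters.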