We introduce Long-VITA, a simple yet effective large multi-modal model for long-context visual-language understanding tasks. It concurrently processes and analyzes image, video, and text modalities over 4K frames or 1M tokens while delivering advanced performance on short-context multi-modal tasks. We propose an effective multi-modal training schema that starts from large language models and proceeds through vision-language alignment, general knowledge learning, and two sequential stages of long-sequence fine-tuning. We further implement context-parallel distributed inference and a logits-masked language modeling head to scale Long-VITA to infinitely long inputs of images and text during inference. Regarding training data, Long-VITA is built on a mix of 17M samples drawn exclusively from public datasets, yet it demonstrates state-of-the-art performance on various multi-modal benchmarks compared with recent cutting-edge models trained on internal data. Long-VITA is fully open-source and reproducible. By leveraging our inference designs, Long-VITA achieves a 2x prefill speedup and a 4x context-length extension on a single node with 8 GPUs. We hope Long-VITA can serve as a competitive baseline and offer valuable insights for the open-source community in advancing long-context multi-modal understanding.
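To make the logits-masked language modeling head concrete, below is a minimal PyTorch sketch of the general idea as we understand it: rather than projecting every hidden state onto the full vocabulary, only the positions that actually need logits (e.g., the final token during prefill) are passed through the output projection, so peak memory at the head no longer grows with sequence length. The class and argument names (`MaskedLMHead`, `logits_mask`) are illustrative assumptions, not the official Long-VITA implementation.

```python
import torch
import torch.nn as nn


class MaskedLMHead(nn.Module):
    """Illustrative logits-masked LM head: project only selected positions."""

    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, hidden: torch.Tensor, logits_mask: torch.Tensor) -> torch.Tensor:
        # hidden: [batch, seq_len, hidden_size]
        # logits_mask: [batch, seq_len] bool, True where logits are required
        selected = hidden[logits_mask]      # [num_selected, hidden_size]
        return self.proj(selected)          # [num_selected, vocab_size]


# Example: during long-context prefill, only the last position needs logits.
batch, seq_len, hidden_size, vocab_size = 1, 4096, 1024, 32000
head = MaskedLMHead(hidden_size, vocab_size)
hidden = torch.randn(batch, seq_len, hidden_size)
mask = torch.zeros(batch, seq_len, dtype=torch.bool)
mask[:, -1] = True                          # keep logits for the final token only
logits = head(hidden, mask)                 # shape [1, 32000] instead of [1, 4096, 32000]
```

The saving matters because the full logits tensor scales as sequence length times vocabulary size, which quickly dominates memory at million-token contexts; masking reduces it to the handful of positions that drive next-token prediction.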