We launch EVA-02, a next-generation Transformer-based visual representation pre-trained to reconstruct strong and robust language-aligned vision features via masked image modeling. With an updated plain Transformer architecture as well as extensive pre-training from an open & accessible giant CLIP vision encoder, EVA-02 demonstrates superior performance compared to prior state-of-the-art approaches across various representative vision tasks, while using significantly fewer parameters and a smaller compute budget. Notably, using exclusively publicly accessible training data, EVA-02 with only 304M parameters achieves a phenomenal 90.0 fine-tuning top-1 accuracy on the ImageNet-1K val set. Additionally, our EVA-02-CLIP reaches up to 80.4 zero-shot top-1 accuracy on ImageNet-1K, outperforming the previous largest & best open-sourced CLIP with only ~1/6 of the parameters and ~1/6 of the image-text training data. We offer four EVA-02 variants in various model sizes, ranging from 6M to 304M parameters, all with impressive performance. To facilitate open access and open research, we release the complete suite of EVA-02 to the community at https://github.com/baaivision/EVA/tree/master/EVA-02.
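To make the pre-training objective above concrete, the following is a minimal sketch of masked image modeling against a frozen CLIP vision encoder: a student ViT sees patch tokens with a subset replaced by a learned mask token, and its predictions at the masked positions are regressed onto the teacher's patch features. All names here (`MIMPretrainer`, `student`, `teacher`, the negative-cosine loss) are illustrative assumptions, not the actual EVA-02 implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MIMPretrainer(nn.Module):
    """Sketch: MIM with language-aligned (CLIP) feature targets."""

    def __init__(self, student: nn.Module, teacher: nn.Module,
                 dim: int, teacher_dim: int):
        super().__init__()
        self.student = student          # plain ViT being pre-trained
        self.teacher = teacher.eval()   # frozen CLIP vision encoder (target provider)
        for p in self.teacher.parameters():
            p.requires_grad = False
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, teacher_dim)  # project to teacher feature dim

    def forward(self, images: torch.Tensor, patch_tokens: torch.Tensor,
                mask: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) embedded patches; mask: (B, N) bool,
        # True where a patch is hidden from the student.
        x = torch.where(mask.unsqueeze(-1),
                        self.mask_token.expand_as(patch_tokens),
                        patch_tokens)
        pred = self.head(self.student(x))      # (B, N, teacher_dim)
        with torch.no_grad():
            target = self.teacher(images)      # (B, N, teacher_dim) patch features
        # Regress masked positions onto CLIP features (negative cosine similarity,
        # an assumed choice for this sketch).
        pred = F.normalize(pred[mask], dim=-1)
        target = F.normalize(target[mask], dim=-1)
        return 1.0 - (pred * target).sum(dim=-1).mean()
```

Keeping the teacher frozen means the reconstruction targets stay language-aligned throughout pre-training, which is what lets the student inherit CLIP-like semantics at a fraction of the contrastive training cost.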