Real-world data tend to be heavily imbalanced and severely skew data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a massively challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while the off-the-shelf pretrained weights of ViTs often lead to unfair comparisons. In this paper, we systematically investigate the performance of ViTs in LTR and propose LiVT to train ViTs from scratch with LT data only. Observing that ViTs suffer more severely from LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised pretraining. In addition, the Binary Cross Entropy (BCE) loss, which performs conspicuously well with ViTs, encounters predicaments in LTR. We further propose the balanced BCE (Bal-BCE) to ameliorate it, with strong theoretical grounding. Specifically, we derive the unbiased extension of the Sigmoid function and compensate with extra logit margins to deploy it. Our Bal-BCE contributes to the quick convergence of ViTs in just a few epochs. Extensive experiments demonstrate that with MGP and Bal-BCE, LiVT trains ViTs well without any additional data and significantly outperforms comparable state-of-the-art methods, e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at https://github.com/XuZhengzhuo/LiVT.
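The balanced BCE described above can be illustrated with a minimal sketch. This is not the paper's reference implementation: it assumes the per-class logit margin is the log-odds of the empirical class prior (a common form of logit adjustment), applied before the standard sigmoid BCE. Function and variable names here are hypothetical.

```python
import numpy as np

def balanced_bce_loss(logits, targets, class_counts):
    """Hedged sketch of a balanced BCE for long-tailed data.

    Shifts each class logit by a margin derived from the class prior
    (assumed here to be the log-odds of the prior), then applies the
    standard sigmoid binary cross entropy.
    logits:  (batch, num_classes) raw scores
    targets: (batch, num_classes) multi-hot labels in {0, 1}
    class_counts: (num_classes,) training-set frequency per class
    """
    prior = class_counts / class_counts.sum()          # pi_j
    margin = np.log(prior) - np.log1p(-prior)          # log(pi_j / (1 - pi_j))
    z = logits + margin                                # broadcast over the batch
    p = 1.0 / (1.0 + np.exp(-z))                       # sigmoid of adjusted logits
    eps = 1e-12                                        # numerical guard for log
    return -np.mean(targets * np.log(p + eps)
                    + (1.0 - targets) * np.log(1.0 - p + eps))
```

Intuitively, head classes (large prior) receive a positive margin, so the model must produce a larger raw logit gap in favor of tail classes to reduce the loss, counteracting the skew of the long-tailed label distribution.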