We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute, whereas local self-attention often limits the field of interaction of each token. To address this issue, we develop the Cross-Shaped Window self-attention mechanism, which computes self-attention in horizontal and vertical stripes in parallel that together form a cross-shaped window, with each stripe obtained by splitting the input feature into stripes of equal width. We provide a mathematical analysis of the effect of the stripe width and vary the stripe width across the layers of the Transformer network, which achieves strong modeling capability while limiting the computation cost. We also introduce Locally-enhanced Positional Encoding (LePE), which handles local positional information better than existing encoding schemes. LePE naturally supports arbitrary input resolutions and is thus especially effective and friendly for downstream tasks. Incorporating these designs and a hierarchical structure, CSWin Transformer demonstrates competitive performance on common vision tasks. Specifically, it achieves 85.4% Top-1 accuracy on ImageNet-1K without any extra training data or labels, 53.9 box AP and 46.4 mask AP on the COCO detection task, and 52.2 mIoU on the ADE20K semantic segmentation task, surpassing the previous state-of-the-art Swin Transformer backbone by +1.2, +2.0, +1.4, and +2.0 respectively under similar FLOPs settings. By further pretraining on the larger dataset ImageNet-21K, we achieve 87.5% Top-1 accuracy on ImageNet-1K and high segmentation performance on ADE20K with 55.7 mIoU. The code and models are available at https://github.com/microsoft/CSWin-Transformer.
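To make the stripe-based mechanism concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the partitioning idea: channels are split in half, one half attends within horizontal stripes of width `sw` and the other within vertical stripes, so each token's effective receptive field is a cross-shaped window. The function names (`stripe_partition`, `stripe_self_attention`) and the toy single-head formulation are illustrative assumptions, not part of the released code.

```python
import numpy as np

def stripe_partition(x, sw, axis):
    # Split an (H, W, C) feature map into non-overlapping stripes of
    # width `sw`; axis=0 gives horizontal stripes, axis=1 vertical ones.
    H, W, C = x.shape
    if axis == 0:
        return x.reshape(H // sw, sw, W, C)          # (H/sw) stripes of (sw, W, C)
    return x.transpose(1, 0, 2).reshape(W // sw, sw, H, C)  # (W/sw) stripes of (sw, H, C)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def stripe_self_attention(x, sw):
    # Toy cross-shaped window attention: the first C/2 channels attend
    # within horizontal stripes, the rest within vertical stripes.
    H, W, C = x.shape
    half = C // 2
    out = np.empty_like(x)
    for axis, sl in ((0, slice(0, half)), (1, slice(half, C))):
        stripes = stripe_partition(x[..., sl], sw, axis)
        n, a, b, c = stripes.shape
        tokens = stripes.reshape(n, a * b, c)        # flatten each stripe to tokens
        attn = softmax(tokens @ tokens.transpose(0, 2, 1) / np.sqrt(c))
        res = (attn @ tokens).reshape(n, a, b, c)
        if axis == 0:
            out[..., sl] = res.reshape(H, W, c)
        else:
            out[..., sl] = res.reshape(W, H, c).transpose(1, 0, 2)
    return out
```

Because each stripe contains only `sw * H` (or `sw * W`) tokens, the attention cost grows linearly in the longer side rather than quadratically in `H * W`, which is the efficiency argument the abstract makes for varying `sw` per stage.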