While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable the self-adaptive and long-range correlations of self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely the Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms state-of-the-art vision transformers and convolutional neural networks by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, and instance segmentation. Code is available at https://github.com/Visual-Attention-Network.
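To make the LKA idea concrete, the following is a minimal PyTorch sketch of a large-kernel-attention-style module. It follows the decomposition used in the public VAN implementation (a depth-wise convolution, a depth-wise dilated convolution, and a pointwise convolution whose output gates the input element-wise); exact kernel sizes and layer names here are illustrative, not a verbatim copy of the released code.

```python
import torch
import torch.nn as nn


class LKA(nn.Module):
    """Large-kernel-attention-style module (illustrative sketch).

    A large receptive field is decomposed into:
      1) a 5x5 depth-wise conv (local spatial context),
      2) a 7x7 depth-wise conv with dilation 3 (long-range spatial context),
      3) a 1x1 conv (channel mixing, giving channel adaptability).
    The result is used as an attention map that multiplies the input,
    so each position and channel is reweighted adaptively.
    """

    def __init__(self, dim: int):
        super().__init__()
        # Depth-wise 5x5: captures local structure at linear cost.
        self.conv_local = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)
        # Depth-wise dilated 7x7 (dilation 3): effective 19x19 receptive field.
        self.conv_spatial = nn.Conv2d(dim, dim, 7, padding=9,
                                      groups=dim, dilation=3)
        # Pointwise 1x1: mixes channels to produce the attention map.
        self.conv_channel = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.conv_channel(self.conv_spatial(self.conv_local(x)))
        return x * attn  # element-wise gating: spatial + channel adaptivity


# Usage: shape is preserved, so the module drops into any conv backbone.
lka = LKA(dim=32)
out = lka(torch.randn(2, 32, 56, 56))
```

Because every layer is a convolution, the cost grows linearly with the number of pixels, avoiding the quadratic complexity of standard self-attention on high-resolution inputs.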