Convolutional Neural Networks (CNNs) generate feature representations of complex objects by gathering hierarchical sub-features corresponding to different semantic parts. These sub-features are usually distributed in groups within the feature vector of each layer, representing various semantic entities. However, the activation of these sub-features is often spatially contaminated by similar patterns and noisy backgrounds, resulting in erroneous localization and identification. We propose a Spatial Group-wise Enhance (SGE) module that adjusts the importance of each sub-feature by generating an attention factor for each spatial location in each semantic group, so that every individual group can autonomously enhance its learned representation and suppress possible noise. The attention factors are guided only by the similarities between the global and local feature descriptors inside each group, so the design of the SGE module is extremely lightweight, with \emph{almost no extra parameters or computation}. Despite being trained with only category supervision, the SGE module is highly effective at highlighting multiple active regions with varied high-order semantics (such as a dog's eyes, nose, etc.). When integrated with popular CNN backbones, SGE significantly boosts the performance of image recognition tasks. Specifically, based on the ResNet50 backbone, SGE achieves a 1.2\% Top-1 accuracy improvement on the ImageNet benchmark and a 1.0$\sim$2.0\% AP gain on the COCO benchmark across a wide range of detectors (Faster/Mask/Cascade RCNN and RetinaNet). Code and pretrained models are available at https://github.com/implus/PytorchInsight.
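The core mechanism described above — scoring each spatial location in a group by its similarity to the group's global descriptor, normalizing the scores, and gating the features through a sigmoid — can be sketched as follows. This is a minimal NumPy illustration under our reading of the abstract, not the authors' implementation (which lives in the linked repository); in particular, the learnable per-group scale and bias applied before the sigmoid in the full module are omitted here, and the group count is an arbitrary example value.

```python
import numpy as np

def sge(x, groups=8, eps=1e-5):
    """Illustrative Spatial Group-wise Enhance (SGE) forward pass.

    x: feature map of shape (batch, channels, height, width),
       with channels divisible by `groups`.
    Returns a tensor of the same shape where each group's features
    are reweighted per spatial location by their similarity to the
    group's global (average-pooled) descriptor.
    """
    b, c, h, w = x.shape
    assert c % groups == 0, "channels must be divisible by groups"
    xg = x.reshape(b, groups, c // groups, h, w)

    # Global descriptor per group: spatial average pooling.
    g = xg.mean(axis=(3, 4), keepdims=True)

    # Similarity of each local feature to the global descriptor
    # (dot product over the group's channel dimension).
    sim = (xg * g).sum(axis=2)                       # (b, groups, h, w)

    # Normalize similarities over spatial positions within each group.
    mean = sim.mean(axis=(2, 3), keepdims=True)
    std = sim.std(axis=(2, 3), keepdims=True)
    t = (sim - mean) / (std + eps)

    # Sigmoid gate in (0, 1), then rescale the original group features.
    gate = 1.0 / (1.0 + np.exp(-t))
    out = xg * gate[:, :, None, :, :]
    return out.reshape(b, c, h, w)
```

Because the gate is strictly between 0 and 1, the module only attenuates features (locations dissimilar to the group's global semantic are suppressed), which is why it adds essentially no parameters beyond the omitted per-group scale and bias.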