Visual attention in Visual Question Answering (VQA) aims to locate the image regions relevant to answer prediction, offering a powerful technique for promoting multi-modal understanding. However, recent studies have pointed out that the image regions highlighted by visual attention are often irrelevant to the given question and answer, confusing the model and hindering correct visual reasoning. To tackle this problem, existing methods mostly resort to aligning the visual attention weights with human attention. Nevertheless, gathering such human data is laborious and expensive, making it burdensome to adapt well-developed models across datasets. To address this issue, in this paper, we devise a novel visual attention regularization approach, namely AttReg, for better visual grounding in VQA. Specifically, AttReg first identifies the image regions that are essential for question answering yet unexpectedly ignored (i.e., assigned low attention weights) by the backbone model. A mask-guided learning scheme is then leveraged to regularize the visual attention to focus more on these ignored key regions. The proposed method is flexible and model-agnostic: it can be integrated into most visual attention-based VQA models and requires no human attention supervision. Extensive experiments on three benchmark datasets, i.e., VQA-CP v2, VQA-CP v1, and VQA v2, have been conducted to evaluate the effectiveness of AttReg. Notably, when incorporated into the strong baseline LMH, AttReg achieves a new state-of-the-art accuracy of 60.00%, an absolute performance gain of 7.01%, on the VQA-CP v2 benchmark dataset.
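To make the mask-guided regularization idea concrete, the following is a minimal PyTorch sketch of an attention-regularization loss in the spirit of AttReg. It is an illustrative assumption, not the authors' implementation: the function name `attreg_loss`, the `ignore_threshold` parameter, and the assumption that key regions are already marked by a binary `key_region_mask` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def attreg_loss(att_weights: torch.Tensor,
                key_region_mask: torch.Tensor,
                ignore_threshold: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch of an AttReg-style regularizer (not the authors' code).

    att_weights:      (batch, num_regions) visual attention from the backbone
                      VQA model, assumed to sum to 1 over regions per sample.
    key_region_mask:  (batch, num_regions) binary mask marking image regions
                      deemed essential for answering the question (how this
                      mask is built is outside this sketch).
    ignore_threshold: attention weight below which a key region counts as
                      "unexpectedly ignored".
    """
    # Key regions the backbone currently under-attends.
    ignored = key_region_mask * (att_weights < ignore_threshold).float()
    # Penalize the attention mass missing from those ignored key regions,
    # nudging the model to attend to them more strongly.
    deficit = F.relu(ignore_threshold - att_weights) * ignored
    return deficit.sum(dim=1).mean()
```

In use, such a term would simply be added to the standard VQA objective, e.g. `loss = vqa_loss + lambda_reg * attreg_loss(att, mask)`, where `lambda_reg` is a hypothetical balancing hyper-parameter; because it only touches the attention weights, the scheme stays model-agnostic as described above.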