Person Search jointly solves the problems of Person Detection and Person Re-identification (Re-ID), in which the target person must be located in a large number of uncropped scene images. Over the past few years, Person Search based on deep learning has made great progress. Visual attributes of pedestrians play a key role in retrieving the query person; they have been explored in Re-ID but ignored in Person Search. Therefore, we introduce attribute learning into the model, allowing attribute features to be used for the retrieval task. Specifically, we propose a simple and effective model called Multi-Attribute Enhancement (MAE), which introduces attribute tags to learn local features. In addition to learning the global representation of pedestrians, it also learns local representations, and combines the two to obtain robust features that improve search performance. We verify the effectiveness of our module on the existing benchmark datasets, CUHK-SYSU and PRW. Our model achieves state-of-the-art results among end-to-end methods, reaching 91.8% mAP and 93.0% rank-1 accuracy on CUHK-SYSU. Codes and models are available at https://github.com/chenlq123/MAE.
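To make the global-plus-local design concrete, the following is a minimal PyTorch sketch of how a retrieval head could fuse a global person embedding with attribute-supervised local embeddings. It is illustrative only: the module name, feature dimensions, number of attributes, assumption of binary attribute tags, and concatenation-based fusion are assumptions, not details taken from the released MAE code.

```python
import torch
import torch.nn as nn


class AttributeEnhancedHead(nn.Module):
    """Sketch (hypothetical): combine a global embedding with per-attribute
    local embeddings, each supervised by an attribute classifier."""

    def __init__(self, feat_dim=2048, embed_dim=256, num_attributes=6):
        super().__init__()
        # Global branch: one embedding for the whole detected person box.
        self.global_proj = nn.Linear(feat_dim, embed_dim)
        # Local branch: one embedding and one classifier per attribute tag.
        self.attr_projs = nn.ModuleList(
            nn.Linear(feat_dim, embed_dim) for _ in range(num_attributes)
        )
        self.attr_classifiers = nn.ModuleList(
            nn.Linear(embed_dim, 2) for _ in range(num_attributes)  # binary tags assumed
        )

    def forward(self, roi_feat):
        # roi_feat: (N, feat_dim) pooled feature of each detected person.
        global_emb = self.global_proj(roi_feat)
        attr_embs, attr_logits = [], []
        for proj, cls in zip(self.attr_projs, self.attr_classifiers):
            emb = proj(roi_feat)
            attr_embs.append(emb)
            attr_logits.append(cls(emb))  # trained against attribute labels
        # Retrieval feature: global and attribute-local parts concatenated.
        retrieval_feat = torch.cat([global_emb] + attr_embs, dim=1)
        return retrieval_feat, attr_logits


if __name__ == "__main__":
    head = AttributeEnhancedHead()
    feats = torch.randn(4, 2048)          # four detected persons
    retrieval_feat, attr_logits = head(feats)
    print(retrieval_feat.shape)            # (4, 256 * 7)
```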