Occluded person re-identification (Re-ID) in images captured by multiple cameras is challenging because the target person is occluded by pedestrians or objects, especially in crowded scenes. In addition to the processes performed during holistic person Re-ID, occluded person Re-ID involves the removal of obstacles and the detection of partially visible body parts. Most existing methods rely on off-the-shelf pose estimation or human parsing networks to generate pseudo labels, which are prone to error. To address these issues, we propose a novel Occlusion Correction Network (OCNet) that corrects features through relational-weight learning and obtains diverse and representative features without using external networks. In addition, we present the simple concept of a center feature to provide an intuitive solution to pedestrian occlusion scenarios. Furthermore, we propose a Separation Loss (SL) that encourages global features and part features to focus on different parts. We conduct extensive experiments on five challenging benchmark datasets for occluded and holistic Re-ID tasks, demonstrating that our method outperforms state-of-the-art methods, especially on occluded scenes.
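The abstract does not give the exact formulation of Separation Loss. As a minimal sketch of one plausible instantiation, assuming SL penalizes the (squared) cosine similarity between the global feature and each part feature so that parts are pushed toward information the global feature does not already encode (the function names and this formulation are illustrative, not the paper's definition):

```python
import math


def cosine(u, v):
    """Cosine similarity between two feature vectors (plain Python lists)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def separation_loss(global_feat, part_feats):
    """Hypothetical Separation Loss sketch: mean squared cosine similarity
    between the global feature and each part feature. Minimizing it
    encourages each part feature to attend to regions that differ from
    what the global feature captures."""
    sims = [cosine(global_feat, p) ** 2 for p in part_feats]
    return sum(sims) / len(sims)
```

With orthogonal global and part features the loss is 0; with identical directions it is 1, so gradient descent on this term drives the features apart.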