Generalizable person re-identification (Re-ID) aims to recognize individuals across unseen cameras and environments. While existing methods rely heavily on limited labeled multi-camera data, we propose DynaMix, a novel method that effectively combines manually labeled multi-camera and large-scale pseudo-labeled single-camera data. Unlike prior works, DynaMix dynamically adapts to the structure and noise of the training data through three core components: (1) a Relabeling Module that refines pseudo-labels of single-camera identities on-the-fly; (2) an Efficient Centroids Module that maintains robust identity representations under a large identity space; and (3) a Data Sampling Module that carefully composes mixed data mini-batches to balance learning complexity and intra-batch diversity. All components are specifically designed to operate efficiently at scale, enabling effective training on millions of images and hundreds of thousands of identities. Extensive experiments demonstrate that DynaMix consistently outperforms state-of-the-art methods in generalizable person Re-ID.
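The abstract does not specify how the Efficient Centroids Module is implemented. Purely as a minimal sketch of the general idea of maintaining per-identity representations over a very large identity space, the code below keeps one L2-normalized centroid per identity and refreshes it with an exponential moving average (EMA) of batch features. The class name `CentroidBank`, the momentum value, and the EMA update rule are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn.functional as F


class CentroidBank:
    """Hypothetical centroid memory: one L2-normalized centroid per identity,
    updated with an exponential moving average (EMA) of incoming batch features.
    This is a sketch of the general technique, not DynaMix's actual module."""

    def __init__(self, num_ids: int, feat_dim: int, momentum: float = 0.9):
        self.momentum = momentum
        # One row per identity; random initialization, normalized to unit length.
        self.centroids = F.normalize(torch.randn(num_ids, feat_dim), dim=1)

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        """feats: (B, D) batch features; labels: (B,) identity indices."""
        feats = F.normalize(feats, dim=1)
        for pid in labels.unique():
            # Mean feature of this identity within the current batch.
            mean_feat = feats[labels == pid].mean(dim=0)
            # EMA blend of the stored centroid with the fresh batch statistic.
            blended = self.momentum * self.centroids[pid] + (1.0 - self.momentum) * mean_feat
            self.centroids[pid] = F.normalize(blended, dim=0)

    def similarity(self, feats: torch.Tensor) -> torch.Tensor:
        """Cosine similarity of batch features to all identity centroids: (B, num_ids)."""
        return F.normalize(feats, dim=1) @ self.centroids.t()


# Toy usage with small, made-up sizes.
bank = CentroidBank(num_ids=10_000, feat_dim=512)
feats = torch.randn(64, 512)
labels = torch.randint(0, 10_000, (64,))
bank.update(feats, labels)
logits = bank.similarity(feats)  # (64, 10000)
```

One reason such a memory-bank design is plausible at the scale described in the abstract: the per-batch update cost grows with the batch size rather than with the total number of identities, so the same mechanism remains tractable with hundreds of thousands of identities.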