Vehicle re-identification (reID) aims to identify a target vehicle across different cameras with non-overlapping views. When a well-trained model is deployed directly on a new dataset, performance drops severely because of differences among datasets, known as domain bias. To address this problem, this paper proposes a domain adaptation framework that contains an image-to-image translation network named vehicle transfer generative adversarial network (VTGAN) and an attention-based feature learning network (ATTNet). VTGAN makes images from the source domain (well-labeled) adopt the style of the target domain (unlabeled) while preserving the identity information of the source domain. To further improve domain adaptation across varied backgrounds, ATTNet is proposed to train on the generated images with an attention structure for vehicle reID. Comprehensive experimental results clearly demonstrate that our method achieves excellent performance on the VehicleID dataset.
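To give a rough intuition for attention-based feature learning of the kind ATTNet uses, the sketch below applies a learned spatial attention mask to a CNN feature map and pools the re-weighted features into a descriptor. This is a generic, minimal illustration, not the paper's ATTNet architecture; the function name, the 1x1-conv weights, and the toy tensor shapes are all hypothetical.

```python
import numpy as np

def spatial_attention_pool(feat, w_att):
    """Pool a CNN feature map under a learned spatial attention mask.

    feat  : (C, H, W) feature map from a backbone network
    w_att : (C,) weights of a hypothetical 1x1 conv that produces
            a single-channel attention logit map
    """
    # 1x1 convolution collapses channels into an (H, W) logit map
    logits = np.tensordot(w_att, feat, axes=([0], [0]))
    # Softmax over all spatial positions so the mask sums to 1
    mask = np.exp(logits - logits.max())
    mask /= mask.sum()
    # Re-weight features by the mask and sum over space -> (C,) descriptor
    return (feat * mask[None]).sum(axis=(1, 2))

# Toy example: 4 channels on a 3x3 spatial grid
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 3, 3))
desc = spatial_attention_pool(feat, rng.standard_normal(4))
print(desc.shape)  # (4,)
```

In a reID pipeline, a descriptor like `desc` would be compared across cameras (e.g. by cosine distance); the attention mask lets the network down-weight background regions, which is the motivation the abstract gives for ATTNet's design.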