Off-the-shelf convolutional neural network features achieve outstanding results in many image retrieval tasks. However, their invariance is pre-defined by the network architecture and training data. Existing image retrieval approaches require fine-tuning or modification of the pre-trained networks to adapt to the variations in the target data. In contrast, our method enhances the invariance of off-the-shelf features by aggregating features extracted from images augmented with learned test-time augmentations. The optimal ensemble of test-time augmentations is learned automatically through reinforcement learning. Our training is time- and resource-efficient, and learns a diverse set of test-time augmentations. Experimental results on trademark retrieval (METU trademark dataset) and landmark retrieval (Oxford5k and Paris6k scene datasets) tasks show that the learned ensemble of transformations is effective and transferable. We also achieve state-of-the-art MAP@100 results on the METU trademark dataset.
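The core aggregation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the augmentation list, the placeholder feature extractor, and the mean-pooling choice are all assumptions for demonstration; in the paper the augmentation ensemble is selected by reinforcement learning rather than fixed by hand.

```python
import numpy as np

def aggregate_tta_features(image, augmentations, extract_features):
    """Average features extracted from an image and its augmented copies,
    then L2-normalize the result for retrieval by cosine similarity."""
    feats = [extract_features(aug(image)) for aug in augmentations]
    agg = np.mean(feats, axis=0)
    return agg / (np.linalg.norm(agg) + 1e-12)

# Toy stand-ins for illustration only: a random "image", three simple
# geometric augmentations, and flattening as a placeholder "CNN feature".
rng = np.random.default_rng(0)
image = rng.random((8, 8))
augmentations = [
    lambda x: x,             # identity
    lambda x: np.fliplr(x),  # horizontal flip
    lambda x: np.rot90(x),   # 90-degree rotation
]
extract = lambda x: x.ravel()

descriptor = aggregate_tta_features(image, augmentations, extract)
print(descriptor.shape)  # (64,)
```

The same descriptor dimensionality is kept regardless of how many augmentations are ensembled, so the aggregated feature can be compared directly against an unaugmented gallery.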