Deep metric learning aims to learn a deep embedding that can capture the semantic similarity of data points. Given the availability of massive training samples, deep metric learning is known to suffer from slow convergence due to a large fraction of trivial samples. Therefore, most existing methods resort to sample mining strategies that select nontrivial samples to accelerate convergence and improve performance. In this work, we identify two critical limitations of sample mining methods and provide solutions for both. First, previous mining methods assign a binary score to each sample, i.e., drop or keep it, so they select only a subset of relevant samples in a mini-batch. We therefore propose a novel sample mining method, called Online Soft Mining (OSM), which assigns a continuous score to each sample so as to make use of all samples in the mini-batch. OSM learns extended manifolds that preserve useful intraclass variance by focusing on more similar positives. Second, existing methods are easily influenced by outliers, as these are generally included in the mined subset. To address this, we introduce Class-Aware Attention (CAA), which assigns little attention to abnormal data samples. Furthermore, by combining OSM and CAA, we propose a novel weighted contrastive loss for learning discriminative embeddings. Extensive experiments on two fine-grained visual categorisation datasets and two video-based person re-identification benchmarks show that our method significantly outperforms the state-of-the-art.
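The core idea above — replacing a hard keep/drop decision with a continuous per-sample score that weights a contrastive loss — can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the function name, the margin value, and the way the OSM and CAA scores are combined into a single `weights` vector are all assumptions made here for clarity.

```python
import numpy as np

def weighted_contrastive_loss(emb_a, emb_b, labels, weights, margin=0.5):
    """Contrastive loss where every pair carries a continuous weight.

    emb_a, emb_b : (N, D) arrays of L2-normalised embeddings forming N pairs.
    labels       : (N,) array, 1.0 for positive pairs, 0.0 for negative pairs.
    weights      : (N,) continuous per-pair scores in [0, 1] -- conceptually
                   a soft-mining score times a class-aware attention score
                   (hypothetical combination for this sketch).
    margin       : distance margin for negative pairs (illustrative value).
    """
    d = np.linalg.norm(emb_a - emb_b, axis=1)            # pairwise Euclidean distances
    pos = labels * d ** 2                                # pull positive pairs together
    neg = (1.0 - labels) * np.maximum(margin - d, 0) ** 2  # push negatives past the margin
    per_pair = weights * (pos + neg)                     # soft weighting, not hard selection
    return per_pair.sum() / max(weights.sum(), 1e-12)    # weighted mean over the batch
```

Because the weight is continuous, a trivial or abnormal pair is not discarded outright; it simply contributes little to the gradient, while the normalisation by the weight sum keeps the loss scale comparable across batches.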