Hashing, which projects data into compact binary codes, has shown remarkable promise in cross-modal retrieval due to its low storage cost and high query speed. Despite their empirical success in some scenarios, existing cross-modal hashing methods usually fail to bridge the modality gap when fully-paired data with abundant label information is unavailable. To circumvent this drawback, motivated by the divide-and-conquer strategy, we propose Deep Manifold Hashing (DMH), a novel method that divides the problem of semi-paired unsupervised cross-modal retrieval into three sub-problems and builds one simple yet efficient model for each sub-problem. Specifically, the first model is constructed to obtain modality-invariant features by completing semi-paired data via manifold learning, whereas the second and third models aim to learn hash codes and hash functions, respectively. Extensive experiments on three benchmarks demonstrate the superiority of our DMH over state-of-the-art fully-paired and semi-paired unsupervised cross-modal hashing methods.
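To make the retrieval setting concrete, the sketch below illustrates the generic hashing pipeline the abstract refers to: features from two modalities are binarized into short codes, and a cross-modal query is answered by ranking database codes by Hamming distance. This is a minimal illustration only; the features, code length, and sign-based binarization are assumptions for demonstration, not DMH's actual learned hash functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-learned features for two modalities (e.g. image database,
# text query), assumed already mapped into a shared d-dimensional space.
d = 32                                   # hash code length in bits (assumed)
img_feats = rng.normal(size=(100, d))    # database side: image modality
txt_query = rng.normal(size=(1, d))      # query side: text modality

def to_codes(x):
    # Sign-based binarization into {0, 1} codes (a common, simple choice).
    return (x > 0).astype(np.uint8)

db_codes = to_codes(img_feats)
q_code = to_codes(txt_query)

# Hamming distance via XOR: cheap bitwise comparison is what gives
# hashing-based retrieval its low storage cost and high query speed.
hamming = np.count_nonzero(db_codes ^ q_code, axis=1)
ranking = np.argsort(hamming)            # nearest binary codes first
print("top-5 indices:", ranking[:5])
```

In practice the binarization would be the output of the learned hash functions, and the codes would be bit-packed so that distances are computed with XOR and popcount over machine words.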