Accurately retrieving semantically similar images remains a fundamental challenge in computer vision, as traditional methods often fail to capture the relational and contextual nuances of a scene. We introduce PRISm (Pruning-based Image Retrieval via Importance Prediction on Semantic Graphs), a multimodal framework that advances image-to-image retrieval through two novel components. First, the Importance Prediction Module identifies and retains the most critical objects and relational triplets within an image while pruning irrelevant elements. Second, the Edge-Aware Graph Neural Network explicitly encodes relational structure and integrates global visual features to produce semantically informed image embeddings. By modeling the semantic importance of objects and their interactions, a capability largely absent in prior approaches, PRISm achieves retrieval that closely aligns with human perception. Its architecture effectively combines relational reasoning with visual representation, enabling semantically grounded retrieval. Extensive experiments on benchmark and real-world datasets demonstrate consistently superior top-K retrieval performance, while qualitative analyses show that PRISm accurately captures key objects and interactions, producing interpretable and semantically meaningful results.
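To make the two components concrete, the sketch below illustrates importance-based triplet pruning followed by one edge-aware message-passing step, written in PyTorch. It is a minimal, hypothetical sketch: the module names, feature dimensions, top-k pruning rule, and the final fusion with a global visual vector are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of the two PRISm components described in the abstract.
# All names, dimensions, and the top-k rule are assumptions for illustration.
import torch
import torch.nn as nn

class ImportancePredictionModule(nn.Module):
    """Scores (subject, predicate, object) triplets and keeps the top-k."""
    def __init__(self, dim: int, keep_k: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        self.keep_k = keep_k

    def forward(self, subj, pred, obj):
        # subj / pred / obj: (T, dim) features for T candidate triplets.
        scores = self.scorer(torch.cat([subj, pred, obj], dim=-1)).squeeze(-1)
        k = min(self.keep_k, scores.numel())
        keep = scores.topk(k).indices  # indices of the most important triplets
        return keep, scores

class EdgeAwareGNNLayer(nn.Module):
    """One message-passing step conditioned on edge (predicate) features."""
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)  # message from (neighbor, edge) pair
        self.upd = nn.GRUCell(dim, dim)     # node update from aggregated messages

    def forward(self, nodes, edge_index, edge_feat):
        # nodes: (N, dim); edge_index: (2, E) src->dst; edge_feat: (E, dim)
        src, dst = edge_index
        m = self.msg(torch.cat([nodes[src], edge_feat], dim=-1))  # (E, dim)
        agg = torch.zeros_like(nodes).index_add_(0, dst, m)       # sum per node
        return self.upd(agg, nodes)

# Toy usage: 5 objects, 6 candidate triplets, keep the 3 most important.
dim = 64
nodes = torch.randn(5, dim)
edge_index = torch.randint(0, 5, (2, 6))
edge_feat = torch.randn(6, dim)

ipm = ImportancePredictionModule(dim, keep_k=3)
keep, _ = ipm(nodes[edge_index[0]], edge_feat, nodes[edge_index[1]])
pruned_index, pruned_feat = edge_index[:, keep], edge_feat[keep]

gnn = EdgeAwareGNNLayer(dim)
nodes = gnn(nodes, pruned_index, pruned_feat)

# Retrieval embedding: pooled graph features fused with a global visual
# vector (assumed to come from a CNN/ViT backbone).
global_visual = torch.randn(dim)
embedding = torch.cat([nodes.mean(dim=0), global_visual])  # (2*dim,)
```

Under these assumptions, retrieval reduces to nearest-neighbor search over such embeddings; the pruning step ensures that only the highest-importance objects and relations shape each image's representation.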