Compositional zero-shot learning (CZSL) aims to learn the concepts of attributes and objects from seen compositions and to recognize their unseen compositions. Most Contrastive Language-Image Pre-training (CLIP)-based CZSL methods focus on disentangling attributes and objects by leveraging the global semantic representation obtained from the image encoder. However, this representation has limited representational capacity and does not allow complete disentanglement of the two. To this end, we propose CAMS, which extracts semantic features from visual features and performs semantic disentanglement in multidimensional spaces, thereby improving generalization over unseen attribute-object compositions. Specifically, CAMS designs a Gated Cross-Attention module that captures fine-grained semantic features from the high-level image encoding blocks of CLIP through a set of latent units, while adaptively suppressing background and other irrelevant information. Subsequently, it conducts Multi-Space Disentanglement to separate attribute and object semantics. Experiments on three popular benchmarks (MIT-States, UT-Zappos, and C-GQA) demonstrate that CAMS achieves state-of-the-art performance in both closed-world and open-world settings. The code is available at https://github.com/ybyangjing/CAMS.
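To make the Gated Cross-Attention idea concrete, below is a minimal, hypothetical PyTorch sketch of cross-attention from a set of learned latent units over CLIP patch tokens, with a sigmoid gate that down-weights irrelevant content. The module name, dimensions, gating form, and all hyperparameters here are illustrative assumptions, not the authors' implementation; consult the released code at the repository above for the actual design.

```python
# Hypothetical sketch only: latent units cross-attend to CLIP patch tokens,
# and a learned gate adaptively suppresses background / irrelevant features.
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    def __init__(self, dim: int = 768, num_latents: int = 16, num_heads: int = 8):
        super().__init__()
        # Learned latent units that query the image tokens for semantics (assumed count).
        self.latents = nn.Parameter(torch.randn(num_latents, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Elementwise gate in [0, 1] that can suppress uninformative channels.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) patch tokens from a high-level CLIP image block.
        batch = tokens.size(0)
        queries = self.latents.unsqueeze(0).expand(batch, -1, -1)  # (B, L, dim)
        attended, _ = self.attn(queries, tokens, tokens)           # latents attend to tokens
        gated = self.gate(attended) * attended                     # suppress irrelevant features
        return self.norm(gated + queries)                          # (B, L, dim) semantic features

# Usage sketch with ViT-B/16-style token shapes (196 patches, 768-dim).
x = torch.randn(2, 196, 768)
module = GatedCrossAttention()
print(module(x).shape)  # torch.Size([2, 16, 768])
```

The resulting per-latent features could then feed the Multi-Space Disentanglement stage, which separates attribute and object semantics in distinct subspaces; that stage is not sketched here since the abstract does not specify its form.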