Visual concept personalization aims to transfer only specific image attributes, such as identity, expression, lighting, or style, into unseen contexts. However, existing methods rely on holistic embeddings from general-purpose image encoders, which entangle multiple visual factors and make it difficult to isolate any single attribute, often leading to information leakage and incoherent synthesis. To address this limitation, we introduce Omni-Attribute, the first open-vocabulary image attribute encoder designed to learn high-fidelity, attribute-specific representations. Our approach jointly designs the data and the model: (i) we curate semantically linked image pairs annotated with positive and negative attributes to explicitly teach the encoder which attributes to preserve and which to suppress; and (ii) we adopt a dual-objective training paradigm that balances generative fidelity with contrastive disentanglement. The resulting embeddings prove effective for open-vocabulary attribute retrieval, personalization, and compositional generation, achieving state-of-the-art performance across multiple benchmarks.
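The abstract does not specify the exact form of the two objectives. As a hedged sketch only, assuming a diffusion-style reconstruction loss for generative fidelity and an InfoNCE-style term for contrastive disentanglement over the annotated positive and negative pairs, the combined objective could take the form

\[
\mathcal{L} \;=\; \underbrace{\mathbb{E}_{x,\epsilon,t}\!\left[\,\lVert \epsilon - \epsilon_\theta(x_t, t, e)\rVert_2^2\,\right]}_{\text{generative fidelity}}
\;+\; \lambda\,\underbrace{\left(-\log \frac{\exp\!\big(\mathrm{sim}(e, e^{+})/\tau\big)}{\exp\!\big(\mathrm{sim}(e, e^{+})/\tau\big) + \sum_{e^{-}}\exp\!\big(\mathrm{sim}(e, e^{-})/\tau\big)}\right)}_{\text{contrastive disentanglement}},
\]

where \(e\) is the attribute embedding produced by the encoder, \(e^{+}\) is the embedding of the shared (positive) attribute in a linked pair, \(e^{-}\) ranges over embeddings of the negative attributes to be suppressed, and \(\lambda\) and \(\tau\) are assumed weighting and temperature hyperparameters; the actual losses used by Omni-Attribute may differ.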