Text-to-Image (T2I) models are capable of generating high-quality artistic creations and visual content. However, existing research and evaluation standards predominantly focus on image realism and shallow text-image alignment, lacking a comprehensive assessment of complex semantic understanding and world knowledge integration in text-to-image generation. To address this challenge, we propose \textbf{WISE}, the first benchmark specifically designed for \textbf{W}orld Knowledge-\textbf{I}nformed \textbf{S}emantic \textbf{E}valuation. WISE moves beyond simple word-pixel mapping by challenging models with 1,000 meticulously crafted prompts spanning 25 subdomains across cultural common sense, spatio-temporal reasoning, and natural science. To overcome the limitations of the traditional CLIP metric, we introduce \textbf{WiScore}, a novel quantitative metric for assessing knowledge-image alignment. Through comprehensive testing of 20 models (10 dedicated T2I models and 10 unified multimodal models) on these prompts, our findings reveal significant limitations in their ability to effectively integrate and apply world knowledge during image generation, highlighting critical pathways for enhancing knowledge incorporation and application in next-generation T2I models. Code and data are available at \href{https://github.com/PKU-YuanGroup/WISE}{PKU-YuanGroup/WISE}.