Versatile 3D tasks (e.g., generation and editing) that distill from Text-to-Image (T2I) diffusion models have attracted significant research interest because they do not rely on extensive 3D training data. However, T2I models suffer from a prior view bias that produces conflicting appearances across different views of an object. This bias causes subject-words to preferentially activate prior-view features during cross-attention (CA) computation, regardless of the target view condition. To overcome this limitation, we conduct a comprehensive mathematical analysis that reveals the root cause of the prior view bias in T2I models. Moreover, we find that different UNet layers exhibit different degrees of prior-view influence in CA. We therefore propose a novel framework, TD-Attn, which addresses multi-view inconsistency via two key components: (1) the 3D-Aware Attention Guidance Module (3D-AAG) constructs a view-consistent 3D attention Gaussian for subject-words to enforce spatial consistency across attention-focused regions, compensating for the limited spatial information in individual 2D per-view CA maps; (2) the Hierarchical Attention Modulation Module (HAM) uses a Semantic Guidance Tree (SGT) to direct the Semantic Response Profiler (SRP) in localizing and modulating the CA layers that are most responsive to view conditions, and the enhanced CA maps in turn support the construction of more consistent 3D attention Gaussians. Notably, HAM enables semantic-specific interventions, allowing controllable and precise 3D editing. Extensive experiments demonstrate that TD-Attn can serve as a universal plugin, significantly enhancing multi-view consistency across 3D tasks.