Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify this effect, we conduct a series of controlled experiments comparing character-aware and character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Transferring these learnings to the visual domain, we train a suite of image generation models and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a new state of the art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.