Conditional image synthesis aims to create an image according to multi-modal guidance in the form of textual descriptions, reference images, and image blocks to preserve, as well as their combinations. In this paper, instead of investigating these control signals separately, we propose a new two-stage architecture, M6-UFC, that unifies any number of multi-modal controls. In M6-UFC, both the diverse control signals and the synthesized image are uniformly represented as sequences of discrete tokens to be processed by a Transformer. Unlike existing two-stage autoregressive approaches such as DALL-E and VQGAN, M6-UFC adopts non-autoregressive (NAR) generation at the second stage to enhance the holistic consistency of the synthesized image, to support preserving specified image blocks, and to improve synthesis speed. Further, we design a progressive algorithm that iteratively refines the non-autoregressively generated image, aided by two estimators that assess the compliance with the controls and the fidelity of the synthesized image, respectively. Extensive experiments on a newly collected large-scale clothing dataset, M2C-Fashion, and a facial dataset, Multi-Modal CelebA-HQ, verify that M6-UFC can synthesize high-fidelity images that comply with flexible multi-modal controls.
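The abstract only outlines the progressive NAR decoding scheme. Below is a minimal PyTorch sketch of one plausible reading: a mask-predict-style refinement loop whose candidates are reranked by two estimators. All names here (ToyTransformer, relevance_score, fidelity_score, the linear re-masking schedule) are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
import torch

# Hypothetical stand-in for the second-stage Transformer: returns random
# logits over the discrete codebook plus one extra [MASK] symbol.
class ToyTransformer:
    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size

    def __call__(self, controls: torch.Tensor, img_tokens: torch.Tensor) -> torch.Tensor:
        batch, seq_len = img_tokens.shape
        return torch.randn(batch, seq_len, self.vocab_size)

def progressive_nar_decode(model, controls, seq_len, mask_id, num_iters=8):
    """Mask-predict-style refinement: predict all tokens in parallel, keep
    the most confident ones, and re-mask the rest for the next pass.
    Tokens of user-specified image blocks to preserve would simply be fixed
    and never re-masked; that case is omitted here for brevity."""
    img = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for t in range(num_iters):
        logits = model(controls, img)               # (1, seq_len, vocab)
        logits[..., mask_id] = float("-inf")        # never predict [MASK] itself
        conf, pred = logits.softmax(-1).max(-1)     # per-token confidence
        img = pred.clone()
        # Assumed schedule: linearly shrink the number of re-masked tokens.
        n_mask = int(seq_len * (1 - (t + 1) / num_iters))
        if n_mask > 0:
            low = conf[0].topk(n_mask, largest=False).indices
            img[0, low] = mask_id
    return img

# Hypothetical estimators: one scores compliance with the controls, the
# other scores image fidelity. Random scores here, purely for illustration.
def relevance_score(controls, img_tokens):
    return torch.rand(()).item()

def fidelity_score(img_tokens):
    return torch.rand(()).item()

if __name__ == "__main__":
    vocab, seq_len = 1024, 256
    mask_id = vocab                                 # [MASK] sits past the codebook
    model = ToyTransformer(vocab + 1)
    controls = torch.randint(0, vocab, (1, 32))     # tokenized multi-modal controls
    # Decode several candidates and rerank with the two estimators.
    candidates = [progressive_nar_decode(model, controls, seq_len, mask_id)
                  for _ in range(4)]
    best = max(candidates,
               key=lambda c: relevance_score(controls, c) + fidelity_score(c))
```

Under this reading, the speed and consistency claims follow naturally: every position attends to the whole canvas on each pass, so local regions stay mutually consistent, and decoding takes a handful of parallel passes rather than one sequential step per token.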