Recent large-scale image generation models such as Stable Diffusion have exhibited an impressive ability to generate fairly realistic images from a very simple text prompt. Could such models render real images obsolete for training image prediction models? In this paper, we answer part of this provocative question by examining the need for real images when training models for ImageNet classification. More precisely, provided only with the class names used to build the dataset, we explore the ability of Stable Diffusion to generate synthetic clones of ImageNet and measure how useful they are for training classification models from scratch. We show that with minimal and class-agnostic prompt engineering, these ImageNet clones, which we denote ImageNet-SD, are able to close a large part of the gap between models trained on synthetic images and models trained on real images, across the several standard classification benchmarks that we consider in this study. More importantly, we show that models trained on synthetic images exhibit strong generalization properties and perform on par with models trained on real data.