Text-guided 3D object generation aims to generate 3D objects described by user-defined captions, which provides a flexible way to visualize what we imagine. Although several works have addressed this challenging task, they either rely on explicit 3D representations (e.g., meshes), which lack texture and require post-processing to render photo-realistic views, or require time-consuming optimization for every single case. Here, we make the first attempt to achieve generic, text-guided, cross-category 3D object generation via a new 3D-TOGO model, which integrates a text-to-views generation module and a views-to-3D generation module. The text-to-views generation module produces different views of the target 3D object given an input caption; prior-guidance, caption-guidance, and view-contrastive learning are proposed to achieve better view consistency and caption similarity. Meanwhile, a pixelNeRF model is adopted in the views-to-3D generation module to obtain an implicit 3D neural representation from the previously generated views. Our 3D-TOGO model generates 3D objects in the form of a neural radiance field with good texture and requires no time-consuming optimization for each caption. Moreover, 3D-TOGO can control the category, color, and shape of the generated 3D objects through the input caption. Extensive experiments on the largest 3D object dataset (i.e., ABO) verify that 3D-TOGO generates higher-quality 3D objects that better match the input captions across 98 different categories, in terms of PSNR, SSIM, LPIPS, and CLIP-score, compared with text-NeRF and Dreamfields.
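To make the two-stage pipeline concrete, below is a minimal, runnable toy sketch of the architecture the abstract describes: a text-to-views stage that maps a caption embedding to multiple views, followed by a pixelNeRF-style views-to-3D stage conditioned on those views. All class names, shapes, and network bodies here are illustrative assumptions (randomly initialized stand-ins), not the authors' actual implementation.

```python
# Hypothetical sketch of the 3D-TOGO two-stage pipeline; stand-in networks only.
import torch
import torch.nn as nn

class ToyTextToViews(nn.Module):
    """Stage 1 stand-in: map a caption embedding to N views of the object."""
    def __init__(self, emb_dim=64, num_views=4, hw=32):
        super().__init__()
        self.num_views, self.hw = num_views, hw
        self.decode = nn.Linear(emb_dim, num_views * 3 * hw * hw)

    def forward(self, emb):                       # emb: (emb_dim,)
        views = self.decode(emb)
        return views.view(self.num_views, 3, self.hw, self.hw)

class ToyPixelNeRF(nn.Module):
    """Stage 2 stand-in: condition a radiance field on view features."""
    def __init__(self, feat_dim=16):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, 3, padding=1)  # per-view features
        self.field = nn.Sequential(                          # (xyz + feat) -> RGB + density
            nn.Linear(3 + feat_dim, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, views, points):             # points: (P, 3) query coordinates
        feats = self.encoder(views).mean(dim=(0, 2, 3))      # crude global pooling
        cond = feats.expand(points.shape[0], -1)             # broadcast to all points
        return self.field(torch.cat([points, cond], dim=-1)) # (P, 4)

# End-to-end inference is a single forward pass per caption, with no
# per-caption optimization loop -- the property the abstract emphasizes.
emb = torch.randn(64)                  # stands in for a CLIP-style text embedding
views = ToyTextToViews()(emb)          # (4, 3, 32, 32) generated object views
rgb_sigma = ToyPixelNeRF()(views, torch.rand(1024, 3))
print(views.shape, rgb_sigma.shape)    # torch.Size([4, 3, 32, 32]) torch.Size([1024, 4])
```

The key design point the sketch mirrors is that inference is purely feed-forward: unlike per-caption optimization methods such as Dreamfields, generating a new object only requires running the trained modules once.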