
Title: Diverse Image Generation via Self-Conditioned GANs

Abstract:

This paper presents a simple but effective unsupervised method for generating realistic and diverse images: a class-conditional GAN is trained without manually annotated class labels. Instead, the model is conditioned on labels obtained automatically by clustering in the discriminator's feature space. The clustering step automatically discovers diverse modes and explicitly requires the generator to cover them. Experiments on standard mode collapse benchmarks show that the method outperforms several competing approaches at addressing mode collapse. The method also performs well on large-scale datasets such as ImageNet and Places365, improving both image diversity and standard quality metrics compared with previous methods.
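A minimal sketch of the self-conditioning idea described above (not the authors' released code): pseudo-labels come from k-means clustering of the discriminator's features, and both networks are conditioned on those cluster ids. The network sizes, the per-cluster discriminator heads, and the re-clustering schedule are assumptions made here for illustration.

```python
# Sketch of self-conditioning: cluster discriminator features into pseudo-labels,
# then train a class-conditional GAN on those labels. Sizes are hypothetical.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

NUM_CLUSTERS, Z_DIM, X_DIM = 10, 64, 784  # assumed sizes for illustration

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLUSTERS, Z_DIM)          # cluster-conditional input
        self.net = nn.Sequential(nn.Linear(2 * Z_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, X_DIM), nn.Tanh())

    def forward(self, z, c):
        return self.net(torch.cat([z, self.embed(c)], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(X_DIM, 256), nn.ReLU())
        self.heads = nn.Linear(256, NUM_CLUSTERS)               # one real/fake head per cluster

    def forward(self, x, c):
        h = self.features(x)
        return self.heads(h).gather(1, c.unsqueeze(1)).squeeze(1)  # score for the given cluster

def assign_pseudo_labels(D, real_batch):
    """Cluster discriminator features of real images to obtain conditioning labels."""
    with torch.no_grad():
        feats = D.features(real_batch).cpu().numpy()
    return torch.from_numpy(
        KMeans(n_clusters=NUM_CLUSTERS, n_init=10).fit_predict(feats)).long()

# Usage: derive pseudo-labels from real data, then condition both networks on them.
D, G = Discriminator(), Generator()
real = torch.randn(16, X_DIM)                        # stand-in for a batch of real images
c = assign_pseudo_labels(D, real)
fake_scores = D(G(torch.randn(16, Z_DIM), c), c)     # per-cluster real/fake scores
```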


Latest Papers

As a new approach to train generative models, generative adversarial networks (GANs) have achieved considerable success in image generation. This framework has also recently been applied to data with graph structures. We propose labeled-graph generative adversarial networks (LGGAN) to train deep generative models for graph-structured data with node labels. We test the approach on various types of graph datasets, such as collections of citation networks and protein graphs. Experiment results show that our model can generate diverse labeled graphs that match the structural characteristics of the training data and outperforms all alternative approaches in quality and generality. To further evaluate the quality of the generated graphs, we use them in a downstream graph classification task, and the results show that LGGAN can faithfully capture the important aspects of the graph structure.
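The abstract does not specify how LGGAN represents its outputs; a common choice for labeled-graph generators is an adjacency matrix together with per-node label logits. The sketch below illustrates only that assumed output representation, not the LGGAN architecture from the paper; all sizes and layer choices are hypothetical.

```python
# Illustrative sketch of a labeled-graph generator output: an adjacency matrix
# plus per-node label logits. This is an assumed representation, not LGGAN itself.
import torch
import torch.nn as nn

MAX_NODES, NUM_NODE_LABELS, Z_DIM = 20, 5, 32  # hypothetical sizes

class LabeledGraphGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(Z_DIM, 128), nn.ReLU())
        self.adj_head = nn.Linear(128, MAX_NODES * MAX_NODES)          # edge probabilities
        self.label_head = nn.Linear(128, MAX_NODES * NUM_NODE_LABELS)  # node-label logits

    def forward(self, z):
        h = self.backbone(z)
        adj = torch.sigmoid(self.adj_head(h)).view(-1, MAX_NODES, MAX_NODES)
        adj = (adj + adj.transpose(1, 2)) / 2        # symmetrize for undirected graphs
        labels = self.label_head(h).view(-1, MAX_NODES, NUM_NODE_LABELS)
        return adj, labels

# Usage: sample a batch of labeled graphs from noise.
g = LabeledGraphGenerator()
adj, label_logits = g(torch.randn(4, Z_DIM))
node_labels = label_logits.argmax(dim=-1)            # discrete label per node
```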
