Generative Adversarial Networks (GANs) are a class of neural networks that sample from complex probability distributions, for example to generate images, text, or speech, by training a discriminator and a generator in alternation so that the two compete against each other. GANs were first proposed by Ian Goodfellow; see the original paper, Generative Adversarial Networks.
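
The original paper casts this competition as a two-player minimax game over the value function below (this equation is from the 2014 paper). The code that follows is a minimal illustrative sketch of the alternating training loop in PyTorch on toy 1-D Gaussian data; every layer size, learning rate, and step count in it is an assumption made for the example, not a setting from any work cited on this page.

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z(z)}\left[\log\bigl(1 - D(G(z))\bigr)\right]
```

```python
import torch
import torch.nn as nn

# Toy networks; sizes are illustrative assumptions, not values from any cited paper.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator: noise -> sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "data" distribution: N(3, 0.5^2)
    z = torch.randn(64, 8)                  # latent noise

    # 1) Discriminator step: push D(real) toward 1 and D(G(z)) toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(G(z).detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Generator step: push D(G(z)) toward 1 (the non-saturating variant
    #    of the generator loss, also proposed in the original paper).
    opt_g.zero_grad()
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()
```

In practice the generator step above maximizes log D(G(z)) rather than minimizing log(1 − D(G(z))); this non-saturating form gives stronger gradients early in training.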

Knowledge Collection

Generative Adversarial Networks (GAN): A Zhuanzhi Curated Collection

I. Theory

  1. Tips for training GANs (Improved Techniques for Training GANs)
     Reference: [http://papers.nips.cc/paper/6124-improved-techniques-for-training-gans.pdf]
  2. Energy-Based GANs and related work by Yann LeCun
     Reference: [https://arxiv.org/pdf/1609.03126.pdf]
  3. Mode Regularized GANs
     Reference: [https://arxiv.org/pdf/1612.02136.pdf]
  4. A NIPS 2016 summary of tricks for training GAN models
     Reference: [https://github.com/soumith/ganhacks]
  5. The GAN Zoo: GAN variants of every description collected in one place, already numbering close to a hundred
     Reference: [https://github.com/hindupuravinash/the-gan-zoo]

II. Surveys

  1. A Chinese-language survey from the Institute of Automation, Chinese Academy of Sciences: "Research Progress and Prospects of Generative Adversarial Networks (GAN)"
     Reference: [https://pan.baidu.com/s/1dEMITo9] (password: qqcc)

III. Talks

  1. Ian Goodfellow's GANs tutorial at ICCV 2017
     Reference: [https://pan.baidu.com/s/1bpIZvfL]
  2. Chinese transcript of Ian Goodfellow's ICCV 2017 GANs tutorial
     Reference: [https://mp.weixin.qq.com/s/nPBFrnO3_QJjAzm37G5ceQ]
  3. Ian Goodfellow's GANs tutorial at NIPS 2016
     Reference: [http://www.iangoodfellow.com/slides/2016-12-04-NIPS.pdf]
  4. Chinese transcript of Ian Goodfellow's NIPS 2016 GANs tutorial
     Reference: [http://www.sohu.com/a/121189842_465975]
  5. Russ Salakhutdinov on deep generative models
     Reference: [http://www.cs.toronto.edu/~rsalakhu/talk_Montreal_2016_Salakhutdinov.pdf]

IV. Tutorials

  1. NIPS 2016 tutorial: Generative Adversarial Networks
     Reference: [https://arxiv.org/pdf/1701.00160.pdf]
  2. Tips and tricks for training GANs
     Reference: [https://github.com/soumith/ganhacks]
  3. OpenAI on generative models
     Reference: [https://blog.openai.com/generative-models/]
  4. An MNIST generative adversarial model in Keras
     Reference: [https://oshearesearch.com/index.php/2016/07/01/mnist-generative-adversarial-model-in-keras/]
  5. Image completion with deep learning in TensorFlow
     Reference: [http://bamos.github.io/2016/08/09/deep-completion/]

V. Chinese Blog Posts

  1. A beginner's guide to GANs: the basic principles in one article
     Reference: [http://www.xtecher.com/Xfeature/view?aid=7496]
  2. GANs made accessible: an introduction to principles and applications
     Reference: [https://zhuanlan.zhihu.com/p/28731033]
  3. An accessible introduction to GAN applications by Li Yanran, PhD candidate at Hong Kong Polytechnic University
     Reference: [https://pan.baidu.com/s/1o8n4UDk] (password: 78wt)
  4. A cuteness generator: making cat images with four kinds of GANs
     Reference: [https://zhuanlan.zhihu.com/p/27769807]
  5. A GAN study guide: from first principles to building a generation demo
     Reference: [https://zhuanlan.zhihu.com/p/24767059]
  6. Research progress on generative adversarial networks (GAN)
     Reference: [http://blog.csdn.net/solomon1558/article/details/52537114]
  7. A roundup of cutting-edge GAN progress (papers, talks, frameworks, and GitHub resources)
     Reference: [http://blog.csdn.net/love666666shen/article/details/74953970]

VI. GitHub Resources and Models

  1. Deep convolutional generative adversarial networks (DCGAN)
     Reference: [https://github.com/Newmu/dcgan_code]
  2. DCGAN in TensorFlow
     Reference: [https://github.com/carpedm20/DCGAN-tensorflow]
  3. DCGAN in Torch
     Reference: [https://github.com/soumith/dcgan.torch]
  4. DCGAN in Keras
     Reference: [https://github.com/jacobgil/keras-dcgan]
  5. Generating natural images with neural networks (Facebook's Eyescream project)
     Reference: [https://github.com/facebook/eyescream]
  6. Adversarial autoencoders
     Reference: [https://github.com/musyoku/adversarial-autoencoder]
  7. Text-to-image synthesis with thought vectors
     Reference: [https://github.com/paarthneekhara/text-to-image]
  8. Adversarial example generator
     Reference: [https://github.com/e-lab/torch-toolbox/tree/master/Adversarial]
  9. Semi-supervised learning with deep generative models
     Reference: [https://github.com/dpkingma/nips14-ssl]
  10. Improved techniques for training GANs
     Reference: [https://github.com/openai/improved-gan]
  11. Generative Moment Matching Networks (GMMNs)
     Reference: [https://github.com/yujiali/gmmn]
  12. Adversarial video generation
     Reference: [https://github.com/dyelax/Adversarial_Video_Generation]
  13. Image-to-image translation with conditional adversarial networks (pix2pix)
     Reference: [https://github.com/phillipi/pix2pix]
  14. Cleverhans, an adversarial machine learning library
     Reference: [https://github.com/openai/cleverhans]

VII. Recent Research Papers

2014

  1. Explaining and Harnessing Adversarial Examples (2014)
     Paper: [https://arxiv.org/pdf/1412.6572.pdf]
  2. Semi-Supervised Learning with Deep Generative Models (2014)
     Paper: [https://arxiv.org/pdf/1406.5298v2.pdf]
  3. Conditional Generative Adversarial Nets (2014)
     Paper: [https://arxiv.org/pdf/1411.1784v1.pdf]

2015

  1. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (DCGANs) (2015)
     Paper: [https://arxiv.org/pdf/1511.06434v2.pdf]
  2. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks (2015)
     Paper: [http://papers.nips.cc/paper/5773-deep-generative-image-models-using-a-laplacian-pyramid-of-adversarial-networks.pdf]
  3. Generative Moment Matching Networks (2015)
     Paper: [http://proceedings.mlr.press/v37/li15.pdf]
  4. Deep Multi-Scale Video Prediction Beyond Mean Square Error (2015)
     Paper: [https://arxiv.org/pdf/1511.05440.pdf]
  5. Autoencoding Beyond Pixels Using a Learned Similarity Metric (2015)
     Paper: [https://arxiv.org/pdf/1512.09300.pdf]
  6. Adversarial Autoencoders (2015)
     Paper: [https://arxiv.org/pdf/1511.05644.pdf]
  7. Conditional Image Generation with PixelCNN Decoders (2016)
     Paper: [https://arxiv.org/pdf/1606.05328.pdf]
  8. Training Generative Neural Networks via Maximum Mean Discrepancy Optimization (2015)
     Paper: [https://arxiv.org/pdf/1505.03906.pdf]

2016

  1. Improved Techniques for Training GANs (2016)
     Paper: [https://arxiv.org/pdf/1606.03498v1.pdf]
  2. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (2016)
     Paper: [https://arxiv.org/pdf/1606.03657v1.pdf]
  3. Context Encoders: Feature Learning by Inpainting (2016)
     Paper: [http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Pathak_Context_Encoders_Feature_CVPR_2016_paper.pdf]
  4. Generative Adversarial Text to Image Synthesis (2016)
     Paper: [http://proceedings.mlr.press/v48/reed16.pdf]
  5. Adversarial Feature Learning (2016)
     Paper: [https://arxiv.org/pdf/1605.09782.pdf]
  6. Improving Variational Inference with Inverse Autoregressive Flow (2016)
     Paper: [https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow.pdf]
  7. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples (2016)
     Paper: [https://arxiv.org/pdf/1602.02697.pdf]
  8. Attend, Infer, Repeat: Fast Scene Understanding with Generative Models (2016)
     Paper: [https://arxiv.org/pdf/1603.08575.pdf]
  9. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization (2016)
     Paper: [https://arxiv.org/pdf/1606.00709.pdf]
  10. Generative Visual Manipulation on the Natural Image Manifold (2016)
     Paper: [https://arxiv.org/pdf/1609.03552.pdf]
  11. Adversarially Learned Inference (2016)
     Paper: [https://arxiv.org/pdf/1606.00704.pdf]
  12. Generating Images with Recurrent Adversarial Networks (2016)
     Paper: [https://arxiv.org/pdf/1602.05110.pdf]
  13. Generative Adversarial Imitation Learning (2016)
     Paper: [http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf]
  14. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (2016)
     Paper: [https://arxiv.org/pdf/1610.07584.pdf]
  15. Learning What and Where to Draw (2016)
     Paper: [https://arxiv.org/pdf/1610.02454v1.pdf]
  16. Conditional Image Synthesis with Auxiliary Classifier GANs (2016)
     Paper: [https://arxiv.org/pdf/1610.09585.pdf]
  17. Learning in Implicit Generative Models (2016)
     Paper: [https://arxiv.org/pdf/1610.03483.pdf]
  18. VIME: Variational Information Maximizing Exploration (2016)
     Paper: [http://papers.nips.cc/paper/6591-vime-variational-information-maximizing-exploration.pdf]
  19. Unrolled Generative Adversarial Networks (2016)
     Paper: [https://arxiv.org/pdf/1611.02163.pdf]
  20. Neural Photo Editing with Introspective Adversarial Networks (2016)
     Paper: [https://arxiv.org/pdf/1609.07093.pdf]
  21. On the Quantitative Analysis of Decoder-Based Generative Models (2016)
     Paper: [https://arxiv.org/pdf/1611.04273.pdf]
  22. Connecting Generative Adversarial Networks and Actor-Critic Methods (2016)
     Paper: [https://arxiv.org/pdf/1610.01945.pdf]
  23. Learning from Simulated and Unsupervised Images through Adversarial Training (2016)
     Paper: [https://arxiv.org/pdf/1612.07828.pdf]
  24. Contextual RNN-GANs for Abstract Reasoning Diagram Generation (2016)
     Paper: [https://arxiv.org/pdf/1609.09444.pdf]
  25. Generative Multi-Adversarial Networks (2016)
     Paper: [https://arxiv.org/pdf/1611.01673.pdf]
  26. Ensembles of Generative Adversarial Networks (2016)
     Paper: [https://arxiv.org/pdf/1612.00991.pdf]
  27. Improved Generator Objectives for GANs (2016)
     Paper: [https://arxiv.org/pdf/1612.02780.pdf]

2017

  1. Towards Principled Methods for Training Generative Adversarial Networks (2017)
     Paper: [https://arxiv.org/pdf/1701.04862.pdf]
  2. Precise Recovery of Latent Vectors from Generative Adversarial Networks (2017)
     Paper: [https://openreview.net/pdf?id=HJC88BzFl]
  3. Generative Mixture of Networks (2017)
     Paper: [https://arxiv.org/pdf/1702.03307.pdf]
  4. Generative Temporal Models with Memory (2017)
     Paper: [https://arxiv.org/pdf/1702.04649.pdf]
  5. Stopping GAN Violence: Generative Unadversarial Networks (2017)
     Paper: [https://arxiv.org/pdf/1703.02528.pdf]
  6. Bayesian GAN (2017)
     Paper: [https://arxiv.org/abs/1705.09558]

This is a preliminary version and far from perfect; suggestions, corrections, and additions are welcome. The list will be kept up to date; follow http://www.zhuanzhi.ai and the Zhuanzhi WeChat official account for first-hand AI knowledge.

VIP Content

Generative models, which learn the probability density of observed data in order to draw random samples from it, have attracted wide attention in recent years. Deep generative models, whose networks contain multiple hidden layers, have become a research focus thanks to their superior generative ability; they have been successfully applied in computer vision, density estimation, natural language and speech recognition, and semi-supervised learning, and they provide a good paradigm for unsupervised learning. This survey divides deep generative models into three classes according to how they handle the likelihood function. The first class uses approximation: it includes the Restricted Boltzmann Machine (RBM), which approximates the likelihood by sampling, and the models built on RBMs, namely the Deep Belief Network (DBN), the Deep Boltzmann Machine (DBM), and the Helmholtz machine; the corresponding alternative is the variational autoencoder, which directly optimizes a variational lower bound on the likelihood, together with its important refinements, including the importance-weighted autoencoder and deep auxiliary models usable for semi-supervised learning. The second class uses implicit methods that sidestep maximum-likelihood estimation altogether; the representative model is the generative adversarial network, which cleverly avoids evaluating the likelihood by optimizing model parameters through an adversarial game between a generator and a discriminator, together with important refinements including WGAN, the deep convolutional GAN, and BigGAN, currently among the strongest deep generative models. The third class applies suitable transformations to the likelihood function: flow models and autoregressive models. Flow models construct the likelihood with invertible functions and then optimize the model parameters directly, including the NICE family of standard flows, variational flows, and the invertible residual network (i-ResNet); autoregressive models factor the objective into a product of conditional probabilities, including Neural Autoregressive Density Estimation (NADE), PixelRNN, the Masked Autoencoder for Distribution Estimation (MADE), and WaveNet. After describing the principles, structures, and variants of these models in detail, the survey reviews their research progress and applications, and closes with a summary and outlook for deep generative models.

http://www.aas.net.cn/cn/article/doi/10.16383/j.aas.c190866
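
As a quick reference for the survey's three-way taxonomy, the standard textbook forms of the three likelihood treatments are sketched below (our notation, not formulas quoted from the survey): a variational lower bound for the approximate family, an adversarial objective in place of the likelihood for the implicit family (the minimax game shown at the top of this page), and exact likelihoods via change of variables or autoregressive factorization for the third family.

```latex
% Approximate: VAEs maximize the evidence lower bound (ELBO)
\log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\bigl[\log p_\theta(x \mid z)\bigr]
  \;-\; \mathrm{KL}\bigl(q_\phi(z \mid x) \,\|\, p(z)\bigr)

% Exact via change of variables: flow models with invertible f_\theta
\log p_\theta(x) \;=\; \log p_z\bigl(f_\theta(x)\bigr)
  \;+\; \log \left| \det \frac{\partial f_\theta(x)}{\partial x} \right|

% Exact via factorization: autoregressive models
p_\theta(x) \;=\; \prod_{i=1}^{D} p_\theta\bigl(x_i \mid x_{<i}\bigr)
```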


Latest Papers

Even though image generation with Generative Adversarial Networks has shown a remarkable ability to produce high-quality images, GANs do not always guarantee that photorealistic images will be generated. Sometimes they generate images with defective or unnatural objects, referred to as 'artifacts'. Research into why these artifacts emerge and how they can be detected and removed has not been carried out sufficiently. To analyze this, we first hypothesize that rarely activated neurons and frequently activated neurons have different purposes and responsibilities in the progress of generating images. By analyzing the statistics and roles of those neurons, we empirically show that rarely activated neurons are related to failures to form diverse objects and lead to artifacts. In addition, we suggest a correction method, called 'sequential ablation', to repair the defective parts of generated images without heavy computational cost or manual effort.
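
A hedged sketch of the abstract's core idea, not the paper's implementation: estimate each generator unit's activation frequency over many samples, then ablate (zero out) the rarely activated units with a forward hook. The module, rarity threshold, and sample count below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for one block of a trained generator; in practice you would hook
# a convolutional layer of a real GAN generator instead.
gen_layer = nn.Sequential(nn.Linear(64, 128), nn.ReLU())

# 1) Activation statistics: the fraction of inputs on which each unit fires.
with torch.no_grad():
    acts = gen_layer(torch.randn(10_000, 64))   # (N, 128) post-ReLU activations
    freq = (acts > 0).float().mean(dim=0)       # per-unit activation frequency

rare = freq < 0.05                              # assumed "rarely activated" threshold
print(f"{int(rare.sum())} of {freq.numel()} units are rarely activated")

# 2) Ablation: zero the rare units' outputs on every subsequent forward pass.
def ablate(module, inputs, output):
    output = output.clone()
    output[:, rare] = 0.0                       # silence rarely activated units
    return output                               # returned tensor replaces the output

hook = gen_layer.register_forward_hook(ablate)
_ = gen_layer(torch.randn(1, 64))               # rare units now output exactly zero
hook.remove()                                   # detach the hook when done
```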
