Generative Adversarial Networks (GANs) are a class of neural networks in which a discriminator and a generator are trained in alternation so that the two compete against each other, allowing samples to be drawn from complex probability distributions, for example to generate images, text, or speech. GANs were first proposed by Ian Goodfellow; see the original paper, Generative Adversarial Networks.
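
To make the adversarial game concrete: the discriminator D is trained to tell real samples from generated ones, while the generator G is trained to fool it, i.e. the two play the minimax game min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))]. Below is a minimal training-loop sketch in PyTorch on toy 1-D Gaussian data; the architectures, data, and hyperparameters are illustrative assumptions, not taken from any of the resources listed here.

```python
# Minimal GAN training-loop sketch (toy 1-D Gaussian data; illustrative only).
import torch
import torch.nn as nn

# Generator: maps an 8-D latent noise vector z to a 1-D sample.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: maps a 1-D sample to the probability that it is real.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: real samples labeled 1, generated samples labeled 0.
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples from N(3, 0.25)
    fake = G(torch.randn(64, 8)).detach()   # detach so this step does not update G
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: push D(G(z)) toward 1 (the non-saturating loss
    # recommended in the original paper, rather than minimizing log(1 - D(G(z)))).
    fake = G(torch.randn(64, 8))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```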

Knowledge Collection

Generative Adversarial Networks (GAN): Zhuanzhi Curated Collection

I. Theory

  1. Tips for training GANs
    Link: [http://papers.nips.cc/paper/6124-improved-techniques-for-training-gans.pdf]
  2. Energy-Based GANs and related work by Yann LeCun
    Link: [https://arxiv.org/pdf/1609.03126.pdf]
  3. Mode-regularized GANs
    Link: [https://arxiv.org/pdf/1612.02136.pdf]
  4. A NIPS 2016 summary of how to train GAN models
    Link: [https://github.com/soumith/ganhacks]
  5. The GAN Zoo: GAN variants of every description, all in one place; it already lists close to a hundred of them.
    Link: [https://github.com/hindupuravinash/the-gan-zoo]

II. Surveys

1. A Chinese-language survey from the Institute of Automation, Chinese Academy of Sciences: 《生成式对抗网络 GAN 的研究进展与展望》 (Research Progress and Prospects of Generative Adversarial Networks)
Link: [https://pan.baidu.com/s/1dEMITo9], password: qqcc

III. Talks

  1. Ian Goodfellow's GANs tutorial at ICCV 2017
    Link: [https://pan.baidu.com/s/1bpIZvfL]
  2. Chinese transcript of Ian Goodfellow's ICCV 2017 GANs tutorial
    Link: [https://mp.weixin.qq.com/s/nPBFrnO3_QJjAzm37G5ceQ]
  3. Ian Goodfellow's GANs tutorial at NIPS 2016
    Link: [http://www.iangoodfellow.com/slides/2016-12-04-NIPS.pdf]
  4. Chinese transcript of Ian Goodfellow's NIPS 2016 GANs tutorial
    Link: [http://www.sohu.com/a/121189842_465975]
  5. Russ Salakhutdinov on deep generative models
    Link: [http://www.cs.toronto.edu/~rsalakhu/talk_Montreal_2016_Salakhutdinov.pdf]

IV. Tutorials

1. NIPS 2016 tutorial: Generative Adversarial Networks
Link: [https://arxiv.org/pdf/1701.00160.pdf]
2. Tips and tricks for training GANs
Link: [https://github.com/soumith/ganhacks]
3. OpenAI on generative models
Link: [https://blog.openai.com/generative-models/]
4. An MNIST generative adversarial model in Keras
Link: [https://oshearesearch.com/index.php/2016/07/01/mnist-generative-adversarial-model-in-keras/]
5. Image completion with deep learning in TensorFlow
Link: [http://bamos.github.io/2016/08/09/deep-completion/]

V. Chinese Blog Posts

1. A beginner's guide to GANs: the basic principles in one read
Link: [http://www.xtecher.com/Xfeature/view?aid=7496]
2. GANs made accessible: an introduction to the principles and applications
Link: [https://zhuanlan.zhihu.com/p/28731033]
3. An accessible look at GAN applications, by Li Yanran, PhD student at Hong Kong Polytechnic University
Link: [https://pan.baidu.com/s/1o8n4UDk], password: 78wt
4. A generator of cute things: making cat images with four kinds of GANs
Link: [https://zhuanlan.zhihu.com/p/27769807]
5. A GAN study guide: from the basic principles to building a generative demo
Link: [https://zhuanlan.zhihu.com/p/24767059]
6. Research progress on generative adversarial networks (GANs)
Link: [http://blog.csdn.net/solomon1558/article/details/52537114]
7. A roundup of recent GAN advances (papers, talks, frameworks, and GitHub resources)
Link: [http://blog.csdn.net/love666666shen/article/details/74953970]

VI. GitHub Resources and Models

1. Deep convolutional generative adversarial networks (DCGAN)
Link: [https://github.com/Newmu/dcgan_code]
2. DCGAN in TensorFlow
Link: [https://github.com/carpedm20/DCGAN-tensorflow]
3. DCGAN in Torch
Link: [https://github.com/soumith/dcgan.torch]
4. DCGAN in Keras
Link: [https://github.com/jacobgil/keras-dcgan]
5. Generating natural images with neural networks (Facebook's Eyescream project)
Link: [https://github.com/facebook/eyescream]
6. Adversarial autoencoder
Link: [https://github.com/musyoku/adversarial-autoencoder]
7. Text-to-image synthesis using thought vectors
Link: [https://github.com/paarthneekhara/text-to-image]
8. Adversarial example generator
Link: [https://github.com/e-lab/torch-toolbox/tree/master/Adversarial]
9. Semi-supervised learning with deep generative models
Link: [https://github.com/dpkingma/nips14-ssl]
10. Improved techniques for training GANs
Link: [https://github.com/openai/improved-gan]
11. Generative moment matching networks (GMMNs)
Link: [https://github.com/yujiali/gmmn]
12. Adversarial video generation
Link: [https://github.com/dyelax/Adversarial_Video_Generation]
13. Image-to-image translation with conditional adversarial networks (pix2pix)
Link: [https://github.com/phillipi/pix2pix]
14. Cleverhans, a library for adversarial machine learning
Link: [https://github.com/openai/cleverhans]

VII. Recent Research Papers

2014

  1. Explaining and Harnessing Adversarial Examples (2014)
    Paper: [https://arxiv.org/pdf/1412.6572.pdf]
  2. Semi-Supervised Learning with Deep Generative Models (2014)
    Paper: [https://arxiv.org/pdf/1406.5298v2.pdf]
  3. Conditional Generative Adversarial Nets (2014)
    Paper: [https://arxiv.org/pdf/1411.1784v1.pdf]

2015

  1. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (DCGANs) (2015)
    Paper: [https://arxiv.org/pdf/1511.06434v2.pdf]
  2. Deep Generative Image Models Using a Laplacian Pyramid of Adversarial Networks (2015)
    Paper: [http://papers.nips.cc/paper/5773-deep-generative-image-models-using-a-laplacian-pyramid-of-adversarial-networks.pdf]
  3. Generative Moment Matching Networks (2015)
    Paper: [http://proceedings.mlr.press/v37/li15.pdf]
  4. Deep Multi-Scale Video Prediction Beyond Mean Square Error (2015)
    Paper: [https://arxiv.org/pdf/1511.05440.pdf]
  5. Autoencoding Beyond Pixels Using a Learned Similarity Metric (2015)
    Paper: [https://arxiv.org/pdf/1512.09300.pdf]
  6. Adversarial Autoencoders (2015)
    Paper: [https://arxiv.org/pdf/1511.05644.pdf]
  7. Training Generative Neural Networks via Maximum Mean Discrepancy Optimization (2015)
    Paper: [https://arxiv.org/pdf/1505.03906.pdf]

2016

  1. Improved Techniques for Training GANs (2016)
    Paper: [https://arxiv.org/pdf/1606.03498v1.pdf]
  2. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets (2016)
    Paper: [https://arxiv.org/pdf/1606.03657v1.pdf]
  3. Conditional Image Generation with PixelCNN Decoders (2016)
    Paper: [https://arxiv.org/pdf/1606.05328.pdf]
  4. Context Encoders: Feature Learning by Inpainting (2016)
    Paper: [http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Pathak_Context_Encoders_Feature_CVPR_2016_paper.pdf]
  5. Generative Adversarial Text to Image Synthesis (2016)
    Paper: [http://proceedings.mlr.press/v48/reed16.pdf]
  6. Adversarial Feature Learning (2016)
    Paper: [https://arxiv.org/pdf/1605.09782.pdf]
  7. Improving Variational Inference with Inverse Autoregressive Flow (2016)
    Paper: [https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow.pdf]
  8. Practical Black-Box Attacks Against Deep Learning Systems Using Adversarial Examples (2016)
    Paper: [https://arxiv.org/pdf/1602.02697.pdf]
  9. Attend, Infer, Repeat: Fast Scene Understanding with Generative Models (2016)
    Paper: [https://arxiv.org/pdf/1603.08575.pdf]
  10. f-GAN: Training Generative Neural Samplers Using Variational Divergence Minimization (2016)
    Paper: [https://arxiv.org/pdf/1606.00709.pdf]
  11. Generative Visual Manipulation on the Natural Image Manifold (2016)
    Paper: [https://arxiv.org/pdf/1609.03552.pdf]
  12. Adversarially Learned Inference (2016)
    Paper: [https://arxiv.org/pdf/1606.00704.pdf]
  13. Generating Images with Recurrent Adversarial Networks (2016)
    Paper: [https://arxiv.org/pdf/1602.05110.pdf]
  14. Generative Adversarial Imitation Learning (2016)
    Paper: [http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf]
  15. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling (2016)
    Paper: [https://arxiv.org/pdf/1610.07584.pdf]
  16. Learning What and Where to Draw (2016)
    Paper: [https://arxiv.org/pdf/1610.02454v1.pdf]
  17. Conditional Image Synthesis with Auxiliary Classifier GANs (2016)
    Paper: [https://arxiv.org/pdf/1610.09585.pdf]
  18. Learning in Implicit Generative Models (2016)
    Paper: [https://arxiv.org/pdf/1610.03483.pdf]
  19. VIME: Variational Information Maximizing Exploration (2016)
    Paper: [http://papers.nips.cc/paper/6591-vime-variational-information-maximizing-exploration.pdf]
  20. Unrolled Generative Adversarial Networks (2016)
    Paper: [https://arxiv.org/pdf/1611.02163.pdf]
  21. Neural Photo Editing with Introspective Adversarial Networks (2016)
    Paper: [https://arxiv.org/pdf/1609.07093.pdf]
  22. On the Quantitative Analysis of Decoder-Based Generative Models (2016)
    Paper: [https://arxiv.org/pdf/1611.04273.pdf]
  23. Connecting Generative Adversarial Networks and Actor-Critic Methods (2016)
    Paper: [https://arxiv.org/pdf/1610.01945.pdf]
  24. Learning from Simulated and Unsupervised Images Through Adversarial Training (2016)
    Paper: [https://arxiv.org/pdf/1612.07828.pdf]
  25. Contextual RNN-GANs for Abstract Reasoning Diagram Generation (2016)
    Paper: [https://arxiv.org/pdf/1609.09444.pdf]
  26. Generative Multi-Adversarial Networks (2016)
    Paper: [https://arxiv.org/pdf/1611.01673.pdf]
  27. Ensembles of Generative Adversarial Networks (2016)
    Paper: [https://arxiv.org/pdf/1612.00991.pdf]
  28. Improved Generator Objectives for GANs (2016)
    Paper: [https://arxiv.org/pdf/1612.02780.pdf]

2017

  1. Towards Principled Methods for Training Generative Adversarial Networks (2017)
    Paper: [https://arxiv.org/pdf/1701.04862.pdf]
  2. Precise Recovery of Latent Vectors from Generative Adversarial Networks (2017)
    Paper: [https://openreview.net/pdf?id=HJC88BzFl]
  3. Generative Mixture of Networks (2017)
    Paper: [https://arxiv.org/pdf/1702.03307.pdf]
  4. Generative Temporal Models with Memory (2017)
    Paper: [https://arxiv.org/pdf/1702.04649.pdf]
  5. Stopping GAN Violence: Generative Unadversarial Networks (2017)
    Paper: [https://arxiv.org/pdf/1703.02528.pdf]
  6. Bayesian GAN (2017)
    Paper: [https://arxiv.org/abs/1705.09558]

This is a preliminary version of limited scope. If you spot errors or omissions, suggestions and contributions are welcome; the collection will be kept up to date. Please follow http://www.zhuanzhi.ai and the Zhuanzhi WeChat official account for first-hand AI knowledge.


VIP Content

Overview: Generative adversarial networks (GANs) have recently become a hot research topic. GANs have been studied extensively since 2014, and many algorithms have been proposed. However, few comprehensive studies explain the connections among different GAN variants or how they have evolved. In this paper, we attempt to review the various GAN methods from the perspectives of algorithms, theory, and applications. First, we describe in detail the motivation, mathematical formulation, and structure of most GAN algorithms. In addition, GANs have been combined with other machine learning algorithms for specific applications, such as semi-supervised learning, transfer learning, and reinforcement learning, and we compare the commonalities and differences of these GAN methods. Second, we examine the theoretical issues related to GANs. Third, we illustrate typical applications of GANs in image processing and computer vision, natural language processing, music, speech and audio, medicine, and data science. Finally, we point out open research problems for the future of GANs.


Latest Papers

Generative models, such as GANs, learn an explicit low-dimensional representation of a particular class of images, and so they may be used as natural image priors for solving inverse problems such as image restoration and compressive sensing. GAN priors have demonstrated impressive performance on these tasks, but they can exhibit substantial representation error for both in-distribution and out-of-distribution images, because of the mismatch between the learned, approximate image distribution and the data generating distribution. In this paper, we demonstrate a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior with a Deep Decoder. The deep decoder is an underparameterized and most importantly unlearned natural signal model similar to the Deep Image Prior. No knowledge of the specific inverse problem is needed in the training of the GAN underlying our method. For compressive sensing and image superresolution, our hybrid model exhibits consistently higher PSNRs than both the GAN priors and Deep Decoder separately, both on in-distribution and out-of-distribution images. This model provides a method for extensibly and cheaply leveraging both the benefits of learned and unlearned image recovery priors in inverse problems.
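
The recovery procedure this abstract describes can be sketched compactly. Below is a minimal, hypothetical illustration for compressive sensing, assuming a known linear measurement matrix A, a pretrained GAN generator (stood in for here by an untrained toy network), and a small unlearned Deep-Decoder-style network; the image estimate is the sum of the two outputs, and Adam jointly optimizes the GAN latent code and the decoder weights against measurement fidelity. All shapes, architectures, and hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# Hedged sketch of hybrid GAN-prior + Deep Decoder recovery (toy dimensions).
import torch
import torch.nn as nn

n, m = 1024, 256                        # signal and measurement dimensions (toy)
A = torch.randn(m, n) / m ** 0.5        # known linear measurement operator

# Stand-in "pretrained" GAN generator; frozen, as only its latent code is optimized.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, n))
for p in G.parameters():
    p.requires_grad_(False)

# Small, unlearned Deep-Decoder-style network; its weights ARE optimized per image.
dd = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, n))

x_true = torch.randn(n)                 # toy ground-truth signal
y = A @ x_true                          # observed compressive measurements

z = torch.randn(64, requires_grad=True) # latent code for the GAN prior
w = torch.randn(16)                     # fixed input to the deep decoder
opt = torch.optim.Adam([z, *dd.parameters()], lr=1e-2)

for step in range(500):
    x_hat = G(z) + dd(w)                # linear combination of the two priors
    loss = ((A @ x_hat - y) ** 2).sum() # measurement-fidelity loss only
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the deep decoder is under-parameterized and never trained on data, it can absorb the component of the image that the GAN prior fails to represent without overfitting noise, which is the intuition behind the reported PSNR gains.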
