Deep Learning: Zhuanzhi Curated Resource Collection

Deep learning is a branch of machine learning based on a family of algorithms that attempt to model high-level abstractions in data by using multiple processing layers built from complex structures or composed of multiple non-linear transformations.
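As a minimal illustration of the definition above (stacked processing layers, each applying a non-linear transformation to produce progressively higher-level representations), the sketch below builds a tiny two-layer network in NumPy. The layer sizes, the tanh hidden activation, and the softmax output are arbitrary choices for illustration only and are not taken from any of the references in this collection.

```python
# A minimal sketch (illustration only): two stacked processing layers,
# each an affine map followed by a non-linear transformation.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b, activation):
    """One processing layer: affine transform followed by a non-linearity."""
    return activation(x @ w + b)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Toy batch: 4 inputs with 8 features each (shapes chosen arbitrarily).
x = rng.normal(size=(4, 8))

# Layer 1: 8 -> 16 hidden units with tanh; Layer 2: 16 -> 3 output scores with softmax.
w1, b1 = 0.1 * rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = 0.1 * rng.normal(size=(16, 3)), np.zeros(3)

h = layer(x, w1, b1, np.tanh)   # first, lower-level representation
y = layer(h, w2, b2, softmax)   # second layer: higher-level representation -> class probabilities

print(y.shape)        # (4, 3)
print(y.sum(axis=1))  # each row sums to 1.0
```

Each additional layer re-represents the output of the previous one, which is the sense in which such models learn "high-level abstractions" from raw data.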

Getting Started

  1. "Understanding Deep Learning in One Day" by Hung-yi Lee (National Taiwan University), 300-page slide deck
  2. "Deep Learning Study Notes" series, parts 1-8
  3. Why Does Deep Learning Need to Be "Deep"? (Parts 1 and 2)
  4. "Neural Networks and Deep Learning" by Xipeng Qiu, Chinese-language book, 2017
  5. Fundamentals of Deep Learning, 206-page slide deck, Xipeng Qiu, Fudan University, August 17, 2017
    - [http://nlp.fudan.edu.cn/xpqiu/slides/20170817-CIPS-ATT-DL.pdf]
  6. "Neural Networks and Deep Learning" by Michael Nielsen, Aug 2017
    - Original: [http://neuralnetworksanddeeplearning.com/index.html]

Surveys

  1. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444. (Three Giants' Survey)
  2. Representation Learning: A Review and New Perspectives. Yoshua Bengio, Aaron Courville, Pascal Vincent. arXiv, 2012.

Advanced Papers

Deep Belief Network (DBN) (Milestone of Deep Learning Eve)

  1. Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural computation 18.7 (2006): 1527-1554.
  2. Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507.

ImageNet Evolution (Deep Learning broke out from here)

  1. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
  2. Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
  3. Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
  4. He, Kaiming, et al. "Deep residual learning for image recognition." arXiv preprint arXiv:1512.03385 (2015).

Speech Recognition Evolution

  1. Hinton, Geoffrey, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups." IEEE Signal Processing Magazine 29.6 (2012): 82-97.
  2. Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.
  3. Graves, Alex, and Navdeep Jaitly. "Towards End-To-End Speech Recognition with Recurrent Neural Networks." ICML. Vol. 14. 2014.
  4. Sak, Haşim, et al. "Fast and accurate recurrent neural network acoustic models for speech recognition." arXiv preprint arXiv:1507.06947 (2015).
  5. W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig. "Achieving Human Parity in Conversational Speech Recognition." arXiv preprint arXiv:1610.05256 (2016).

Model

  1. Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012).
  2. Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958.
  3. Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167 (2015). [http://arxiv.org/pdf/1502.03167] An outstanding work in 2015.
  4. Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).
  5. Courbariaux, Matthieu, et al. "Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or -1."
  6. Jaderberg, Max, et al. "Decoupled neural interfaces using synthetic gradients." arXiv preprint arXiv:1608.05343 (2016).
  7. Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. "Net2net: Accelerating learning via knowledge transfer." arXiv preprint arXiv:1511.05641 (2015).
  8. Wei, Tao, et al. "Network Morphism." arXiv preprint arXiv:1603.01670 (2016).

Optimization

  1. Sutskever, Ilya, et al. "On the importance of initialization and momentum in deep learning." ICML (3) 28 (2013): 1139-1147.
  2. Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
  3. Andrychowicz, Marcin, et al. "Learning to learn by gradient descent by gradient descent." arXiv preprint arXiv:1606.04474 (2016).
  4. Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding." CoRR abs/1510.00149 (2015).
  5. Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size." arXiv preprint arXiv:1602.07360 (2016).

Unsupervised Learning / Deep Generative Model

  1. Le, Quoc V. "Building high-level features using large scale unsupervised learning." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.
  2. Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013).
  3. Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014.
  4. Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).
  5. Gregor, Karol, et al. "DRAW: A recurrent neural network for image generation." arXiv preprint arXiv:1502.04623 (2015).
  6. Oord, Aaron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. "Pixel recurrent neural networks." arXiv preprint arXiv:1601.06759 (2016).
  7. Oord, Aaron van den, et al. "Conditional image generation with PixelCNN decoders." arXiv preprint arXiv:1606.05328 (2016).

RNN / Sequence-to-Sequence Model

  1. Graves, Alex. "Generating sequences with recurrent neural networks." arXiv preprint arXiv:1308.0850 (2013).
  2. Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
  3. Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014.
  4. Bahdanau, Dzmitry, KyungHyun Cho, and Yoshua Bengio. "Neural Machine Translation by Jointly Learning to Align and Translate." arXiv preprint arXiv:1409.0473 (2014).
  5. Vinyals, Oriol, and Quoc Le. "A neural conversational model." arXiv preprint arXiv:1506.05869 (2015).

Neural Turing Machine

  1. Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural turing machines." arXiv preprint arXiv:1410.5401 (2014).
  2. Zaremba, Wojciech, and Ilya Sutskever. "Reinforcement learning neural Turing machines." arXiv preprint arXiv:1505.00521 (2015).
  3. Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." arXiv preprint arXiv:1410.3916 (2014).
  4. Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. "End-to-end memory networks." Advances in neural information processing systems. 2015.
  5. Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. "Pointer networks." Advances in Neural Information Processing Systems. 2015.
  6. Graves, Alex, et al. "Hybrid computing using a neural network with dynamic external memory." Nature (2016).

Deep Reinforcement Learning

  1. Mnih, Volodymyr, et al. "Playing atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602 (2013).
  2. Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.
  3. Wang, Ziyu, Nando de Freitas, and Marc Lanctot. "Dueling network architectures for deep reinforcement learning." arXiv preprint arXiv:1511.06581 (2015).
  4. Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." arXiv preprint arXiv:1602.01783 (2016).
  5. Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." arXiv preprint arXiv:1509.02971 (2015).
  6. Gu, Shixiang, et al. "Continuous Deep Q-Learning with Model-based Acceleration." arXiv preprint arXiv:1603.00748 (2016). [http://arxiv.org/pdf/1603.00748] (NAF)
  7. Schulman, John, et al. "Trust region policy optimization." CoRR, abs/1502.05477 (2015).
  8. Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489.

Deep Transfer Learning / Lifelong Learning / especially for RL

  1. Bengio, Yoshua. "Deep Learning of Representations for Unsupervised and Transfer Learning." ICML Unsupervised and Transfer Learning 27 (2012): 17-36.
  2. Silver, Daniel L., Qiang Yang, and Lianghao Li. "Lifelong Machine Learning Systems: Beyond Learning Algorithms." AAAI Spring Symposium: Lifelong Machine Learning. 2013.
  3. Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015).
  4. Rusu, Andrei A., et al. "Policy distillation." arXiv preprint arXiv:1511.06295 (2015).
  5. Parisotto, Emilio, Jimmy Lei Ba, and Ruslan Salakhutdinov. "Actor-mimic: Deep multitask and transfer reinforcement learning." arXiv preprint arXiv:1511.06342 (2015).
  6. Rusu, Andrei A., et al. "Progressive neural networks." arXiv preprint arXiv:1606.04671 (2016).

One Shot Deep Learning

  1. Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. "Human-level concept learning through probabilistic program induction." Science 350.6266 (2015): 1332-1338.
  2. Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. "Siamese Neural Networks for One-shot Image Recognition." (2015)
  3. Santoro, Adam, et al. "One-shot Learning with Memory-Augmented Neural Networks." arXiv preprint arXiv:1605.06065 (2016).
  4. Vinyals, Oriol, et al. "Matching Networks for One Shot Learning." arXiv preprint arXiv:1606.04080 (2016).
  5. Hariharan, Bharath, and Ross Girshick. "Low-shot visual object recognition." arXiv preprint arXiv:1606.02819 (2016).

NLP (Natural Language Processing)

  1. Antoine Bordes, et al. "Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing." AISTATS (2012).
  2. Mikolov, et al. "Distributed representations of words and phrases and their compositionality." NIPS (2013): 3111-3119.
  3. Sutskever, et al. "Sequence to sequence learning with neural networks." NIPS (2014).
  4. Ankit Kumar, et al. "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing." arXiv preprint arXiv:1506.07285 (2015).
  5. Yoon Kim, et al. "Character-Aware Neural Language Models." arXiv preprint arXiv:1508.06615 (2015).
  6. Jason Weston, et al. "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks." arXiv preprint arXiv:1502.05698 (2015).
  7. Karl Moritz Hermann, et al. "Teaching Machines to Read and Comprehend." arXiv preprint arXiv:1506.03340 (2015).
  8. Alexis Conneau, et al. "Very Deep Convolutional Networks for Natural Language Processing." arXiv preprint arXiv:1606.01781 (2016).
  9. Armand Joulin, et al. "Bag of Tricks for Efficient Text Classification." arXiv preprint arXiv:1607.01759 (2016).

Object Detection

  1. Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "Deep neural networks for object detection." Advances in Neural Information Processing Systems. 2013.
  2. Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2014.
  3. He, Kaiming, et al. "Spatial pyramid pooling in deep convolutional networks for visual recognition." European Conference on Computer Vision. Springer International Publishing, 2014.
  4. Girshick, Ross. "Fast r-cnn." Proceedings of the IEEE International Conference on Computer Vision. 2015.
  5. Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015.
  6. Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." arXiv preprint arXiv:1506.02640 (2015).
  7. Liu, Wei, et al. "SSD: Single Shot MultiBox Detector." arXiv preprint arXiv:1512.02325 (2015).
  8. Dai, Jifeng, et al. "R-FCN: Object Detection via Region-based Fully Convolutional Networks." arXiv preprint arXiv:1605.06409 (2016).
  9. He, Kaiming, et al. "Mask R-CNN." ICCV 2017 (Best Paper), 2017.

Visual Tracking

  1. Wang, Naiyan, and Dit-Yan Yeung. "Learning a deep compact image representation for visual tracking." Advances in neural information processing systems. 2013.
  2. Wang, Naiyan, et al. "Transferring rich feature hierarchies for robust visual tracking." arXiv preprint arXiv:1501.04587 (2015).
  3. Wang, Lijun, et al. "Visual tracking with fully convolutional networks." Proceedings of the IEEE International Conference on Computer Vision. 2015.
  4. Held, David, Sebastian Thrun, and Silvio Savarese. "Learning to Track at 100 FPS with Deep Regression Networks." arXiv preprint arXiv:1604.01802 (2016).
  5. Bertinetto, Luca, et al. "Fully-Convolutional Siamese Networks for Object Tracking." arXiv preprint arXiv:1606.09549 (2016).
  6. Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. "Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking." ECCV (2016)
  7. Nam, Hyeonseob, Mooyeol Baek, and Bohyung Han. "Modeling and Propagating CNNs in a Tree Structure for Visual Tracking." arXiv preprint arXiv:1608.07242 (2016).

Image Caption

  1. Farhadi, Ali, et al. "Every picture tells a story: Generating sentences from images". In Computer Vision - ECCV 2010. Springer Berlin Heidelberg: 15-29, 2010.
  2. Kulkarni, Girish, et al. "Baby talk: Understanding and generating image descriptions". In Proceedings of the 24th CVPR, 2011.
  3. Vinyals, Oriol, et al. "Show and tell: A neural image caption generator". In arXiv preprint arXiv:1411.4555, 2014.
  4. Donahue, Jeff, et al. "Long-term recurrent convolutional networks for visual recognition and description". In arXiv preprint arXiv:1411.4389 ,2014.
  5. Karpathy, Andrej, and Li Fei-Fei. "Deep visual-semantic alignments for generating image descriptions". In arXiv preprint arXiv:1412.2306, 2014.
  6. Karpathy, Andrej, Armand Joulin, and Fei Fei F. Li. "Deep fragment embeddings for bidirectional image sentence mapping". In Advances in neural information processing systems, 2014.
  7. Fang, Hao, et al. "From captions to visual concepts and back". In arXiv preprint arXiv:1411.4952, 2014.
  8. Chen, Xinlei, and C. Lawrence Zitnick. "Learning a recurrent visual representation for image caption generation". In arXiv preprint arXiv:1411.5654, 2014.
  9. Mao, Junhua, et al. "Deep captioning with multimodal recurrent neural networks (m-rnn)". In arXiv preprint arXiv:1412.6632, 2014.
  10. Xu, Kelvin, et al. "Show, attend and tell: Neural image caption generation with visual attention". In arXiv preprint arXiv:1502.03044, 2015.

Machine Translation

  1. Luong, Minh-Thang, et al. "Addressing the rare word problem in neural machine translation." arXiv preprint arXiv:1410.8206 (2014).
  2. Sennrich, et al. "Neural Machine Translation of Rare Words with Subword Units". In arXiv preprint arXiv:1508.07909, 2015.
  3. Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015).
  4. Chung, et al. "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation". In arXiv preprint arXiv:1603.06147, 2016.
  5. Lee, et al. "Fully Character-Level Neural Machine Translation without Explicit Segmentation". In arXiv preprint arXiv:1610.03017, 2016.
  6. Wu, Schuster, Chen, Le, et al. "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". In arXiv preprint arXiv:1609.08144v2, 2016.

Robotics

  1. Koutník, Jan, et al. "Evolving large-scale neural networks for vision-based reinforcement learning." Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013.
  2. Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." Journal of Machine Learning Research 17.39 (2016): 1-40.
  3. Pinto, Lerrel, and Abhinav Gupta. "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours." arXiv preprint arXiv:1509.06825 (2015).
  4. Zhu, Yuke, et al. "Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning." arXiv preprint arXiv:1609.05143 (2016).
  5. Yahya, Ali, et al. "Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search." arXiv preprint arXiv:1610.00673 (2016).
  6. Gu, Shixiang, et al. "Deep Reinforcement Learning for Robotic Manipulation." arXiv preprint arXiv:1610.00633 (2016).
  7. Rusu, Andrei A., Matej Vecerik, Thomas Rothörl, Nicolas Heess, Razvan Pascanu, and Raia Hadsell. "Sim-to-Real Robot Learning from Pixels with Progressive Nets." arXiv preprint arXiv:1610.04286 (2016).
  8. Mirowski, Piotr, et al. "Learning to navigate in complex environments." arXiv preprint arXiv:1611.03673 (2016).

Object Segmentation

  1. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation.” in CVPR, 2015.
  2. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. "Semantic image segmentation with deep convolutional nets and fully connected crfs." In ICLR, 2015.
  3. Pinheiro, P.O., Collobert, R., Dollar, P. "Learning to segment object candidates." In: NIPS. 2015.
  4. Dai, J., He, K., Sun, J. "Instance-aware semantic segmentation via multi-task network cascades." in CVPR. 2016
  5. Dai, J., He, K., Sun, J. "Instance-sensitive Fully Convolutional Networks." arXiv preprint arXiv:1603.08678 (2016).

Tutorial

  1. UFLDL Tutorial 1
  2. UFLDL Tutorial 2
  3. Deep Learning for NLP (without Magic)
  4. A Deep Learning Tutorial: From Perceptrons to Deep Networks
  5. Deep Learning from the Bottom up
  6. Theano Tutorial
  7. Neural Networks for Matlab
  8. Using convolutional neural nets to detect facial keypoints tutorial
  9. Pytorch Tutorials
  10. The Best Machine Learning Tutorials On The Web
  11. VGG Convolutional Neural Networks Practical
  12. TensorFlow tutorials
  13. More TensorFlow tutorials
  14. TensorFlow Python Notebooks
  15. Keras and Lasagne Deep Learning Tutorials
  16. Classification on raw time series in TensorFlow with a LSTM RNN
  17. TensorFlow-World
  18. Deep Learning NIPS'2015 Tutorial, by Geoff Hinton, Yoshua Bengio & Yann LeCun (jointly presented by the three giants of deep learning)

Video Tutorials

Courses

  1. Machine Learning - Stanford
  2. Machine Learning - Caltech
  3. Machine Learning - Carnegie Mellon
  4. Neural Networks for Machine Learning
  5. Neural networks class
  6. Deep Learning Course
  7. A.I - Berkeley
  8. A.I - MIT
  9. Vision and learning - computers and brains
  10. Convolutional Neural Networks for Visual Recognition - Stanford
  11. Convolutional Neural Networks for Visual Recognition - Stanford
  12. Deep Learning for Natural Language Processing - Stanford
  13. Neural Networks - Université de Sherbrooke
  14. Machine Learning - Oxford
  15. Deep Learning - Nvidia
  16. Graduate Summer School: Deep Learning, Feature Learning
  17. Deep Learning - Udacity/Google
  18. Deep Learning - UWaterloo
  19. Statistical Machine Learning - CMU
  20. Deep Learning Course
  21. Bay area DL school
    • [http://www.bayareadlschool.org/] by Andrew Ng, Yoshua Bengio, Samy Bengio, Andrej Karpathy, Richard Socher, Hugo Larochelle and many others @ Stanford, CA (2016)
  22. Designing, Visualizing and Understanding Deep Neural Networks - UC Berkeley
  23. UVA Deep Learning Course
  24. MIT 6.S094: Deep Learning for Self-Driving Cars
  25. MIT 6.S191: Introduction to Deep Learning
  26. Berkeley CS 294: Deep Reinforcement Learning
  27. Keras in Motion video course
  28. Practical Deep Learning For Coders

Videos and Lectures

  1. How To Create A Mind
  2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning
  3. Recent Developments in Deep Learning
  4. The Unreasonable Effectiveness of Deep Learning
  5. Deep Learning of Representations
  6. Principles of Hierarchical Temporal Memory
  7. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab
  8. Making Sense of the World with Deep Learning
  9. Demystifying Unsupervised Feature Learning
  10. Visual Perception with Deep Learning
  11. The Next Generation of Neural Networks
  12. The wonderful and terrifying implications of computers that can learn
  13. Unsupervised Deep Learning - Stanford
  14. Natural Language Processing
  15. A beginners Guide to Deep Neural Networks
  16. Deep Learning: Intelligence from Big Data
  17. Introduction to Artificial Neural Networks and Deep Learning
  18. NIPS 2016 lecture and workshop videos

Code

  1. Caffe
  2. Torch7
  3. Theano
  4. cuda-convnet
  5. ConvNetJS
  6. ccv
  7. NuPIC - [http://numenta.org/nupic.html]
  8. DeepLearning4J
  9. Brain
  10. DeepLearnToolbox
  11. Deepnet
  12. Deeppy - [https://github.com/andersbll/deeppy]
  13. JavaNN
  14. hebel
  15. Mocha.jl
  16. OpenDL
  17. cuDNN
  18. MGL
  19. Knet.jl
  20. Nvidia DIGITS - a web app based on Caffe
  21. Neon - Python based Deep Learning Framework
  22. Keras - Theano based Deep Learning Library
  23. Chainer - A flexible framework of neural networks for deep learning
  24. RNNLM Toolkit
  25. RNNLIB - A recurrent neural network library
  26. char-rnn
  27. MatConvNet: CNNs for MATLAB
  28. Minerva - a fast and flexible tool for deep learning on multi-GPU
  29. Brainstorm - Fast, flexible and fun neural networks.
  30. Tensorflow - Open source software library for numerical computation using data flow graphs
  31. DMTK - Microsoft Distributed Machine Learning Toolkit
  32. Scikit Flow - Simplified interface for TensorFlow [mimicking Scikit Learn]
  33. MXnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning framework
  34. Veles - Samsung Distributed machine learning platform
  35. Marvin - A Minimalist GPU-only N-Dimensional ConvNets Framework - [https://github.com/PrincetonVision/marvin]
  36. Apache SINGA - A General Distributed Deep Learning Platform
  37. DSSTNE - Amazon's library for building Deep Learning models
  38. SyntaxNet - Google's syntactic parser - A TensorFlow dependency library
  39. mlpack - A scalable Machine Learning library
  40. Torchnet - Torch based Deep Learning Library
  41. Paddle - PArallel Distributed Deep LEarning by Baidu
  42. NeuPy - Theano based Python library for ANN and Deep Learning
  43. Lasagne - a lightweight library to build and train neural networks in Theano
  44. nolearn - wrappers and abstractions around existing neural network libraries, most notably Lasagne - [https://github.com/dnouri/nolearn]
  45. Sonnet - a library for constructing neural networks by Google's DeepMind
  46. PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
  47. CNTK - Microsoft Cognitive Toolkit

Researchers

  1. Aaron Courville
  2. Abdel-rahman Mohamed
  3. Adam Coates
  4. Alex Acero
  5. Alex Krizhevsky
  6. Alexander Ilin
  7. Amos Storkey
  8. Andrej Karpathy
  9. Andrew M. Saxe
  10. Andrew Ng
  11. Andrew W. Senior
  12. Andriy Mnih
  13. Ayse Naz Erkan
  14. Benjamin Schrauwen
  15. Bernardete Ribeiro
  16. Bo David Chen
  17. Boureau Y-Lan
  18. Brian Kingsbury
  19. Christopher Manning
  20. Clement Farabet
  21. Dan Claudiu Cireșan
  22. David Reichert
  23. Derek Rose
  24. Dong Yu
  25. Drausin Wulsin
  26. Erik M. Schmidt
  27. Eugenio Culurciello
  28. Frank Seide
  29. Galen Andrew
  30. Geoffrey Hinton
  31. George Dahl
  32. Graham Taylor
  33. Grégoire Montavon
  34. Guido Francisco Montúfar
  35. Guillaume Desjardins
  36. Hannes Schulz
  37. Hélène Paugam-Moisy
  38. Honglak Lee
  39. Hugo Larochelle
  40. Ilya Sutskever
  41. Itamar Arel
  42. James Martens
  43. Jason Morton
  44. Jason Weston
  45. Jeff Dean
  46. Jiquan Ngiam
  47. Joseph Turian
  48. Joshua Matthew Susskind
  49. Jürgen Schmidhuber
  50. Justin A. Blanco
  51. Koray Kavukcuoglu
  52. KyungHyun Cho
  53. Li Deng
  54. Lucas Theis
  55. Ludovic Arnold
  56. Marc'Aurelio Ranzato
  57. Martin Längkvist
  58. Misha Denil
  59. Mohammad Norouzi
  60. Nando de Freitas
  61. Navdeep Jaitly
  62. Nicolas Le Roux
  63. Nitish Srivastava
  64. Noel Lopes
  65. Oriol Vinyals
  66. Pascal Vincent
  67. Patrick Nguyen
  68. Pedro Domingos
  69. Peggy Series
  70. Pierre Sermanet
  71. Piotr Mirowski
  72. Quoc V. Le
  73. Reinhold Scherer
  74. Richard Socher
  75. Rob Fergus
  76. Robert Coop
  77. Robert Gens
  78. Roger Grosse
  79. Ronan Collobert
  80. Ruslan Salakhutdinov
  81. Sebastian Gerwinn
  82. Stéphane Mallat
  83. Sven Behnke
  84. Tapani Raiko
  85. Tara Sainath
  86. Tijmen Tieleman
  87. Tom Karnowski
  88. Tomáš Mikolov
  89. Ueli Meier
  90. Vincent Vanhoucke
  91. Volodymyr Mnih
  92. Yann LeCun
  93. Yichuan Tang
  94. Yoshua Bengio
  95. Yotaro Kubo
  96. Youzhi [Will] Zou
  97. Fei-Fei Li
  98. Ian Goodfellow
  99. Robert Laganière

Important Websites

  1. deeplearning.net
  2. deeplearning.stanford.edu
  3. nlp.stanford.edu
  4. ai-junkie.com
  5. cs.brown.edu/research/ai
  6. eecs.umich.edu/ai
  7. cs.utexas.edu/users/ai-lab
  8. cs.washington.edu/research/ai
  9. aiai.ed.ac.uk
  10. www-aig.jpl.nasa.gov
  11. csail.mit.edu
  12. cgi.cse.unsw.edu.au/~aishare
  13. cs.rochester.edu/research/ai
  14. ai.sri.com
  15. isi.edu/AI/isd.htm
  16. nrl.navy.mil/itd/aic
  17. hips.seas.harvard.edu
  18. AI Weekly
  19. stat.ucla.edu
  20. deeplearning.cs.toronto.edu
  21. jeffdonahue.com/lrcn/
  22. visualqa.org
  23. www.mpi-inf.mpg.de/departments/computer-vision...
  24. Deep Learning News
  25. Machine Learning is Fun! Adam Geitgey's Blog

Free Online Books

  1. Deep Learning
  2. Neural Networks and Deep Learning
  3. Deep Learning
  4. Deep Learning Tutorial
  5. neuraltalk
  6. An introduction to genetic algorithms
  7. Artificial Intelligence: A Modern Approach
  8. Deep Learning in Neural Networks: An Overview

Datasets

  1. MNIST
  2. Google House Numbers
  3. CIFAR-10 and CIFAR-100
  4. IMAGENET
  5. Tiny Images
  6. Flickr Data
  7. Berkeley Segmentation Dataset 500
  8. UC Irvine Machine Learning Repository
  9. Flickr 8k
  10. Flickr 30k
  11. Microsoft COCO
  12. VQA
  13. Image QA
  14. AT&T Laboratories Cambridge face database
  15. AVHRR Pathfinder
  16. Air Freight
    • [http://www.anc.ed.ac.uk/~amos/afreightdata.html] - The Air Freight data set is a ray-traced image sequence along with ground truth segmentation based on textural characteristics. [455 images + GT, each 160x120 pixels]. [Formats: PNG]
  17. Amsterdam Library of Object Images
    • [http://www.science.uva.nl/~aloi/] - ALOI is a color image collection of one-thousand small objects, recorded for scientific purposes. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. [Formats: png]
  18. Annotated face, hand, cardiac & meat images
    • [http://www.imm.dtu.dk/~aam/] - Most images & annotations are supplemented by various ASM/AAM analyses using the AAM-API. [Formats: bmp,asf]
  19. Image Analysis and Computer Graphics
  20. Brown University Stimuli
  21. CAVIAR video sequences of mall and public space behavior
  22. Machine Vision Unit
  23. CCITT Fax standard images
  24. CMU CIL's Stereo Data with Ground Truth [cil-ster.html] - 3 sets of 11 images, including color tiff images with spectroradiometry [Formats: gif, tiff]
  25. CMU PIE Database
  26. CMU VASC Image Database
  27. Caltech Image Database
  28. Columbia-Utrecht Reflectance and Texture Database
    • [http://www.cs.columbia.edu/CAVE/curet/] - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. [Formats: bmp]
  29. Computational Colour Constancy Data
    • [http://www.cs.sfu.ca/~colour/data/index.html] - A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. [Formats: tiff]
  30. Computational Vision Lab
  31. Content-based image retrieval database
  32. Efficient Content-based Retrieval Group
  33. Densely Sampled View Spheres
  34. Computer Science VII [Graphical Systems]
  35. Digital Embryos
  36. University of Minnesota Vision Lab
  37. El Salvador Atlas of Gastrointestinal VideoEndoscopy
  38. FG-NET Facial Aging Database
  39. FVC2000 Fingerprint Databases
    • [http://bias.csr.unibo.it/fvc2000/] - FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark [3520 fingerprints in all].
  40. Biometric Systems Lab
  41. Face and Gesture images and image sequences
    • [http://www.fg-net.org] - Several image datasets of faces and gestures that are ground truth annotated for benchmarking
  42. German Fingerspelling Database
  43. Language Processing and Pattern Recognition
  44. Groningen Natural Image Database
  45. ICG Testhouse sequence
  46. Institute of Computer Graphics and Vision
  47. IEN Image Library
  48. INRIA's Syntim images database
  49. INRIA
  50. INRIA's Syntim stereo databases
  51. Image Analysis Laboratory
  52. Image Analysis Laboratory
  53. Image Database
  54. JAFFE Facial Expression Image Database
    • [http://www.mis.atr.co.jp/~mlyons/jaffe.html] - The JAFFE database consists of 213 images of Japanese female subjects posing 6 basic facial expressions as well as a neutral pose. Ratings on emotion adjectives are also available, free of charge, for research purposes. [Formats: TIFF Grayscale images.]
  55. ATR Research, Kyoto, Japan
  56. JISCT Stereo Evaluation
    • [ftp://ftp.vislist.com/IMAGERY/JISCT/] - 44 image pairs. These data have been used in an evaluation of stereo analysis, as described in the April 1993 ARPA Image Understanding Workshop paper "The JISCT Stereo Evaluation" by R. C. Bolles, H. H. Baker, and M. J. Hannah, pp. 263-274. [Formats: SSI]
  57. MIT Vision Texture
  58. MIT face images and more
  59. Machine Vision
  60. Mammography Image Databases
  61. ftp://ftp.cps.msu.edu/pub/prip
  62. Middlebury Stereo Data Sets with Ground Truth
    • [http://www.middlebury.edu/stereo/data.html] - Six multi-frame stereo data sets of scenes containing planar regions. Each data set contains 9 color images and subpixel-accuracy ground-truth data. [Formats: ppm]
  63. Middlebury Stereo Vision Research Page
  64. Modis Airborne simulator, Gallery and data set
  65. NIST Fingerprint and handwriting
  66. NIST Fingerprint data
  67. NLM HyperDoc Visible Human Project
  68. National Design Repository
    • [http://www.designrepository.org] - Over 55,000 3D CAD and solid models of [mostly] mechanical/machined engineering designs. [Formats: gif, vrml, wrl, stp, sat]
  69. Geometric & Intelligent Computing Laboratory
  70. OSU [MSU] 3D Object Model Database
  71. OSU [MSU/WSU] Range Image Database
  72. OSU/SAMPL Database: Range Images, 3D Models, Stills, Motion Sequences
  73. Signal Analysis and Machine Perception Laboratory
  74. Otago Optical Flow Evaluation Sequences
  75. Vision Research Group
  76. ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/
    • [ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/] - Real and synthetic image sequences used for testing a Particle Image Velocimetry application. These images may be used for the test of optical flow and image matching algorithms. [Formats: pgm [raw]]
  77. LIMSI-CNRS/CHM/IMM/vision
  78. LIMSI-CNRS
  79. Photometric 3D Surface Texture Database
  80. SEQUENCES FOR OPTICAL FLOW ANALYSIS [SOFA]
    • [http://www.cee.hw.ac.uk/~mtc/sofa] - 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. [Formats: gif]
  81. Computer Vision Group
  82. Sequences for Flow Based Reconstruction
  83. Stereo Images with Ground Truth Disparity and Occlusion
    • [http://www-dbv.cs.uni-bonn.de/stereo_data/] - a small set of synthetic images of a hallway with varying amounts of noise added. Use these images to benchmark your stereo algorithm. [Formats: raw, viff [khoros], or tiff]
  84. Stuttgart Range Image Database
  85. Department Image Understanding
  86. The AR Face Database
  87. Purdue Robot Vision Lab
  88. The MIT-CSAIL Database of Objects and Scenes
    • [http://web.mit.edu/torralba/www/database.html] - Database for testing multiclass object detection and scene recognition algorithms. Over 72,000 images with 2873 annotated frames. More than 50 annotated object classes. [Formats: jpg]
  89. The RVL SPEC-DB [SPECularity DataBase]
    • [http://rvl1.ecn.purdue.edu/RVL/specularity_database/] - A collection of over 300 real images of 100 objects taken under three different illumination conditions [Diffuse/Ambient/Directed]. Use these images to test algorithms for detecting and compensating for specular highlights in color images. [Formats: TIFF]
  90. Robot Vision Laboratory
  91. The Xm2vts database
    • [http://xm2vtsdb.ee.surrey.ac.uk] - The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months. This database contains both image and video data of faces.
  92. Centre for Vision, Speech and Signal Processing
  93. Traffic Image Sequences and 'Marbled Block' Sequence
  94. IAKS/KOGS
  95. U Bern Face images
  96. U Michigan textures
  97. U Oulu wood and knots database
  98. UCID - an Uncompressed Colour Image Database
  99. UMass Vision Image Archive
  100. UNC's 3D image database
  101. USF Range Image Data with Segmentation Ground Truth
  102. University of Oulu Physics-based Face Database
  103. Machine Vision and Media Processing Unit
  104. University of Oulu Texture Database
    • [http://www.outex.oulu.fi] - Database of 320 surface textures, each captured under three illuminants, six spatial resolutions and nine rotation angles. A set of test suites is also provided so that texture segmentation, classification, and retrieval algorithms can be tested in a standard manner. [Formats: bmp, ras, xv]
  105. Machine Vision Group
  106. Usenix face database
  107. View Sphere Database
  108. PRIMA, GRAVIR
  109. Vision-list Imagery Archive
  110. Wiry Object Recognition Database
    • [http://www.cs.cmu.edu/~owenc/word.htm] - Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes with ground truth labelings of edges and regions. [Formats: jpg]
  111. 3D Vision Group
  112. Yale Face Database
  113. Yale Face Database B
  114. Center for Computational Vision and Control
  115. DeepMind QA Corpus
  116. YouTube-8M Dataset
    • [https://research.google.com/youtube8m/] - YouTube-8M is a large-scale labeled video dataset that consists of 8 million YouTube video IDs and associated labels from a diverse vocabulary of 4800 visual entities.
  117. Open Images dataset

This collection is not exhaustive; additions and suggestions are welcome. Please visit http://www.zhuanzhi.ai and follow the Zhuanzhi WeChat public account for first-hand AI-related knowledge.
