Deep Learning (深度学习): A Curated Collection (专知荟萃)

A branch of machine learning based on a family of algorithms that attempt to model high-level abstractions in data using multiple processing layers, composed of complex structures or multiple non-linear transformations.

Getting Started

  1. 《一天搞懂深度学习》 (Understand Deep Learning in One Day), 300-page slides by Hung-yi Lee, National Taiwan University
  2. Deep Learning study-notes series, parts 1-8
  3. Why Does Deep Learning Need to Be "Deep"? (Parts 1 and 2)
  4. 《神经网络与深度学习》 (Neural Networks and Deep Learning) by Xipeng Qiu, Chinese book, 2017
  5. Deep learning fundamentals, 206-page slides by Xipeng Qiu, Fudan University, August 17, 2017 [http://nlp.fudan.edu.cn/xpqiu/slides/20170817-CIPS-ATT-DL.pdf]
  6. 《Neural Networks and Deep Learning》 By Michael Nielsen / Aug 2017

Advanced Papers

Deep Belief Network (DBN) (the milestone on the eve of deep learning)

  1. Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural computation 18.7 (2006): 1527-1554.
  2. Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507.

ImageNet Evolution (Deep Learning broke out from here)

  1. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
  2. Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
  3. Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.
  4. He, Kaiming, et al. "Deep residual learning for image recognition." arXiv preprint arXiv:1512.03385 (2015).

Speech Recognition Evolution

  1. Hinton, Geoffrey, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups." IEEE Signal Processing Magazine 29.6 (2012): 82-97.
  2. Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.
  3. Graves, Alex, and Navdeep Jaitly. "Towards End-To-End Speech Recognition with Recurrent Neural Networks." ICML. Vol. 14. 2014.
  4. Sak, Haşim, et al. "Fast and accurate recurrent neural network acoustic models for speech recognition." arXiv preprint arXiv:1507.06947 (2015).
  5. W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig "Achieving Human Parity in Conversational Speech Recognition." arXiv preprint arXiv:1610.05256 (2016).

Model

  1. Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012).
  2. Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958.
  3. Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167 (2015). [http://arxiv.org/pdf/1502.03167] An outstanding work in 2015.
  4. Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).
  5. Courbariaux, Matthieu, et al. "Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to +1 or −1."
  6. Jaderberg, Max, et al. "Decoupled neural interfaces using synthetic gradients." arXiv preprint arXiv:1608.05343 (2016).
  7. Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. "Net2net: Accelerating learning via knowledge transfer." arXiv preprint arXiv:1511.05641 (2015).
  8. Wei, Tao, et al. "Network Morphism." arXiv preprint arXiv:1603.01670 (2016).
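
Item 2 above (Srivastava et al.) is the dropout paper. As a rough illustration of the idea, a minimal NumPy sketch of the common "inverted dropout" formulation (not the paper's own code): at train time each unit is zeroed at random and the survivors are rescaled so expected activations match test time.

```python
import numpy as np

def dropout(x, p_drop=0.5, train=True, rng=None):
    """Inverted dropout: at train time, zero each unit with probability
    p_drop and rescale the rest by 1/(1 - p_drop) so that the expected
    activation matches test time; at test time, identity."""
    if not train:
        return x
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p_drop   # keep-mask, True with prob 1 - p_drop
    return x * mask / (1.0 - p_drop)

x = np.ones((4, 8))
y = dropout(x, p_drop=0.5)
# surviving entries are rescaled to 2.0; dropped entries are 0.0
```

Because the rescaling happens at train time, inference needs no special handling, which is why this variant is the one most frameworks implement.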

Optimization

  1. Sutskever, Ilya, et al. "On the importance of initialization and momentum in deep learning." ICML (3) 28 (2013): 1139-1147.
  2. Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014).
  3. Andrychowicz, Marcin, et al. "Learning to learn by gradient descent by gradient descent." arXiv preprint arXiv:1606.04474 (2016).
  4. Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding." CoRR abs/1510.00149 (2015).
  5. Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size." arXiv preprint arXiv:1602.07360 (2016).
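
Kingma & Ba's Adam (item 2 above) is compactly described by its update rule. Below is a minimal NumPy sketch of one step with the paper's default hyperparameters; it is illustrative only, not a production optimizer.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (m)
    and squared gradient (v), bias-corrected, then a scaled step."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)          # bias correction (t starts at 1)
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# minimize f(x) = x^2 starting from x = 5
theta = np.array([5.0])
m = v = np.zeros_like(theta)
for t in range(1, 1001):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
# theta has moved close to the minimum at 0
```

Note the bias correction: without it, m and v are biased toward zero in early steps because they are initialized at zero.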

Unsupervised Learning / Deep Generative Model

  1. Le, Quoc V. "Building high-level features using large scale unsupervised learning." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013.

  2. Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013).

  3. Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014.

  4. Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).

  5. Gregor, Karol, et al. "DRAW: A recurrent neural network for image generation." arXiv preprint arXiv:1502.04623 (2015).

  6. Oord, Aaron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. "Pixel recurrent neural networks." arXiv preprint arXiv:1601.06759 (2016).

  7. Oord, Aaron van den, et al. "Conditional image generation with PixelCNN decoders." arXiv preprint arXiv:1606.05328 (2016).

RNN / Sequence-to-Sequence Model

  1. Graves, Alex. "Generating sequences with recurrent neural networks." arXiv preprint arXiv:1308.0850 (2013).
  2. Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
  3. Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014.
  4. Bahdanau, Dzmitry, KyungHyun Cho, and Yoshua Bengio. "Neural Machine Translation by Jointly Learning to Align and Translate." arXiv preprint arXiv:1409.0473 (2014).
  5. Vinyals, Oriol, and Quoc Le. "A neural conversational model." arXiv preprint arXiv:1506.05869 (2015).
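
Bahdanau et al. (item 4 above) add an attention mechanism to the encoder-decoder: a softmax over alignment scores, followed by a weighted sum of encoder states. A minimal NumPy sketch of that computation (using dot-product scores for brevity; the paper itself uses an additive MLP score):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_context(query, keys, values):
    """Score each encoder state against the decoder query, normalize the
    scores into a distribution, and return the weighted sum of values."""
    scores = keys @ query                   # one alignment score per source position
    weights = softmax(scores)               # attention weights, sum to 1
    context = weights @ values              # expected encoder state under the weights
    return context, weights

# toy example: the query points strongly at source position 0
keys = np.eye(3)
values = np.eye(3)
query = np.array([10.0, 0.0, 0.0])
context, weights = attention_context(query, keys, values)
# weights concentrate on position 0, so context is close to values[0]
```

The decoder recomputes this context vector at every output step, which is what lets it "align" to different source positions during translation.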

Neural Turing Machine

  1. Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural turing machines." arXiv preprint arXiv:1410.5401 (2014).
  2. Zaremba, Wojciech, and Ilya Sutskever. "Reinforcement learning neural Turing machines." arXiv preprint arXiv:1505.00521 (2015).
  3. Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." arXiv preprint arXiv:1410.3916 (2014).
  4. Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. "End-to-end memory networks." Advances in neural information processing systems. 2015.
  5. Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. "Pointer networks." Advances in Neural Information Processing Systems. 2015.
  6. Graves, Alex, et al. "Hybrid computing using a neural network with dynamic external memory." Nature (2016).

Deep Reinforcement Learning

  1. Mnih, Volodymyr, et al. "Playing atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602 (2013).
  2. Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533.
  3. Wang, Ziyu, Nando de Freitas, and Marc Lanctot. "Dueling network architectures for deep reinforcement learning." arXiv preprint arXiv:1511.06581 (2015).
  4. Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." arXiv preprint arXiv:1602.01783 (2016).
  5. Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." arXiv preprint arXiv:1509.02971 (2015).
  6. Gu, Shixiang, et al. "Continuous Deep Q-Learning with Model-based Acceleration." arXiv preprint arXiv:1603.00748 (2016). [http://arxiv.org/pdf/1603.00748] (NAF)
  7. Schulman, John, et al. "Trust region policy optimization." CoRR, abs/1502.05477 (2015).
  8. Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489.
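
The DQN papers above (items 1-4) scale up Q-learning by replacing the value table with a deep network; the underlying tabular update they build on is short enough to sketch directly:

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step: move Q[s, a] toward the bootstrapped
    target r + gamma * max_a' Q[s_next, a']. DQN approximates Q with a
    neural network and fits the same target by gradient descent."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
    return Q

# toy problem: taking action 1 in state 0 always yields reward 1 and
# lands in state 1, whose value stays 0 here
Q = np.zeros((2, 2))
for _ in range(200):
    q_update(Q, s=0, a=1, r=1.0, s_next=1)
# Q[0, 1] converges toward 1.0
```

The deep variants add the pieces the papers are about: experience replay and a target network (Mnih et al.), a dueling value/advantage decomposition (Wang et al.), and asynchronous actors (A3C).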

Deep Transfer Learning / Lifelong Learning (especially for RL)

  1. Bengio, Yoshua. "Deep Learning of Representations for Unsupervised and Transfer Learning." ICML Unsupervised and Transfer Learning 27 (2012): 17-36.
  2. Silver, Daniel L., Qiang Yang, and Lianghao Li. "Lifelong Machine Learning Systems: Beyond Learning Algorithms." AAAI Spring Symposium: Lifelong Machine Learning. 2013.
  3. Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015).
  4. Rusu, Andrei A., et al. "Policy distillation." arXiv preprint arXiv:1511.06295 (2015).
  5. Parisotto, Emilio, Jimmy Lei Ba, and Ruslan Salakhutdinov. "Actor-mimic: Deep multitask and transfer reinforcement learning." arXiv preprint arXiv:1511.06342 (2015).
  6. Rusu, Andrei A., et al. "Progressive neural networks." arXiv preprint arXiv:1606.04671 (2016).

One Shot Deep Learning

  1. Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. "Human-level concept learning through probabilistic program induction." Science 350.6266 (2015): 1332-1338.
  2. Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. "Siamese Neural Networks for One-shot Image Recognition." (2015)
  3. Santoro, Adam, et al. "One-shot Learning with Memory-Augmented Neural Networks." arXiv preprint arXiv:1605.06065 (2016).
  4. Vinyals, Oriol, et al. "Matching Networks for One Shot Learning." arXiv preprint arXiv:1606.04080 (2016).
  5. Hariharan, Bharath, and Ross Girshick. "Low-shot visual object recognition." arXiv preprint arXiv:1606.02819 (2016).

NLP (Natural Language Processing)

  1. Antoine Bordes, et al. "Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing." AISTATS(2012)
  2. Mikolov, et al. "Distributed representations of words and phrases and their compositionality." ANIPS(2013): 3111-3119
  3. Sutskever, et al. "Sequence to sequence learning with neural networks." ANIPS(2014)
  4. Ankit Kumar, et al. "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing." arXiv preprint arXiv:1506.07285(2015)
  5. Yoon Kim, et al. "Character-Aware Neural Language Models." NIPS(2015) arXiv preprint arXiv:1508.06615(2015)
  6. Jason Weston, et al. "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks." arXiv preprint arXiv:1502.05698(2015)
  7. Karl Moritz Hermann, et al. "Teaching Machines to Read and Comprehend." arXiv preprint arXiv:1506.03340(2015)
  8. Alexis Conneau, et al. "Very Deep Convolutional Networks for Natural Language Processing." arXiv preprint arXiv:1606.01781(2016)
  9. Armand Joulin, et al. "Bag of Tricks for Efficient Text Classification." arXiv preprint arXiv:1607.01759(2016)

Object Detection

  1. Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "Deep neural networks for object detection." Advances in Neural Information Processing Systems. 2013.
  2. Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2014.
  3. He, Kaiming, et al. "Spatial pyramid pooling in deep convolutional networks for visual recognition." European Conference on Computer Vision. Springer International Publishing, 2014.
  4. Girshick, Ross. "Fast r-cnn." Proceedings of the IEEE International Conference on Computer Vision. 2015.
  5. Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015.
  6. Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." arXiv preprint arXiv:1506.02640 (2015).
  7. Liu, Wei, et al. "SSD: Single Shot MultiBox Detector." arXiv preprint arXiv:1512.02325 (2015).
  8. Dai, Jifeng, et al. "R-FCN: Object Detection via Region-based Fully Convolutional Networks." arXiv preprint arXiv:1605.06409 (2016).
  9. He, Kaiming, et al. "Mask R-CNN." ICCV 2017 Best Paper (2017).

Visual Tracking

  1. Wang, Naiyan, and Dit-Yan Yeung. "Learning a deep compact image representation for visual tracking." Advances in neural information processing systems. 2013.
  2. Wang, Naiyan, et al. "Transferring rich feature hierarchies for robust visual tracking." arXiv preprint arXiv:1501.04587 (2015).
  3. Wang, Lijun, et al. "Visual tracking with fully convolutional networks." Proceedings of the IEEE International Conference on Computer Vision. 2015.
  4. Held, David, Sebastian Thrun, and Silvio Savarese. "Learning to Track at 100 FPS with Deep Regression Networks." arXiv preprint arXiv:1604.01802 (2016).
  5. Bertinetto, Luca, et al. "Fully-Convolutional Siamese Networks for Object Tracking." arXiv preprint arXiv:1606.09549 (2016).
  6. Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. "Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking." ECCV (2016)
  7. Nam, Hyeonseob, Mooyeol Baek, and Bohyung Han. "Modeling and Propagating CNNs in a Tree Structure for Visual Tracking." arXiv preprint arXiv:1608.07242 (2016).

Image Caption

  1. Farhadi, Ali, et al. "Every picture tells a story: Generating sentences from images". In Computer Vision-ECCV 2010. Springer Berlin Heidelberg: 15-29, 2010.
  2. Kulkarni, Girish, et al. "Baby talk: Understanding and generating image descriptions". In Proceedings of the 24th CVPR, 2011.
  3. Vinyals, Oriol, et al. "Show and tell: A neural image caption generator". In arXiv preprint arXiv:1411.4555, 2014.
  4. Donahue, Jeff, et al. "Long-term recurrent convolutional networks for visual recognition and description". In arXiv preprint arXiv:1411.4389 ,2014.
  5. Karpathy, Andrej, and Li Fei-Fei. "Deep visual-semantic alignments for generating image descriptions". In arXiv preprint arXiv:1412.2306, 2014.
  6. Karpathy, Andrej, Armand Joulin, and Fei Fei F. Li. "Deep fragment embeddings for bidirectional image sentence mapping". In Advances in neural information processing systems, 2014.
  7. Fang, Hao, et al. "From captions to visual concepts and back". In arXiv preprint arXiv:1411.4952, 2014.
  8. Chen, Xinlei, and C. Lawrence Zitnick. "Learning a recurrent visual representation for image caption generation". In arXiv preprint arXiv:1411.5654, 2014.
  9. Mao, Junhua, et al. "Deep captioning with multimodal recurrent neural networks (m-rnn)". In arXiv preprint arXiv:1412.6632, 2014.
  10. Xu, Kelvin, et al. "Show, attend and tell: Neural image caption generation with visual attention". In arXiv preprint arXiv:1502.03044, 2015.

Machine Translation

  1. Luong, Minh-Thang, et al. "Addressing the rare word problem in neural machine translation." arXiv preprint arXiv:1410.8206 (2014).
  2. Sennrich, et al. "Neural Machine Translation of Rare Words with Subword Units". In arXiv preprint arXiv:1508.07909, 2015.
  3. Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015).
  4. Chung, et al. "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation". In arXiv preprint arXiv:1603.06147, 2016.
  5. Lee, et al. "Fully Character-Level Neural Machine Translation without Explicit Segmentation". In arXiv preprint arXiv:1610.03017, 2016.
  6. Wu, Schuster, Chen, Le, et al. "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". In arXiv preprint arXiv:1609.08144v2, 2016.
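
Sennrich et al. (item 2 above) adapt byte-pair encoding to build subword units for rare words; the core loop repeatedly merges the most frequent adjacent symbol pair in the vocabulary. A toy sketch of one merge step (illustrative data, not the paper's implementation):

```python
from collections import Counter

def most_frequent_pair(vocab):
    """Count adjacent symbol pairs across a vocabulary mapping each word
    (as a tuple of symbols) to its corpus frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        for i in range(len(word) - 1):
            pairs[word[i], word[i + 1]] += freq
    return pairs.most_common(1)[0][0]

def merge_pair(vocab, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = {}
    for word, freq in vocab.items():
        out, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and (word[i], word[i + 1]) == pair:
                out.append(word[i] + word[i + 1])
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

vocab = {("l", "o", "w"): 5, ("l", "o", "w", "e", "r"): 2, ("l", "o", "g"): 1}
pair = most_frequent_pair(vocab)       # ("l", "o"), seen 8 times
vocab = merge_pair(vocab, pair)        # "lo" is now a single symbol
```

Repeating this for a fixed number of merges yields a subword vocabulary in which frequent words stay whole and rare words decompose into known pieces.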

Robotics

  1. Koutník, Jan, et al. "Evolving large-scale neural networks for vision-based reinforcement learning." Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013.
  2. Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." Journal of Machine Learning Research 17.39 (2016): 1-40.
  3. Pinto, Lerrel, and Abhinav Gupta. "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours." arXiv preprint arXiv:1509.06825 (2015).
  4. Zhu, Yuke, et al. "Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning." arXiv preprint arXiv:1609.05143 (2016).
  5. Yahya, Ali, et al. "Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search." arXiv preprint arXiv:1610.00673 (2016).
  6. Gu, Shixiang, et al. "Deep Reinforcement Learning for Robotic Manipulation." arXiv preprint arXiv:1610.00633 (2016).
  7. A Rusu, M Vecerik, Thomas Rothörl, N Heess, R Pascanu, R Hadsell."Sim-to-Real Robot Learning from Pixels with Progressive Nets." arXiv preprint arXiv:1610.04286 (2016).
  8. Mirowski, Piotr, et al. "Learning to navigate in complex environments." arXiv preprint arXiv:1611.03673 (2016).

Object Segmentation

  1. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation.” in CVPR, 2015.
  2. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. "Semantic image segmentation with deep convolutional nets and fully connected crfs." In ICLR, 2015.
  3. Pinheiro, P.O., Collobert, R., Dollar, P. "Learning to segment object candidates." In: NIPS. 2015.
  4. Dai, J., He, K., Sun, J. "Instance-aware semantic segmentation via multi-task network cascades." in CVPR. 2016
  5. Dai, J., He, K., Sun, J. "Instance-sensitive Fully Convolutional Networks." arXiv preprint arXiv:1603.08678 (2016).

Surveys

  1. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444. (Three Giants' Survey)
  2. Bengio, Yoshua, Aaron Courville, and Pascal Vincent. "Representation Learning: A Review and New Perspectives." arXiv preprint (2012).

Tutorial

  1. UFLDL Tutorial 1
  2. UFLDL Tutorial 2
  3. Deep Learning for NLP (without Magic)
  4. A Deep Learning Tutorial: From Perceptrons to Deep Networks
  5. Deep Learning from the Bottom up
  6. Theano Tutorial
  7. Neural Networks for Matlab
  8. Using convolutional neural nets to detect facial keypoints tutorial
  9. Pytorch Tutorials
  10. The Best Machine Learning Tutorials On The Web
  11. VGG Convolutional Neural Networks Practical
  12. TensorFlow tutorials
  13. More TensorFlow tutorials
  14. TensorFlow Python Notebooks
  15. Keras and Lasagne Deep Learning Tutorials
  16. Classification on raw time series in TensorFlow with a LSTM RNN
  17. TensorFlow-World
  18. Deep Learning NIPS’2015 Tutorial, Geoff Hinton, Yoshua Bengio & Yann LeCun (jointly presented by the three giants of deep learning)

Video Tutorials

Courses

  1. Machine Learning - Stanford
  2. Machine Learning - Caltech
  3. Machine Learning - Carnegie Mellon
  4. Neural Networks for Machine Learning
  5. Neural networks class
  6. Deep Learning Course
  7. A.I - Berkeley
  8. A.I - MIT
  9. Vision and learning - computers and brains
  10. Convolutional Neural Networks for Visual Recognition - Stanford
  11. Convolutional Neural Networks for Visual Recognition - Stanford
  12. Deep Learning for Natural Language Processing - Stanford
  13. Neural Networks - usherbrooke
  14. Machine Learning - Oxford
  15. Deep Learning - Nvidia
  16. Graduate Summer School: Deep Learning, Feature Learning
  17. Deep Learning - Udacity/Google
  18. Deep Learning - UWaterloo
  19. Statistical Machine Learning - CMU
  20. Deep Learning Course
  21. Bay area DL school
    • [http://www.bayareadlschool.org/] by Andrew Ng, Yoshua Bengio, Samy Bengio, Andrej Karpathy, Richard Socher, Hugo Larochelle and many others @ Stanford, CA (2016)
  22. Designing, Visualizing and Understanding Deep Neural Networks-UC Berkeley
  23. UVA Deep Learning Course
  24. MIT 6.S094: Deep Learning for Self-Driving Cars
  25. MIT 6.S191: Introduction to Deep Learning
  26. Berkeley CS 294: Deep Reinforcement Learning
  27. Keras in Motion video course
  28. Practical Deep Learning For Coders

Videos and Lectures

  1. How To Create A Mind
  2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning
  3. Recent Developments in Deep Learning
  4. The Unreasonable Effectiveness of Deep Learning
  5. Deep Learning of Representations
  6. Principles of Hierarchical Temporal Memory
  7. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab
  8. Making Sense of the World with Deep Learning
  9. Demystifying Unsupervised Feature Learning
  10. Visual Perception with Deep Learning
  11. The Next Generation of Neural Networks
  12. The wonderful and terrifying implications of computers that can learn
  13. Unsupervised Deep Learning - Stanford
  14. Natural Language Processing
  15. A beginners Guide to Deep Neural Networks
  16. Deep Learning: Intelligence from Big Data
  17. Introduction to Artificial Neural Networks and Deep Learning
  18. NIPS 2016 lecture and workshop videos

Code

  1. Caffe
  2. Torch7
  3. Theano
  4. cuda-convnet
  5. convetjs
  6. Ccv
  7. NuPIC - [http://numenta.org/nupic.html]
  8. DeepLearning4J
  9. Brain
  10. DeepLearnToolbox
  11. Deepnet
  12. Deeppy - [https://github.com/andersbll/deeppy]
  13. JavaNN
  14. hebel
  15. Mocha.jl
  16. OpenDL
  17. cuDNN
  18. MGL
  19. Knet.jl
  20. Nvidia DIGITS - a web app based on Caffe
  21. Neon - Python based Deep Learning Framework
  22. Keras - Theano based Deep Learning Library
  23. Chainer - A flexible framework of neural networks for deep learning
  24. RNNLM Toolkit
  25. RNNLIB - A recurrent neural network library
  26. char-rnn
  27. MatConvNet: CNNs for MATLAB
  28. Minerva - a fast and flexible tool for deep learning on multi-GPU
  29. Brainstorm - Fast, flexible and fun neural networks.
  30. Tensorflow - Open source software library for numerical computation using data flow graphs
  31. DMTK - Microsoft Distributed Machine Learning Tookit
  32. Scikit Flow - Simplified interface for TensorFlow [mimicking Scikit Learn]
  33. MXnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning framework
  34. Veles - Samsung Distributed machine learning platform
  35. Marvin - A Minimalist GPU-only N-Dimensional ConvNets Framework [https://github.com/PrincetonVision/marvin]
  37. Apache SINGA - A General Distributed Deep Learning Platform
  38. DSSTNE - Amazon's library for building Deep Learning models
  39. SyntaxNet - Google's syntactic parser - A TensorFlow dependency library
  40. mlpack - A scalable Machine Learning library
  41. Torchnet - Torch based Deep Learning Library
  42. Paddle - PArallel Distributed Deep LEarning by Baidu
  43. NeuPy - Theano based Python library for ANN and Deep Learning
  44. Lasagne - a lightweight library to build and train neural networks in Theano
  45. nolearn - wrappers and abstractions around existing neural network libraries, most notably Lasagne [https://github.com/dnouri/nolearn]
  47. Sonnet - a library for constructing neural networks by Google's DeepMind
  48. PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
  49. CNTK - Microsoft Cognitive Toolkit

Researchers

  50. Aaron Courville [http://aaroncourville.wordpress.com]

  51. Abdel-rahman Mohamed [http://www.cs.toronto.edu/~asamir/]

  52. Adam Coates [http://cs.stanford.edu/~acoates/]

  53. Alex Acero [http://research.microsoft.com/en-us/people/alexac/]

  54. Alex Krizhevsky [http://www.cs.utoronto.ca/~kriz/index.html]

  55. Alexander Ilin [http://users.ics.aalto.fi/alexilin/]

  56. Amos Storkey [http://homepages.inf.ed.ac.uk/amos/]

  57. Andrej Karpathy [http://cs.stanford.edu/~karpathy/]

  58. Andrew M. Saxe [http://www.stanford.edu/~asaxe/]

  59. Andrew Ng [http://www.cs.stanford.edu/people/ang/]

  60. Andrew W. Senior [http://research.google.com/pubs/author37792.html]

  61. Andriy Mnih [http://www.gatsby.ucl.ac.uk/~amnih/]

  62. Ayse Naz Erkan [http://www.cs.nyu.edu/~naz/]

  63. Benjamin Schrauwen [http://reslab.elis.ugent.be/benjamin]

  64. Bernardete Ribeiro [https://www.cisuc.uc.pt/people/show/2020]

  65. Bo David Chen [http://vision.caltech.edu/~bchen3/Site/Bo_David_Chen.html]

  66. Boureau Y-Lan [http://cs.nyu.edu/~ylan/]

  67. Brian Kingsbury [http://researcher.watson.ibm.com/researcher/view.php?person=us-bedk]

  68. Christopher Manning [http://nlp.stanford.edu/~manning/]

  69. Clement Farabet [http://www.clement.farabet.net/]

  70. Dan Claudiu Cireșan [http://www.idsia.ch/~ciresan/]

  71. David Reichert [http://serre-lab.clps.brown.edu/person/david-reichert/]

  72. Derek Rose [http://mil.engr.utk.edu/nmil/member/5.html]

  73. Dong Yu [http://research.microsoft.com/en-us/people/dongyu/default.aspx]

  74. Drausin Wulsin [http://www.seas.upenn.edu/~wulsin/]

  75. Erik M. Schmidt [http://music.ece.drexel.edu/people/eschmidt]

  76. Eugenio Culurciello [https://engineering.purdue.edu/BME/People/viewPersonById?resource_id=71333]

  77. Frank Seide [http://research.microsoft.com/en-us/people/fseide/]

  78. Galen Andrew [http://homes.cs.washington.edu/~galen/]

  79. Geoffrey Hinton [http://www.cs.toronto.edu/~hinton/]

  80. George Dahl [http://www.cs.toronto.edu/~gdahl/]

  81. Graham Taylor [http://www.uoguelph.ca/~gwtaylor/]

  82. Grégoire Montavon [http://gregoire.montavon.name/]

  83. Guido Francisco Montúfar [http://personal-homepages.mis.mpg.de/montufar/]

  84. Guillaume Desjardins [http://brainlogging.wordpress.com/]

  85. Hannes Schulz [http://www.ais.uni-bonn.de/~schulz/]

  86. Hélène Paugam-Moisy [http://www.lri.fr/~hpaugam/]

  87. Honglak Lee [http://web.eecs.umich.edu/~honglak/]

  88. Hugo Larochelle [http://www.dmi.usherb.ca/~larocheh/index_en.html]

  89. Ilya Sutskever [http://www.cs.toronto.edu/~ilya/]

  90. Itamar Arel [http://mil.engr.utk.edu/nmil/member/2.html]

  91. James Martens [http://www.cs.toronto.edu/~jmartens/]

  92. Jason Morton [http://www.jasonmorton.com/]

  93. Jason Weston [http://www.thespermwhale.com/jaseweston/]

  94. Jeff Dean [http://research.google.com/pubs/jeff.html]

  95. Jiquan Mgiam [http://cs.stanford.edu/~jngiam/]

  96. Joseph Turian [http://www-etud.iro.umontreal.ca/~turian/]

  97. Joshua Matthew Susskind [http://aclab.ca/users/josh/index.html]

  98. Jürgen Schmidhuber [http://www.idsia.ch/~juergen/]

  99. Justin A. Blanco [https://sites.google.com/site/blancousna/]

  100. Koray Kavukcuoglu [http://koray.kavukcuoglu.org/]

  101. KyungHyun Cho [http://users.ics.aalto.fi/kcho/]

  102. Li Deng [http://research.microsoft.com/en-us/people/deng/]

  103. Lucas Theis [http://www.kyb.tuebingen.mpg.de/nc/employee/details/lucas.html]

  104. Ludovic Arnold [http://ludovicarnold.altervista.org/home/]

  105. Marc'Aurelio Ranzato [http://www.cs.nyu.edu/~ranzato/]

  106. Martin Längkvist [http://aass.oru.se/~mlt/]

  107. Misha Denil [http://mdenil.com/]

  108. Mohammad Norouzi [http://www.cs.toronto.edu/~norouzi/]

  109. Nando de Freitas [http://www.cs.ubc.ca/~nando/]

  110. Navdeep Jaitly [http://www.cs.utoronto.ca/~ndjaitly/]

  111. Nicolas Le Roux [http://nicolas.le-roux.name/]

  112. Nitish Srivastava [http://www.cs.toronto.edu/~nitish/]

  113. Noel Lopes [https://www.cisuc.uc.pt/people/show/2028]

  114. Oriol Vinyals [http://www.cs.berkeley.edu/~vinyals/]

  115. Pascal Vincent [http://www.iro.umontreal.ca/~vincentp]

  116. Patrick Nguyen [https://sites.google.com/site/drpngx/]

  117. Pedro Domingos [http://homes.cs.washington.edu/~pedrod/]

  118. Peggy Series [http://homepages.inf.ed.ac.uk/pseries/]

  119. Pierre Sermanet [http://cs.nyu.edu/~sermanet]

  120. Piotr Mirowski [http://www.cs.nyu.edu/~mirowski/]

  121. Quoc V. Le [http://ai.stanford.edu/~quocle/]

  122. Reinhold Scherer [http://bci.tugraz.at/scherer/]

  123. Richard Socher [http://www.socher.org/]

  124. Rob Fergus [http://cs.nyu.edu/~fergus/pmwiki/pmwiki.php]

  125. Robert Coop [http://mil.engr.utk.edu/nmil/member/19.html]

  126. Robert Gens [http://homes.cs.washington.edu/~rcg/]

  127. Roger Grosse [http://people.csail.mit.edu/rgrosse/]

  128. Ronan Collobert [http://ronan.collobert.com/]

  129. Ruslan Salakhutdinov [http://www.utstat.toronto.edu/~rsalakhu/]

  130. Sebastian Gerwinn [http://www.kyb.tuebingen.mpg.de/nc/employee/details/sgerwinn.html]

  131. Stéphane Mallat [http://www.cmap.polytechnique.fr/~mallat/]

  132. Sven Behnke [http://www.ais.uni-bonn.de/behnke/]

  133. Tapani Raiko [http://users.ics.aalto.fi/praiko/]

  134. Tara Sainath [https://sites.google.com/site/tsainath/]

  135. Tijmen Tieleman [http://www.cs.toronto.edu/~tijmen/]

  136. Tom Karnowski [http://mil.engr.utk.edu/nmil/member/36.html]

  137. Tomáš Mikolov [https://research.facebook.com/tomas-mikolov]

  138. Ueli Meier [http://www.idsia.ch/~meier/]

  139. Vincent Vanhoucke [http://vincent.vanhoucke.com]

  140. Volodymyr Mnih [http://www.cs.toronto.edu/~vmnih/]

  141. Yann LeCun [http://yann.lecun.com/]

  142. Yichuan Tang [http://www.cs.toronto.edu/~tang/]

  143. Yoshua Bengio [http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html]

  144. Yotaro Kubo [http://yota.ro/]

  145. Youzhi [Will] Zou [http://ai.stanford.edu/~wzou]

  146. Fei-Fei Li [http://vision.stanford.edu/feifeili]

  147. Ian Goodfellow [https://research.google.com/pubs/105214.html]

  148. Robert Laganière [http://www.site.uottawa.ca/~laganier/]

Important Websites

  1. deeplearning.net [http://deeplearning.net/]
  2. deeplearning.stanford.edu [http://deeplearning.stanford.edu/]
  3. nlp.stanford.edu [http://nlp.stanford.edu/]
  4. ai-junkie.com [http://www.ai-junkie.com/ann/evolved/nnt1.html]
  5. cs.brown.edu/research/ai [http://cs.brown.edu/research/ai/]
  6. eecs.umich.edu/ai [http://www.eecs.umich.edu/ai/]
  7. cs.utexas.edu/users/ai-lab [http://www.cs.utexas.edu/users/ai-lab/]
  8. cs.washington.edu/research/ai [http://www.cs.washington.edu/research/ai/]
  9. aiai.ed.ac.uk [http://www.aiai.ed.ac.uk/]
  10. www-aig.jpl.nasa.gov [http://www-aig.jpl.nasa.gov/]
  11. csail.mit.edu [http://www.csail.mit.edu/]
  12. cgi.cse.unsw.edu.au/~aishare [http://cgi.cse.unsw.edu.au/~aishare/]
  13. cs.rochester.edu/research/ai [http://www.cs.rochester.edu/research/ai/]
  14. ai.sri.com [http://www.ai.sri.com/]
  15. isi.edu/AI/isd.htm [http://www.isi.edu/AI/isd.htm]
  16. nrl.navy.mil/itd/aic [http://www.nrl.navy.mil/itd/aic/]
  17. hips.seas.harvard.edu [http://hips.seas.harvard.edu/]
  18. AI Weekly [http://aiweekly.co]
  19. stat.ucla.edu [http://www.stat.ucla.edu/~junhua.mao/m-RNN.html]
  20. deeplearning.cs.toronto.edu [http://deeplearning.cs.toronto.edu/i2t]
  21. jeffdonahue.com/lrcn/ [http://jeffdonahue.com/lrcn/]
  22. visualqa.org [http://www.visualqa.org/]
  23. www.mpi-inf.mpg.de/departments/computer-vision... [https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/]
  24. Deep Learning News [http://news.startup.ml/]
  25. Machine Learning is Fun! Adam Geitgey's Blog [https://medium.com/@ageitgey/]

Free Online Books

  1. Deep Learning [http://www.iro.umontreal.ca/~bengioy/dlbook/] by Yoshua Bengio, Ian Goodfellow and Aaron Courville [05/07/2015]; Chinese edition: [https://github.com/exacity/deeplearningbook-chinese]
  2. Neural Networks and Deep Learning [http://neuralnetworksanddeeplearning.com/] by Michael Nielsen [Dec 2014]
  3. Deep Learning [http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf] by Microsoft Research [2013]
  4. Deep Learning Tutorial [http://deeplearning.net/tutorial/deeplearning.pdf] by LISA lab, University of Montreal [Jan 6 2015]
  5. neuraltalk [https://github.com/karpathy/neuraltalk] by Andrej Karpathy: numpy-based RNN/LSTM implementation
  6. An introduction to genetic algorithms [https://svn-d1.mpi-inf.mpg.de/AG1/MultiCoreLab/papers/ebook-fuzzy-mitchell-99.pdf]
  7. Artificial Intelligence: A Modern Approach [http://aima.cs.berkeley.edu/]
  8. Deep Learning in Neural Networks: An Overview [http://arxiv.org/pdf/1404.7828v4.pdf]

Datasets

  1. MNIST [http://yann.lecun.com/exdb/mnist/] - Handwritten digits
  2. Google House Numbers [http://ufldl.stanford.edu/housenumbers/] - from street view
  3. CIFAR-10 and CIFAR-100 [http://www.cs.toronto.edu/~kriz/cifar.html]
  4. ImageNet [http://www.image-net.org/]
  5. Tiny Images [http://groups.csail.mit.edu/vision/TinyImages/] - 80 million tiny images
  6. Flickr Data [https://yahooresearch.tumblr.com/post/89783581601/one-hundred-million-creative-commons-flickr-images] - 100 million images (Yahoo dataset)
  7. Berkeley Segmentation Dataset 500 [http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/]
  8. UC Irvine Machine Learning Repository [http://archive.ics.uci.edu/ml/]
  9. Flickr 8k [http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html]
  10. Flickr 30k [http://shannon.cs.illinois.edu/DenotationGraph/]
  11. Microsoft COCO [http://mscoco.org/home/]
  12. VQA [http://www.visualqa.org/]
  13. Image QA [http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/]
  14. AT&T Laboratories Cambridge face database [http://www.uk.research.att.com/facedatabase.html]
  15. AVHRR Pathfinder [http://xtreme.gsfc.nasa.gov]
  16. Air Freight [http://www.anc.ed.ac.uk/~amos/afreightdata.html] - A ray-traced image sequence along with ground-truth segmentation based on textural characteristics. [455 images + GT, each 160x120 pixels] [Formats: PNG]
  17. Amsterdam Library of Object Images [http://www.science.uva.nl/~aloi/] - ALOI is a color image collection of one thousand small objects, recorded for scientific purposes. To capture the sensory variation in object recordings, viewing angle, illumination angle, and illumination color were systematically varied for each object, and wide-baseline stereo images were additionally captured, yielding over a hundred images per object and 110,250 images in total. [Formats: png]
  18. Annotated face, hand, cardiac & meat images [http://www.imm.dtu.dk/~aam/] - Most images and annotations are supplemented by various ASM/AAM analyses using the AAM-API. [Formats: bmp, asf]
  19. Image Analysis and Computer Graphics [http://www.imm.dtu.dk/image/]
  20. Brown University Stimuli [http://www.cog.brown.edu/~tarr/stimuli.html] - A variety of datasets including geons, objects, and "greebles". Good for testing recognition algorithms. [Formats: pict]
  21. CAVIAR video sequences of mall and public space behavior [http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/] - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification. [Formats: MPEG2 & JPEG]
  22. Machine Vision Unit [http://www.ipab.inf.ed.ac.uk/mvu/]
  23. CCITT Fax standard images [http://www.cs.waikato.ac.nz/~singlis/ccitt.html] - 8 images [Formats: gif]
  24. CMU CIL's Stereo Data with Ground Truth [cil-ster.html] - 3 sets of 11 images, including color tiff images with spectroradiometry [Formats: gif, tiff]
  25. CMU PIE Database [http://www.ri.cmu.edu/projects/project_418.html] - A database of 41,368 face images of 68 people captured under 13 poses, 43 illumination conditions, and 4 different expressions.
  26. CMU VASC Image Database [http://www.ius.cs.cmu.edu/idb/] - Images, sequences, stereo pairs [thousands of images] [Formats: Sun Rasterimage]
  27. Caltech Image Database [http://www.vision.caltech.edu/html-files/archive.html] - About 20 images, mostly top-down views of small objects and toys. [Formats: GIF]
  28. Columbia-Utrecht Reflectance and Texture Database [http://www.cs.columbia.edu/CAVE/curet/] - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. [Formats: bmp]
  29. Computational Colour Constancy Data [http://www.cs.sfu.ca/~colour/data/index.html] - A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. [Formats: tiff]
  30. Computational Vision Lab [http://www.cs.sfu.ca/~colour/]
  31. Content-based image retrieval database [http://www.cs.washington.edu/research/imagedatabase/groundtruth/] - 11 sets of color images for testing content-based retrieval algorithms. Most sets have a description file with names of objects in each image. [Formats: jpg]
  32. Efficient Content-based Retrieval Group [http://www.cs.washington.edu/research/imagedatabase/]
  33. Densely Sampled View Spheres [http://ls7-www.cs.uni-dortmund.de/~peters/pages/research/modeladaptsys/modeladaptsys_vba_rov.html] - Upper half of the view sphere of two toy objects, with 2500 images each. [Formats: tiff]
  34. Computer Science VII [Graphical Systems] [http://ls7-www.cs.uni-dortmund.de/]
  35. Digital Embryos [https://web-beta.archive.org/web/20011216051535/vision.psych.umn.edu/www/kersten-lab/demos/digitalembryo.html] - Digital embryos are novel objects with an organic appearance that may be used to develop and test object recognition systems. [Formats: various formats are available on request]
  36. University of Minnesota Vision Lab [http://vision.psych.umn.edu/www/kersten-lab/kersten-lab.html]
  37. El Salvador Atlas of Gastrointestinal VideoEndoscopy [http://www.gastrointestinalatlas.com] - High-resolution images and videos from gastrointestinal video endoscopy studies. [Formats: jpg, mpg, gif]
  38. FG-NET Facial Aging Database [http://sting.cycollege.ac.cy/~alanitis/fgnetaging/index.htm] - Contains 1002 face images showing subjects at different ages. [Formats: jpg]
  39. FVC2000 Fingerprint Databases [http://bias.csr.unibo.it/fvc2000/] - FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark [3520 fingerprints in all].
  40. Biometric Systems Lab [http://bias.csr.unibo.it/research/biolab] - University of Bologna
  41. Face and Gesture images and image sequences [http://www.fg-net.org] - Several image datasets of faces and gestures with ground-truth annotations for benchmarking.
  42. German Fingerspelling Database [http://www-i6.informatik.rwth-aachen.de/~dreuw/database.html] - Contains 35 gestures and 1400 image sequences of 20 different persons, recorded under non-uniform daylight lighting conditions. [Formats: mpg, jpg]
  43. Language Processing and Pattern Recognition [http://www-i6.informatik.rwth-aachen.de/]
  44. Groningen Natural Image Database [http://hlab.phys.rug.nl/archive.html] - 4000+ calibrated 1536x1024 [16-bit] outdoor images [Formats: homebrew]
  45. ICG Testhouse sequence [http://www.icg.tu-graz.ac.at/~schindler/Data] - 2 turntable sequences from different viewing heights, 36 images each, resolution 1000x750, color [Formats: PPM]
  46. Institute of Computer Graphics and Vision [http://www.icg.tu-graz.ac.at]
  47. IEN Image Library [http://www.ien.it/is/vislib/] - 1000+ images, mostly outdoor sequences [Formats: raw, ppm]
  48. INRIA's Syntim images database [http://www-rocq.inria.fr/~tarel/syntim/images.html] - 15 color images of simple objects [Formats: gif]
  49. INRIA [http://www.inria.fr/]
  50. INRIA's Syntim stereo databases [http://www-rocq.inria.fr/~tarel/syntim/paires.html] - 34 calibrated color stereo pairs [Formats: gif]
  51. Image Analysis Laboratory [http://www.ece.ncsu.edu/imaging/Archives/ImageDataBase/index.html] - Images from a variety of imaging modalities: raw CFA images, range images, and a host of "medical images". [Formats: homebrew]
  52. Image Analysis Laboratory [http://www.ece.ncsu.edu/imaging]
  53. Image Database [http://www.prip.tuwien.ac.at/prip/image.html] - An image database including some textures
  54. JAFFE Facial Expression Image Database [http://www.mis.atr.co.jp/~mlyons/jaffe.html] - 213 images of Japanese female subjects posing 6 basic facial expressions as well as a neutral pose. Ratings on emotion adjectives are also available, free of charge, for research purposes. [Formats: TIFF grayscale]
  55. ATR Research, Kyoto, Japan [http://www.mic.atr.co.jp/]
  56. JISCT Stereo Evaluation [ftp://ftp.vislist.com/IMAGERY/JISCT/] - 44 image pairs, used in the stereo-analysis evaluation described in the April 1993 ARPA Image Understanding Workshop paper "The JISCT Stereo Evaluation" by R. C. Bolles, H. H. Baker, and M. J. Hannah, pp. 263-274. [Formats: SSI]
  57. MIT Vision Texture [http://www-white.media.mit.edu/vismod/imagery/VisionTexture/vistex.html] - Image archive [100+ images] [Formats: ppm]
  58. MIT face images and more [ftp://whitechapel.media.mit.edu/pub/images] - Hundreds of images [Formats: homebrew]
  59. Machine Vision [http://vision.cse.psu.edu/book/testbed/images/] - Images from the textbook by Jain, Kasturi, and Schunck [20+ images] [Formats: GIF, TIFF]
  60. Mammography Image Databases [http://marathon.csee.usf.edu/Mammography/Database.html] - 100 or more images of mammograms with ground truth. Additional images available by request, and links to several other mammography databases are provided. [Formats: homebrew]
  61. ftp://ftp.cps.msu.edu/pub/prip [ftp://ftp.cps.msu.edu/pub/prip] - Many images [Formats: unknown]
  62. Middlebury Stereo Data Sets with Ground Truth [http://www.middlebury.edu/stereo/data.html] - Six multi-frame stereo data sets of scenes containing planar regions. Each data set contains 9 color images and subpixel-accuracy ground-truth data. [Formats: ppm]
  63. Middlebury Stereo Vision Research Page [http://www.middlebury.edu/stereo] - Middlebury College
  64. MODIS Airborne Simulator, gallery and data set [http://ltpwww.gsfc.nasa.gov/MODIS/MAS/] - High-altitude imagery from around the world for environmental modeling in support of the NASA EOS program [Formats: JPG and HDF]
  65. NIST Fingerprint and handwriting [ftp://sequoyah.ncsl.nist.gov/pub/databases/data] - Datasets with thousands of images [Formats: unknown]
  66. NIST Fingerprint data [ftp://ftp.cs.columbia.edu/jpeg/other/uuencoded] - Compressed multipart uuencoded tar file
  67. NLM HyperDoc Visible Human Project [http://www.nlm.nih.gov/research/visible/visible_human.html] - Color, CAT, and MRI image samples; over 30 images [Formats: jpeg]
  68. National Design Repository [http://www.designrepository.org] - Over 55,000 3D CAD and solid models of [mostly] mechanical/machined engineering designs. [Formats: gif, vrml, wrl, stp, sat]
  69. Geometric & Intelligent Computing Laboratory [http://gicl.mcs.drexel.edu]
  70. OSU [MSU] 3D Object Model Database [http://eewww.eng.ohio-state.edu/~flynn/3DDB/Models/] - Several sets of 3D object models collected over several years for use in object recognition research [Formats: homebrew, vrml]
  71. OSU [MSU/WSU] Range Image Database [http://eewww.eng.ohio-state.edu/~flynn/3DDB/RID/] - Hundreds of real and synthetic images [Formats: gif, homebrew]
  72. OSU/SAMPL Database: Range Images, 3D Models, Stills, Motion Sequences [http://sampl.eng.ohio-state.edu/~sampl/database.htm] - Over 1000 range images, 3D object models, still images, and motion sequences [Formats: gif, ppm, vrml, homebrew]
  73. Signal Analysis and Machine Perception Laboratory [http://sampl.eng.ohio-state.edu]
  74. Otago Optical Flow Evaluation Sequences [http://www.cs.otago.ac.nz/research/vision/Research/OpticalFlow/opticalflow.html] - Synthetic and real sequences with machine-readable ground-truth optical flow fields, plus tools to generate ground truth for new sequences. [Formats: ppm, tif, homebrew]
  75. Vision Research Group [http://www.cs.otago.ac.nz/research/vision/index.html]
  76. ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/ [ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/] - Real and synthetic image sequences used for testing a Particle Image Velocimetry application. These images may be used to test optical flow and image matching algorithms. [Formats: pgm [raw]]
  77. LIMSI-CNRS/CHM/IMM/vision [http://www.limsi.fr/Recherche/IMM/PageIMM.html]
  78. LIMSI-CNRS [http://www.limsi.fr/]
  79. Photometric 3D Surface Texture Database [http://www.taurusstudio.net/research/pmtexdb/index.htm] - The first 3D texture database providing both full real surface rotations and registered photometric stereo data [30 textures, 1680 images]. [Formats: TIFF]
  80. Sequences for Optical Flow Analysis [SOFA] [http://www.cee.hw.ac.uk/~mtc/sofa] - 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. [Formats: gif]
  81. Computer Vision Group [http://www.cee.hw.ac.uk/~mtc/research.html]
  82. Sequences for Flow Based Reconstruction [http://www.nada.kth.se/~zucch/CAMERA/PUB/seq.html] - Synthetic sequence for testing structure-from-motion algorithms [Formats: pgm]
  83. Stereo Images with Ground Truth Disparity and Occlusion [http://www-dbv.cs.uni-bonn.de/stereo_data/] - A small set of synthetic images of a hallway with varying amounts of noise added. Use these images to benchmark your stereo algorithm. [Formats: raw, viff [khoros], or tiff]
  84. Stuttgart Range Image Database [http://range.informatik.uni-stuttgart.de] - A collection of synthetic range images taken from high-resolution polygonal models available on the web [Formats: homebrew]
  85. Department Image Understanding [http://www.informatik.uni-stuttgart.de/ipvr/bv/bv_home_engl.html]
  86. The AR Face Database [http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html] - Over 4,000 color images of 126 people's faces [70 men and 56 women]: frontal views with variations in facial expression, illumination, and occlusion. [Formats: RAW [RGB 24-bit]]
  87. Purdue Robot Vision Lab [http://rvl.www.ecn.purdue.edu/RVL/]
  88. The MIT-CSAIL Database of Objects and Scenes [http://web.mit.edu/torralba/www/database.html] - Database for testing multiclass object detection and scene recognition algorithms. Over 72,000 images with 2873 annotated frames and more than 50 annotated object classes. [Formats: jpg]
  89. The RVL SPEC-DB [SPECularity DataBase] [http://rvl1.ecn.purdue.edu/RVL/specularity_database/] - Over 300 real images of 100 objects taken under three different illumination conditions [Diffuse/Ambient/Directed]. Use these images to test algorithms for detecting and compensating for specular highlights in color images. [Formats: TIFF]
  90. Robot Vision Laboratory [http://rvl1.ecn.purdue.edu/RVL/]
  91. The XM2VTS database [http://xm2vtsdb.ee.surrey.ac.uk] - The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months, with both image and video data of faces.
  92. Centre for Vision, Speech and Signal Processing [http://www.ee.surrey.ac.uk/Research/CVSSP]
  93. Traffic Image Sequences and 'Marbled Block' Sequence [http://i21www.ira.uka.de/image_sequences] - Thousands of frames of digitized traffic image sequences, as well as the 'Marbled Block' sequence [grayscale images] [Formats: GIF]
  94. IAKS/KOGS [http://i21www.ira.uka.de]
  95. U Bern Face images [ftp://ftp.iam.unibe.ch/pub/Images/FaceImages] - Hundreds of images [Formats: Sun rasterfile]
  96. U Michigan textures [ftp://freebie.engin.umich.edu/pub/misc/textures] [Formats: compressed raw]
  97. U Oulu wood and knots database [http://www.ee.oulu.fi/~olli/Projects/Lumber.Grading.html] - Includes classifications; 1000+ color images [Formats: ppm]
  98. UCID - an Uncompressed Colour Image Database [http://vision.doc.ntu.ac.uk/datasets/UCID/ucid.html] - A benchmark database for image retrieval with predefined ground truth. [Formats: tiff]
  99. UMass Vision Image Archive [http://vis-www.cs.umass.edu/~vislib/] - Large image database with aerial, space, stereo, medical images, and more. [Formats: homebrew]
  100. UNC's 3D image database [ftp://sunsite.unc.edu/pub/academic/computer-science/virtual-reality/3d] - Many images [Formats: GIF]
  101. USF Range Image Data with Segmentation Ground Truth [http://marathon.csee.usf.edu/range/seg-comp/SegComp.html] - 80 image sets [Formats: Sun rasterimage]
  102. University of Oulu Physics-based Face Database [http://www.ee.oulu.fi/research/imag/color/pbfd.html] - Color images of faces under different illuminants and camera calibration conditions, as well as skin spectral reflectance measurements of each person.
  103. Machine Vision and Media Processing Unit [http://www.ee.oulu.fi/mvmp/]
  104. University of Oulu Texture Database [http://www.outex.oulu.fi] - 320 surface textures, each captured under three illuminants, six spatial resolutions, and nine rotation angles. A set of test suites is also provided so that texture segmentation, classification, and retrieval algorithms can be tested in a standard manner. [Formats: bmp, ras, xv]
  105. Machine Vision Group [http://www.ee.oulu.fi/mvg]
  106. Usenix face database [ftp://ftp.uu.net/published/usenix/faces] - Thousands of face images from many different sites [circa 994]
  107. View Sphere Database [http://www-prima.inrialpes.fr/Prima/hall/view_sphere.html] - Images of 8 objects seen from many different viewpoints. The view sphere is sampled using a geodesic with 172 images per sphere. Two sets, for training and testing, are available. [Formats: ppm]
  108. PRIMA, GRAVIR [http://www-prima.inrialpes.fr/Prima/]
  109. Vision-list Imagery Archive [ftp://ftp.vislist.com/IMAGERY/] - Many images, many formats
  110. Wiry Object Recognition Database [http://www.cs.cmu.edu/~owenc/word.htm] - Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes, with ground-truth labelings of edges and regions. [Formats: jpg]
  111. 3D Vision Group [http://www.cs.cmu.edu/~3dvision/]
  112. Yale Face Database [http://cvc.yale.edu/projects/yalefaces/yalefaces.html] - 165 images [15 individuals] with different lighting, expression, and occlusion configurations.
  113. Yale Face Database B [http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html] - 5760 single-light-source images of 10 subjects, each seen under 576 viewing conditions [9 poses x 64 illumination conditions]. [Formats: PGM]
  114. Center for Computational Vision and Control [http://cvc.yale.edu/]
  115. DeepMind QA Corpus
  116. YouTube-8M Dataset [https://research.google.com/youtube8m/] - A large-scale labeled video dataset that consists of 8 million YouTube video IDs and associated labels from a diverse vocabulary of 4800 visual entities.
  117. Open Images dataset
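Several of the classic datasets above, MNIST in particular, are distributed as raw binary files in the simple big-endian IDX format rather than as image files. As a minimal sketch (written for this list, not an official loader), a function to parse an IDX buffer might look like this; the fake 2x3 buffer at the end is constructed purely for illustration:

```python
import struct

def parse_idx(buf: bytes):
    """Parse a buffer in the IDX format used by MNIST (big-endian).

    Header: two zero bytes, a type code, the number of dimensions,
    then one 32-bit big-endian size per dimension, then the raw data.
    """
    zero1, zero2, dtype, ndim = struct.unpack_from(">BBBB", buf, 0)
    if zero1 != 0 or zero2 != 0:
        raise ValueError("not an IDX buffer")
    dims = struct.unpack_from(f">{ndim}I", buf, 4)
    data = buf[4 + 4 * ndim:]
    return dtype, dims, data

# Illustration only: a fake 2x3 unsigned-byte array
# (type code 0x08, 2 dimensions of sizes 2 and 3, then 6 data bytes).
fake = bytes([0, 0, 0x08, 2, 0, 0, 0, 2, 0, 0, 0, 3]) + bytes(range(6))
dtype, dims, data = parse_idx(fake)
```

For the actual MNIST training images, `dtype` would be 0x08 (unsigned byte) and `dims` would be (60000, 28, 28); in practice most people load MNIST through a framework's dataset utilities rather than parsing the files by hand.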

This compilation is not exhaustive; additions and suggestions are welcome. Visit http://www.zhuanzhi.ai and follow the Zhuanzhi (专知) WeChat account for the latest AI resources.
