A training set, in the AI field, usually refers to the data used to train a machine learning model; the data may be labeled or unlabeled.
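
As a minimal illustration of the labeled/unlabeled distinction (the sample values below are invented for demonstration):

```python
# A minimal sketch: a training set may carry labels (supervised learning)
# or consist of raw samples only (unsupervised learning).
labeled_training_set = [
    ([0.1, 0.9, 0.3], "cat"),  # (feature vector, label) pairs
    ([0.7, 0.2, 0.5], "dog"),
]

unlabeled_training_set = [
    [0.4, 0.4, 0.8],           # feature vectors only, no labels
    [0.9, 0.1, 0.6],
]
```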


Title: ImageNet Classification with Deep Convolutional Neural Networks

Abstract:

We trained a large, deep convolutional neural network to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 39.7% and 18.9%, considerably better than the previous state-of-the-art results. The neural network, which has 60 million parameters and 500,000 neurons, consists of five convolutional layers (some of which are followed by max-pooling layers), two globally connected layers, and a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional networks. To reduce overfitting in the globally connected layers, we employed a new regularization method that proved to be very effective.
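
For reference, the architecture the abstract describes maps onto code roughly as follows. This is a sketch of an AlexNet-style network in PyTorch, not the authors' original implementation: the kernel sizes, channel counts, and input resolution are taken from the published paper rather than the abstract, and dropout stands in for the "new regularization method" it mentions.

```python
import torch
import torch.nn as nn

# Sketch of an AlexNet-style network as described in the abstract: five
# convolutional layers, some followed by max-pooling, non-saturating ReLU
# units throughout, two large globally connected layers, a 1000-way softmax
# output, and dropout as the regularizer. Expects 227x227 RGB input.
class AlexNetSketch(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4),    # conv1 -> 55x55
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),         # -> 27x27
            nn.Conv2d(96, 256, kernel_size=5, padding=2),  # conv2
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),         # -> 13x13
            nn.Conv2d(256, 384, kernel_size=3, padding=1), # conv3
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), # conv4
            nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), # conv5
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),         # -> 6x6
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),                  # stand-in for the regularizer
            nn.Linear(256 * 6 * 6, 4096),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),       # logits for 1000-way softmax
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.flatten(self.features(x), 1))
```

Passing `torch.randn(1, 3, 227, 227)` through the module produces a `(1, 1000)` logit tensor; a softmax over the last dimension gives the 1000-way class distribution.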

Author:

Ilya Sutskever is a co-founder and the chief scientist of OpenAI. He was previously a postdoctoral researcher at Stanford University, and his research focuses on machine learning and neural networks.


Latest Content

Compared with conventional hand-crafted approaches, deep learning based methods have achieved tremendous performance improvements by training elaborately designed networks on large-scale training sets. However, do we really need a large-scale training set for salient object detection (SOD)? In this paper, we provide a deeper insight into the interrelationship between SOD performance and training sets. To alleviate the conventional demand for large-scale training data, we provide a feasible way to construct a novel small-scale training set that contains only 4K images. Moreover, we propose a novel bi-stream network to take full advantage of this small training set: it consists of two feature backbones with different structures and achieves complementary semantic saliency fusion via the proposed gate control unit. To the best of our knowledge, this is the first attempt to match state-of-the-art models trained on large-scale training sets while using only a small-scale training set; our method nevertheless achieves leading state-of-the-art performance on five benchmark datasets.
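
To make the bi-stream design concrete, below is a hedged PyTorch sketch, assuming both backbones emit feature maps of identical shape. The gate control unit shown here, a learned per-pixel sigmoid that convexly mixes the two streams, is an illustrative guess at the fusion mechanism; the paper's actual unit may differ.

```python
import torch
import torch.nn as nn

# Hedged sketch of a bi-stream network: two structurally different backbones
# whose features are fused by a gate control unit. Both backbones are assumed
# to emit feature maps of the same (N, C, H, W) shape.
class GateControlUnit(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict a per-pixel gate in [0, 1] from the concatenated streams.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([feat_a, feat_b], dim=1))
        return g * feat_a + (1 - g) * feat_b   # convex, complementary fusion


class BiStreamSketch(nn.Module):
    def __init__(self, backbone_a: nn.Module, backbone_b: nn.Module, channels: int):
        super().__init__()
        self.backbone_a = backbone_a           # e.g. a VGG-style stream
        self.backbone_b = backbone_b           # e.g. a ResNet-style stream
        self.fuse = GateControlUnit(channels)
        self.head = nn.Conv2d(channels, 1, kernel_size=1)  # saliency logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.fuse(self.backbone_a(x), self.backbone_b(x)))
```

The convex mix `g * feat_a + (1 - g) * feat_b` lets the gate route each spatial location to whichever stream is more informative there, which is one plausible reading of "complementary semantic saliency fusion".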

