
On February 26–27 (local time), the fifth ScaledML conference, ScaledML 2020, hosted by Stanford University and Matroid, was held. Leading machine learning systems experts from Microsoft, Google, Facebook, Berkeley, Stanford, and elsewhere gathered to give talks on popular computing frameworks such as TensorFlow and PyTorch; the program is well worth attention.

Creators of TensorFlow, Kubernetes, Apache Spark, Tesla Autopilot, Keras, Horovod, Allen AI, Apache Arrow, MLPerf, OpenAI, Matroid, and others led discussions on running and scaling machine learning algorithms and on system design across a variety of computing platforms, including GPUs, CPUs, FPGAs, TPUs, and the emerging AI-chip industry.

Website:

http://scaledml.org/2020/

The conference aims to bring together researchers who run machine learning algorithms on diverse computing platforms so they can learn from one another, and to encourage algorithm designers to help each other scale, port, and exchange ideas across platforms.


Latest Content

In this work we explore recurrent representations of leaky integrate-and-fire neurons operating at a timescale equal to their absolute refractory period. Our coarse-timescale approximation is obtained using a probability distribution function for spike arrivals that is homogeneously distributed over this time interval. This leads to a discrete representation that exhibits the same dynamics as the continuous model, enabling efficient large-scale simulations and backpropagation through the recurrent implementation. We use this approach to explore the training of deep spiking neural networks, including convolutional, all-to-all, and maxpool layers, directly in PyTorch. We found that the recurrent model achieves high classification accuracy using spike trains only four steps long during training. We also observed good transfer back to continuous implementations of leaky integrate-and-fire neurons. Finally, we applied this approach to some standard control problems as a first step toward exploring reinforcement learning on neuromorphic chips.
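To make the idea concrete, the discrete representation described above amounts to stepping a leaky integrate-and-fire (LIF) layer once per refractory period: leak the membrane potential, integrate weighted incoming spikes, emit a spike on threshold crossing, and reset. The sketch below is an illustrative NumPy assumption, not the paper's actual implementation; the names (`tau`, `v_th`, `W`) and the hard reset rule are hypothetical choices for demonstration.

```python
import numpy as np

def lif_step(v, spikes_in, W, tau=2.0, v_th=1.0):
    """One coarse time step of a discrete LIF layer (illustrative sketch).

    v         -- membrane potentials, shape (n_out,)
    spikes_in -- binary input spike vector, shape (n_in,)
    W         -- input weights, shape (n_in, n_out)
    """
    v = v * np.exp(-1.0 / tau)                 # membrane leak over one step
    v = v + spikes_in @ W                      # integrate weighted input spikes
    spikes_out = (v >= v_th).astype(v.dtype)   # threshold crossing emits a spike
    v = v * (1.0 - spikes_out)                 # hard reset: fired neurons sit out this step
    return v, spikes_out

# Run a short spike train through one layer (four steps, as in the abstract).
rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(8, 4))
v = np.zeros(4)
for t in range(4):
    spikes_in = (rng.random(8) < 0.5).astype(float)
    v, out = lif_step(v, spikes_in, W)
```

In a training setting the thresholding step is the non-differentiable part; frameworks such as PyTorch typically handle it with a surrogate gradient so that backpropagation can flow through the recurrence, which is what makes the discrete form trainable end to end.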

