ICLR 2019 Best Papers Announced

May 6, 2019 · Zhuanzhi (专知)
Overview

ICLR 2019 is being held May 6-9 in New Orleans. Today the ICLR website announced this year's best papers: two papers received the award, one from the University of Montreal and Microsoft Research, and one from MIT.



Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks

Yikang Shen · Shawn Tan · Alessandro Sordoni · Aaron Courville


Abstract: Natural language is hierarchically structured: smaller units (e.g., phrases) are nested within larger units (e.g., clauses). When a larger constituent ends, all of the smaller constituents that are nested within it must also be closed. While the standard LSTM architecture allows different neurons to track information at different time scales, it does not have an explicit bias towards modeling a hierarchy of constituents. This paper proposes to add such inductive bias by ordering the neurons; a vector of master input and forget gates ensures that when a given neuron is updated, all the neurons that follow it in the ordering are also updated. Our novel recurrent architecture, ordered neurons LSTM (ON-LSTM), achieves good performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.

Keywords: Deep Learning, Natural Language Processing, Recurrent Neural Networks, Language Modeling

TL;DR: We introduce a new inductive bias that integrates tree structures in recurrent neural networks.

Paper link:

https://openreview.net/pdf?id=B1l6qiR5F7
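
To make the master-gate idea from the abstract concrete, here is a minimal PyTorch sketch of an ON-LSTM-style cell. This is an illustration under assumptions, not the authors' implementation: cumax is the cumulative-softmax activation described in the paper, the class and variable names are ours, and the paper's trick of computing the master gates at a reduced ("chunked") dimensionality is omitted.

```python
# Minimal ON-LSTM-style cell sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def cumax(x, dim=-1):
    """Cumulative softmax: a monotonically non-decreasing gate in [0, 1]."""
    return torch.cumsum(F.softmax(x, dim=dim), dim=dim)


class ONLSTMCell(nn.Module):
    """Illustrative ordered-neurons LSTM cell; the paper's version also
    downsamples the master gates by a chunk factor, omitted here."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        # Standard LSTM gates (forget, input, output, candidate) plus the two
        # master gates, all computed from [x_t; h_{t-1}].
        self.linear = nn.Linear(input_size + hidden_size, 6 * hidden_size)

    def forward(self, x, state):
        h_prev, c_prev = state
        f, i, o, c_hat, mf, mi = self.linear(
            torch.cat([x, h_prev], dim=-1)).chunk(6, dim=-1)

        f, i, o = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o)
        c_hat = torch.tanh(c_hat)

        # The master forget gate is monotonically increasing along the neuron
        # ordering and the master input gate monotonically decreasing, so
        # neurons at one end of the ordering hold short-lived (lower-level)
        # information and neurons at the other end hold long-term
        # (higher-level) information.
        master_f = cumax(mf)
        master_i = 1.0 - cumax(mi)
        overlap = master_f * master_i

        f_hat = f * overlap + (master_f - overlap)
        i_hat = i * overlap + (master_i - overlap)

        c = f_hat * c_prev + i_hat * c_hat
        h = o * torch.tanh(c)
        return h, (h, c)
```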


The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Jonathan Frankle · Michael Carbin


Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.

Keywords: Neural networks, sparsity, pruning, compression, performance, architecture search

TL;DR: Feedforward neural networks that can have weights pruned after training could have had the same weights pruned before training

Paper link:

https://openreview.net/pdf?id=rJl-b3RcF7
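
The abstract mentions an algorithm for identifying winning tickets; as a concrete illustration, below is a minimal Python sketch of iterative magnitude pruning with rewinding to the original initialization. It is a sketch under assumptions, not the authors' released code: build_model and train are hypothetical placeholders, and the paper's per-layer pruning rates and training schedule are not reproduced.

```python
# Sketch of iterative magnitude pruning with rewinding (illustrative only).
import copy
import torch


def find_winning_ticket(build_model, train, prune_fraction=0.2, rounds=5):
    """Train, prune the smallest-magnitude weights, rewind the survivors to
    their original initialization, and repeat.

    `build_model()` is assumed to return a torch.nn.Module, and
    `train(model, masks)` is assumed to train it while keeping masked
    (pruned) weights at zero.
    """
    model = build_model()
    init_state = copy.deepcopy(model.state_dict())   # theta_0, for rewinding

    # Prune weight matrices only (dim > 1); biases are left untouched.
    masks = {name: torch.ones_like(p)
             for name, p in model.named_parameters() if p.dim() > 1}

    for _ in range(rounds):
        train(model, masks)

        # Remove the lowest-magnitude surviving weights in each layer.
        for name, p in model.named_parameters():
            if name not in masks:
                continue
            surviving = p.data[masks[name].bool()].abs()
            k = int(prune_fraction * surviving.numel())
            if k > 0:
                threshold = surviving.kthvalue(k).values
                masks[name][p.data.abs() <= threshold] = 0.0

        # Rewind the surviving weights to their original initialization.
        model.load_state_dict(init_state)
        with torch.no_grad():
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

    return model, masks
```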


-END-


