Ordered Neurons: Integrating Tree Structures into Recurrent Neural Networks
Yikang Shen · Shawn Tan · Alessandro Sordoni · Aaron Courville
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Jonathan Frankle · Michael Carbin
ICML stands for the International Conference on Machine Learning. This year's 37th ICML was originally scheduled to take place July 12-18, 2020, in Vienna, Austria.
Sparse Sinkhorn Attention https://arxiv.org/abs/2002.11296
Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures https://arxiv.org/abs/2001.08370
GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values https://arxiv.org/abs/2001.11113
Deep k-NN for Noisy Labels https://arxiv.org/abs/2004.12289
Likelihood-free MCMC with Amortized Approximate Ratio Estimators https://arxiv.org/abs/1903.04057
Revisiting Spatial Invariance with Low-Rank Local Connectivity https://arxiv.org/abs/2002.02959
Recurrent neural network (RNN) models are widely used for processing sequential data governed by a latent tree structure. Previous work shows that RNN models (especially Long Short-Term Memory (LSTM) based models) can learn to exploit the underlying tree structure; however, their performance consistently lags behind that of tree-based models. This work proposes a new inductive bias, Ordered Neurons, which enforces an order of update frequencies between hidden state neurons. We show that ordered neurons can explicitly integrate the latent tree structure into recurrent models. To this end, we propose a new RNN unit, ON-LSTM, which achieves strong performance on four different tasks: language modeling, unsupervised parsing, targeted syntactic evaluation, and logical inference.
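The abstract does not spell out the cell equations, but the core mechanism in the ON-LSTM paper is a pair of "master" gates computed with a cumulative softmax (cumax), so that high-ranked neurons are updated less frequently than low-ranked ones. Below is a minimal PyTorch sketch of one cell step under that reading; the combined linear layer `W` and its six-way split are hypothetical names chosen for illustration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def cumax(logits, dim=-1):
    # "Cumulative softmax": a monotonically non-decreasing gate in [0, 1].
    return torch.cumsum(F.softmax(logits, dim=dim), dim=dim)

def on_lstm_cell(x, h_prev, c_prev, W):
    # W: hypothetical nn.Linear(input_size + hidden_size, 6 * hidden_size).
    z = torch.cat([x, h_prev], dim=-1)
    f, i, o, c_hat, mf_logits, mi_logits = W(z).chunk(6, dim=-1)
    f, i, o = torch.sigmoid(f), torch.sigmoid(i), torch.sigmoid(o)
    c_hat = torch.tanh(c_hat)

    master_forget = cumax(mf_logits)        # high-index (high-level) neurons decay slowly
    master_input = 1.0 - cumax(mi_logits)   # low-index (low-level) neurons update often

    omega = master_forget * master_input    # overlap region where standard gates act
    f_hat = f * omega + (master_forget - omega)
    i_hat = i * omega + (master_input - omega)

    c = f_hat * c_prev + i_hat * c_hat
    h = o * torch.tanh(c)
    return h, c

# Example usage with assumed sizes:
# W = torch.nn.Linear(64 + 128, 6 * 128)
# h, c = on_lstm_cell(torch.randn(1, 64), torch.zeros(1, 128), torch.zeros(1, 128), W)
```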
The attention mechanism has traditionally been used as an ancillary means to help RNNs or CNNs. However, the Transformer (Vaswani et al., 2017) recently achieved state-of-the-art performance in machine translation, with a dramatic reduction in training time, by relying solely on attention. Motivated by the Transformer, the Directional Self-Attention Network (Shen et al., 2017), a fully attention-based sentence encoder, was proposed. It showed good performance on various datasets by using forward and backward directional information within a sentence. However, that work did not consider the distance between words, an important feature for learning local dependencies that help capture the context of the input text. We propose the Distance-based Self-Attention Network, which accounts for word distance by using a simple distance mask, modeling local dependencies without losing the ability to model global dependencies that is inherent to attention. Our model performs well on NLI data and sets a new state-of-the-art result on SNLI. Additionally, we show that our model is particularly strong on long sentences or documents.
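As a rough illustration of the distance-mask idea, the sketch below adds a penalty proportional to token distance to the attention logits before the softmax, so nearby tokens are favoured while distant tokens remain reachable. The penalty form -alpha * |i - j| and the hyperparameter `alpha` are assumptions for illustration, not necessarily the exact mask used in the paper.

```python
import torch
import torch.nn.functional as F

def distance_masked_attention(Q, K, V, alpha=1.0):
    # Q, K, V: (seq_len, d_k) tensors for a single head.
    n, d_k = Q.shape
    logits = Q @ K.T / d_k ** 0.5                              # scaled dot-product scores
    pos = torch.arange(n, dtype=Q.dtype)
    dist_mask = -alpha * (pos[:, None] - pos[None, :]).abs()   # penalize distant token pairs
    weights = F.softmax(logits + dist_mask, dim=-1)            # local bias, global reach kept
    return weights @ V

# Example usage with assumed sizes:
# Q = K = V = torch.randn(10, 32)
# out = distance_masked_attention(Q, K, V, alpha=0.5)
```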