RNN (recurrent neural network): a type of deep learning model.


[Overview] In this tutorial, Garrett Hoffman introduces deep learning methods for natural language processing tasks, using live examples in Python and TensorFlow on StockTwits data. The methods covered include word2vec, RNNs and their variants (LSTM, GRU), and convolutional neural networks. By working through the tutorial, you will learn how these models work, with a brief review of the mathematics behind each method.

Tutorial outline:

Text representation learning with word2vec embeddings

  • CBOW vs. skip-gram
  • How to train custom word embeddings (see the sketch after this list)
  • How to use pre-trained word embeddings
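To make the "train custom word embeddings" step concrete, here is a minimal sketch using gensim's Word2Vec on a toy corpus. The corpus, hyperparameters, and the choice of gensim (rather than the tutorial's own TensorFlow code) are assumptions for illustration only.

```python
# Minimal sketch: training custom skip-gram word2vec embeddings with gensim.
# Illustrative assumption, not the tutorial's TensorFlow implementation;
# the toy corpus and hyperparameters are placeholders.
from gensim.models import Word2Vec

# Toy corpus: each document is a list of tokens (e.g., tokenized StockTwits messages).
corpus = [
    ["bullish", "on", "aapl", "earnings"],
    ["bearish", "sentiment", "around", "tsla", "today"],
    ["aapl", "earnings", "beat", "expectations"],
]

# sg=1 selects skip-gram; sg=0 would select CBOW.
model = Word2Vec(
    sentences=corpus,
    vector_size=50,   # embedding dimension
    window=3,         # context window size
    min_count=1,      # keep every token in this tiny corpus
    sg=1,             # skip-gram
    epochs=10,
)

# Look up the learned vector for a token and its nearest neighbours.
vector = model.wv["aapl"]
print(vector.shape)                           # (50,)
print(model.wv.most_similar("aapl", topn=3))
```

Pre-trained embeddings (the third bullet) would instead be loaded from disk and frozen or fine-tuned inside the downstream model.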

Vanilla RNNs

  • Why RNN variants perform better than vanilla RNNs
  • RNN variants (LSTM, GRU); see the sketch after this list
  • Why these models improve accuracy
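As a rough illustration of the RNN variants discussed here, below is a minimal tf.keras sketch of an LSTM sentiment classifier. The vocabulary size, layer sizes, and the bullish/bearish framing are assumptions, not the tutorial's exact architecture.

```python
# Minimal sketch: an LSTM classifier over padded token-id sequences.
# All sizes are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size
MAX_LEN = 40         # assumed message length after padding/truncation

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),                 # padded sequences of token ids
    tf.keras.layers.Embedding(VOCAB_SIZE, 64),
    # Swap in tf.keras.layers.GRU(64) here to try the GRU variant instead.
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g., bullish vs. bearish
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Training would then look like:
# model.fit(padded_token_ids, labels, epochs=5, validation_split=0.1)
```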

Convolutional neural networks (CNNs)

  • Why CNNs are used for language models
  • Advantages of CNNs over RNNs (see the sketch after this list)
  • How to use RNNs to learn a generative model for text synthesis, and applications of this approach
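For intuition about why convolutions work on text (each filter acts as a learned n-gram detector sliding over token windows), here is a minimal tf.keras sketch of a 1D CNN classifier. All hyperparameters are illustrative assumptions rather than the tutorial's values.

```python
# Minimal sketch: a 1D convolutional text classifier.
# Sizes are illustrative assumptions.
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size
MAX_LEN = 40         # assumed padded sequence length
EMBED_DIM = 64

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # Each filter slides over 5-token windows, acting like a learned n-gram detector.
    tf.keras.layers.Conv1D(filters=128, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),             # keep the strongest response per filter
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

Unlike an RNN, which processes tokens sequentially, all convolution windows are computed in parallel, which is one practical advantage of CNNs over RNNs for text.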

The tutorial combines theory with practice: to reinforce the theoretical material, the author shares code implementing the models at https://github.com/GarrettHoffman/AI_Conf_2019_DL_4_NLP


Latest paper

Deep recurrent neural networks (RNNs) are increasingly used in decision-making with temporal sequences. However, understanding how RNN models produce final predictions remains a major challenge. Existing work on interpreting RNN models for sequence prediction often focuses on explaining predictions for individual data instances (e.g., patients or students). Because state-of-the-art predictive models are built with millions of parameters optimized over millions of instances, explaining predictions for single data instances can easily miss the bigger picture. In addition, many high-performing RNN models use multi-hot encoding to represent the presence or absence of features, where the interpretability of feature-value attribution is missing. We present ViSFA, an interactive system that visually summarizes feature attribution over time for different feature values. ViSFA scales to large data such as the MIMIC dataset, which contains the electronic health records of 1.2 million high-dimensional temporal events. We demonstrate that ViSFA can help us reason about RNN predictions and uncover insights from data by distilling complex attribution into compact, easy-to-interpret visualizations.
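To clarify the multi-hot encoding the abstract refers to, the sketch below builds the per-time-step 0/1 feature vectors that such an RNN would consume. The feature vocabulary and events are hypothetical, not drawn from MIMIC or from ViSFA's code.

```python
# Minimal sketch of multi-hot encoding: each time step of a record becomes a
# 0/1 vector marking which features (e.g., diagnosis or medication codes) are
# present. Feature names and events below are hypothetical placeholders.
import numpy as np

features = ["code_A", "code_B", "code_C", "code_D"]   # assumed feature vocabulary
index = {name: i for i, name in enumerate(features)}

# One instance's events per time step (hypothetical).
events_per_step = [
    ["code_A"],
    ["code_A", "code_C"],
    ["code_D"],
]

# Shape: (num_time_steps, num_features); this sequence of multi-hot vectors
# is what an RNN would take as input.
encoded = np.zeros((len(events_per_step), len(features)), dtype=np.float32)
for t, events in enumerate(events_per_step):
    for name in events:
        encoded[t, index[name]] = 1.0

print(encoded)
```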
