A Comprehensive Collection of TensorFlow Deep Learning Models for Different NLP Tasks

March 19 | Zhuanzhi (专知)

[Overview] We would like to recommend a GitHub project, NLP-Models-Tensorflow, a collection of machine learning and TensorFlow deep learning models for natural language processing tasks.


The project is mainly divided into the following parts (a minimal illustrative model sketch follows the list):

  • Text classification
  • Chatbot
  • Neural Machine Translation
  • Embedded (embedding representations)
  • Entity-Tagging (named entity recognition)
  • POS-Tagging (part-of-speech tagging)
  • Dependency-Parser (dependency parsing)
  • Question-Answers (question answering)
  • Supervised Summarization
  • Unsupervised Summarization
  • Stemming
  • Generator
  • Language detection
  • OCR (optical character recognition)
  • Speech to Text
  • Text Similarity
  • Miscellaneous
  • Attention (attention mechanisms)
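To give a concrete sense of what such models look like, here is a minimal TensorFlow text-classification sketch. It is not taken from the repository; the vocabulary size, embedding dimension, and toy data below are assumptions made purely for illustration.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE = 10000   # assumed vocabulary size (illustrative only)
EMBED_DIM = 64       # assumed embedding dimension (illustrative only)
NUM_CLASSES = 2      # e.g. binary sentiment labels

# Embed integer token ids, average them over the sequence, then classify.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(EMBED_DIM, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data: 32 padded sequences of 20 integer token ids each.
x = np.random.randint(0, VOCAB_SIZE, size=(32, 20))
y = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

The repository itself goes well beyond this averaging baseline, with many model variants for each of the tasks listed above.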

GitHub link:

https://github.com/huseinzol05/NLP-Models-Tensorflow

-END-

