Natural language understanding (NLU) applies a series of AI algorithms to parse text into structured, machine-readable intent and slot information, helping internet developers better understand and serve user needs. The 思知 AI chatbot open platform offers semantic understanding services for natural-language text to internet developers.
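The intent-and-slot output described above can be sketched with a toy example. The field names and the matching rule below are purely illustrative, not the platform's actual API:

```python
def parse_utterance(text):
    """Toy rule-based parser: maps a weather question to an intent plus slots.

    The output shape {"intent": ..., "slots": {...}} is a hypothetical
    example of structured, machine-readable NLU output.
    """
    result = {"intent": None, "slots": {}}
    if "weather" in text.lower():
        result["intent"] = "GET_WEATHER"
        # Naive slot filling: take the word after "in" as the location.
        words = text.split()
        if "in" in words:
            result["slots"]["location"] = words[words.index("in") + 1]
    return result

print(parse_utterance("What is the weather in Boston"))
# -> {'intent': 'GET_WEATHER', 'slots': {'location': 'Boston'}}
```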

VIP Content

Title

Don't Parse, Generate! A Sequence to Sequence Architecture for Task-Oriented Semantic Parsing

Type

Natural language semantic parsing

Keywords

Natural language understanding and generation, semantic parsing, intelligent search queries, intelligent voice assistants, machine learning

Abstract

Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant typically rely on a semantic parsing component to understand which actions to execute for an utterance spoken by their users. Traditionally, rule-based or statistical slot-filling systems have been used to parse "simple" queries; that is, queries that contain a single action and can be decomposed into a set of non-overlapping entities. More recently, shift-reduce parsers have been proposed to handle more complex utterances. These methods, while powerful, impose specific limitations on the types of queries that can be parsed. In this work, we propose a unified architecture based on sequence-to-sequence models and pointer-generator networks to handle both simple and complex queries. Unlike other works, our approach does not impose such restrictions on the semantic parse. Furthermore, experiments show that it achieves state-of-the-art performance on three publicly available datasets (ATIS, SNIPS, Facebook TOP), with relative improvements of 3.3% to 7.7% in exact match accuracy over previous systems. Finally, we demonstrate the effectiveness of our approach on two internal datasets.
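The core idea of a pointer-generator decoder can be sketched as follows: at each decoding step, the model either generates an ontology token (e.g., an intent or slot label) from a fixed vocabulary, or copies a word from the source utterance, with a learned gate `p_gen` mixing the two distributions. This is an assumption-based illustration with hand-picked probabilities, not the paper's implementation:

```python
def mixed_distribution(p_gen, vocab_probs, copy_attention, source_tokens):
    """Combine the generation distribution over a fixed vocabulary with the
    copy distribution (attention over source words) into one distribution.

    p_gen: probability of generating from the vocabulary (1 - p_gen: copying).
    """
    combined = {tok: p_gen * p for tok, p in vocab_probs.items()}
    for word, attn in zip(source_tokens, copy_attention):
        combined[word] = combined.get(word, 0.0) + (1 - p_gen) * attn
    return combined

# Toy decoding step for the utterance "weather in boston". In a TOP-style
# parse like "[IN:GET_WEATHER weather in [SL:LOCATION boston ] ]", ontology
# brackets are generated while utterance words are copied.
vocab = {"[IN:GET_WEATHER": 0.7, "[SL:LOCATION": 0.2, "]": 0.1}
attn = [0.1, 0.1, 0.8]          # attention weights over the source words
source = ["weather", "in", "boston"]

dist = mixed_distribution(0.3, vocab, attn, source)
best = max(dist, key=dist.get)  # with a low p_gen, the copy of "boston" wins
```

Because both input distributions sum to one, the mixture does too, so the decoder can rank generated tokens and copied words on the same scale.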

Authors

Subendhu Rongali*, University of Massachusetts Amherst

Luca Soldaini, Amazon Alexa Search

Wael Hamza, Amazon Alexa

Emilio Monti, Amazon Alexa AI


Latest Content

A central question in natural language understanding (NLU) research is whether high performance demonstrates the models' strong reasoning capabilities. We present an extensive series of controlled experiments where pre-trained language models are exposed to data that have undergone specific corruption transformations. The transformations involve removing instances of specific word classes and often lead to non-sensical sentences. Our results show that performance remains high for most GLUE tasks when the models are fine-tuned or tested on corrupted data, suggesting that the models leverage other cues for prediction even in non-sensical contexts. Our proposed data transformations can be used as a diagnostic tool for assessing the extent to which a specific dataset constitutes a proper testbed for evaluating models' language understanding capabilities.

