Leading artificial-intelligence researchers from Microsoft, CMU, Stanford, and other institutions are working on a more ambitious task: getting machines to read a text the way humans do and then answer questions based on their understanding of it. This kind of reading comprehension is like asking a computer to take the reading-comprehension section of the English test in China's college entrance exam (高考).

Curated Knowledge

Machine Reading Comprehension: a 专知 (Zhuanzhi) curated collection

Getting Started

  1. Research progress on deep learning for machine reading comprehension, by 张俊林
  2. From short sentences to long documents: how computers learn reading comprehension, by Microsoft Research Asia
  3. Reading comprehension based on deep learning, by 冯岩松
  4. An overview of SQuAD
  5. Teaching machines to read, by 张俊
  6. A walkthrough of DeepMind's paper "Teaching Machines to Read and Comprehend"
  7. Deep learning representations of passages and questions in machine reading comprehension

Surveys

  1. Emergent Logical Structure in Vector Representations of Neural Readers
  2. A survey of machine reading comprehension tasks, by 林鸿宇 and 韩先培

Advanced Papers

  1. Teaching Machines to Read and Comprehend
  2. Learning to Ask: Neural Question Generation for Reading Comprehension
  3. Attention-over-Attention Neural Networks for Reading Comprehension
  4. R-NET: Machine Reading Comprehension with Self-Matching Networks
  5. Mnemonic Reader for Machine Comprehension
  6. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
  7. S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
  8. RACE: Large-scale ReAding Comprehension Dataset From Examinations
  9. Adversarial Examples for Evaluating Reading Comprehension Systems
  10. Machine Comprehension Using Match-LSTM and Answer Pointer
  11. Multi-Perspective Context Matching for Machine Comprehension
  12. ReasoNet: Learning to Stop Reading in Machine Comprehension
  13. Learning Recurrent Span Representations for Extractive Question Answering
  14. End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension
  15. Words or Characters? Fine-grained Gating for Reading Comprehension
  16. Reading Wikipedia to Answer Open-Domain Questions
  17. An Analysis of Prerequisite Skills for Reading Comprehension
  18. A Comparative Study of Word Embeddings for Reading Comprehension

Datasets

  1. MCTest
  2. bAbI
  3. WikiQA
  4. SNLI
  5. Children's Book Test
  6. BookTest
  7. CNN / Daily Mail
  8. Who Did What
  9. NewsQA
  10. SQuAD
  11. LAMBADA
  12. MS MARCO
  13. WikiMovies
  14. WikiReading
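
Several of the extractive datasets above, SQuAD most notably, are distributed as (or commonly converted to) the SQuAD JSON layout. As a minimal, hedged illustration of that layout (the field names follow the publicly documented SQuAD v1.1 schema; the local file name is an assumption), a loader might look like:

    import json

    # Assumed local copy of the SQuAD v1.1 training split; adjust the path as needed.
    with open("train-v1.1.json", encoding="utf-8") as f:
        squad = json.load(f)

    # SQuAD v1.1 layout: data -> articles -> paragraphs -> (context, qas).
    for article in squad["data"][:1]:
        for paragraph in article["paragraphs"][:1]:
            context = paragraph["context"]
            for qa in paragraph["qas"][:2]:
                answer = qa["answers"][0]  # answer text plus its character offset in the context
                print(qa["question"])
                print(answer["text"], "(starts at character", str(answer["answer_start"]) + ")")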

Code

  1. CNN/Daily Mail Reading Comprehension Task
  2. TriviaQA
  3. Attentive Reader
  4. DrQA
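
Extractive systems such as those listed above are usually compared with exact match (EM) and token-level F1 against the reference answers. The snippet below is an illustrative re-implementation of that SQuAD-style scoring logic, not the official evaluation script:

    import re
    import string
    from collections import Counter

    def normalize(text):
        """Lowercase, drop punctuation and articles, collapse whitespace."""
        text = "".join(ch for ch in text.lower() if ch not in string.punctuation)
        text = re.sub(r"\b(a|an|the)\b", " ", text)
        return " ".join(text.split())

    def exact_match(prediction, gold):
        return float(normalize(prediction) == normalize(gold))

    def f1_score(prediction, gold):
        pred_tokens = normalize(prediction).split()
        gold_tokens = normalize(gold).split()
        overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    print(exact_match("the answer pointer", "answer pointer"))              # 1.0
    print(round(f1_score("a span of text", "the exact span of text"), 2))   # 0.86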

Domain Experts

  1. Percy Liang
  2. 刘挺
  3. Jason Weston

This is a preliminary version and necessarily limited; suggestions and additions for any errors or omissions are welcome, and the list will be kept up to date. This article is original content from the 专知 content team and may not be reproduced without permission; for reprint requests, email fangquanyi@gmail.com or contact the 专知 assistant on WeChat (Rancho_Fang).

Please follow http://www.zhuanzhi.ai and the 专知 WeChat official account for first-hand AI knowledge.

VIP Content

Overview:

To survey existing tasks and models in machine reading comprehension (MRC), this report reviews: 1) the dataset collection and performance evaluation of several representative simple-reasoning and complex-reasoning MRC tasks; 2) the architecture designs, attention mechanisms, and performance-boosting techniques used to build neural-network-based MRC models; 3) recently proposed transfer-learning approaches for incorporating text-style knowledge from external corpora into the neural networks of MRC models; and 4) recently proposed knowledge-base encoding approaches for incorporating graph-style knowledge from external knowledge bases into the neural networks of MRC models. Finally, drawing on what has been achieved and what is still lacking, the report raises some open problems for further research.
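
As a concrete illustration of the attention mechanisms surveyed in the report, the sketch below computes a simple question-to-context dot-product attention and fuses the result back into the context representation. It is a generic toy example, not code from any particular model; the array shapes and the concatenation-based fusion step are assumptions.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def question_aware_context(context_vecs, question_vecs):
        """context_vecs: (T_c, d) encoded context tokens;
        question_vecs: (T_q, d) encoded question tokens."""
        scores = context_vecs @ question_vecs.T   # (T_c, T_q) similarity of each context/question token pair
        weights = softmax(scores, axis=-1)        # each context token attends over the question
        attended = weights @ question_vecs        # (T_c, d) question summary per context token
        # One common fusion: concatenate the original and attended representations.
        return np.concatenate([context_vecs, attended], axis=-1)

    rng = np.random.default_rng(0)
    fused = question_aware_context(rng.normal(size=(5, 8)), rng.normal(size=(3, 8)))
    print(fused.shape)  # (5, 16)

Readers such as BiDAF or R-Net build on this basic step with the reverse (context-to-question) direction, gating, and learned projections.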

Contents:

Machine reading comprehension (MRC) requires a machine to read a context and answer a set of related questions based on its understanding of that context. As a challenging area of natural language processing (NLP), MRC has attracted wide attention from the artificial-intelligence community, and in recent years many MRC tasks have been established to promote exploration and innovation in the field. These tasks vary greatly in dataset collection and performance evaluation, but in this report they are roughly divided into two categories according to the complexity of the required reasoning process (a small data-structure sketch follows the list below):

  • Simple-reasoning MRC tasks, in which each context is a single passage, such as one fictional story or one newspaper article, so the required reasoning process is relatively simple.
  • Complex-reasoning MRC tasks, in which each context consists of multiple passages, such as several book chapters or web documents, so the required reasoning process is relatively complex.
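
Purely to make this split concrete (the class and field names below are illustrative assumptions, not taken from the report), the two context shapes can be sketched as:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SimpleMRCExample:
        """Simple-reasoning task: the context is a single passage."""
        context: str          # one fictional story or newspaper article
        question: str
        answer: str

    @dataclass
    class ComplexMRCExample:
        """Complex-reasoning task: the context spans multiple passages."""
        contexts: List[str]   # e.g. several book chapters or web documents
        question: str
        answer: str

    example = SimpleMRCExample(
        context="The cat slept on a mat in the kitchen.",
        question="Where did the cat sleep?",
        answer="on a mat in the kitchen",
    )
    print(example.question, "->", example.answer)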

Latest Papers

Machine reading comprehension (MRC) has received considerable attention in natural language processing over the past few years. However, the conventional task design of MRC lacks the explainability beyond the model interpretation, i.e., the internal mechanics of the model cannot be explained in human terms. To this end, this position paper provides a theoretical basis for the design of MRC based on psychology and psychometrics and summarizes it in terms of the requirements for explainable MRC. We conclude that future datasets should (i) evaluate the capability of the model for constructing a coherent and grounded representation to understand context-dependent situations and (ii) ensure substantive validity by improving the question quality and by formulating a white-box task.
