Top AI researchers and scholars, including groups at Microsoft, CMU, and Stanford, are working on a harder task: getting machines to read text the way humans do and then answer questions based on their understanding of it. This kind of reading comprehension is much like asking a computer to take the reading-comprehension section of China's college-entrance English exam.
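
To make the task concrete, here is a minimal Python sketch of what a SQuAD-style reading-comprehension item and a simplified version of its scoring look like; the passage, question, and gold answer below are written purely for illustration.

```python
# A SQuAD-style reading-comprehension item: the system reads a passage and
# must return a span of that passage answering the question.
# The passage and question below are invented for illustration.
context = (
    "Machine reading comprehension systems are trained on passages paired "
    "with questions. SQuAD, released by Stanford in 2016, contains over "
    "100,000 such question-answer pairs."
)
question = "How many question-answer pairs does SQuAD contain?"
gold = "over 100,000"

example = {
    "context": context,
    "question": question,
    "answers": [{"text": gold, "answer_start": context.find(gold)}],
}

def exact_match(prediction: str, gold_texts: list) -> bool:
    """Simplified SQuAD-style exact match: lowercase and collapse whitespace."""
    norm = lambda s: " ".join(s.lower().split())
    return any(norm(prediction) == norm(g) for g in gold_texts)

# A predicted span is scored against the gold answers (full SQuAD scoring
# also strips punctuation/articles and reports token-level F1).
print(exact_match("Over 100,000", [a["text"] for a in example["answers"]]))  # True
```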

Knowledge Collection

Machine Reading Comprehension: A Zhuanzhi Curated Collection

Introductory Learning

  1. Research progress on deep learning for machine reading comprehension (张俊林)
  2. From short sentences to long documents: how computers learn reading comprehension (Microsoft Research Asia)
  3. Deep-learning-based reading comprehension (冯岩松)
  4. An overview of SQuAD
  5. Teaching machines to read (张俊)
  6. A walkthrough of DeepMind's paper "Teaching Machines to Read and Comprehend"
  7. Deep learning representations of passages and questions in machine reading comprehension

Surveys

  1. Emergent Logical Structure in Vector Representations of Neural Readers
  2. A survey of machine reading comprehension tasks (林鸿宇, 韩先培)

Advanced Papers

  1. Teaching Machines to Read and Comprehend
  2. Learning to Ask: Neural Question Generation for Reading Comprehension
  3. Attention-over-Attention Neural Networks for Reading Comprehension
  4. R-NET: Machine Reading Comprehension with Self-Matching Networks
  5. Mnemonic Reader for Machine Comprehension
  6. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
  7. S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
  8. RACE: Large-scale ReAding Comprehension Dataset From Examinations
  9. Adversarial Examples for Evaluating Reading Comprehension Systems
  10. Machine Comprehension Using Match-LSTM and Answer Pointer
  11. Multi-Perspective Context Matching for Machine Comprehension
  12. ReasoNet: Learning to Stop Reading in Machine Comprehension
  13. Learning Recurrent Span Representations for Extractive Question Answering
  14. End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension
  15. Words or Characters? Fine-Grained Gating for Reading Comprehension
  16. Reading Wikipedia to Answer Open-Domain Questions
  17. An Analysis of Prerequisite Skills for Reading Comprehension
  18. A Comparative Study of Word Embeddings for Reading Comprehension

Datasets

  1. MCTest
  2. bAbI
  3. WikiQA
  4. SNLI
  5. Children's Book Test
  6. BookTest
  7. CNN / Daily Mail
  8. Who Did What
  9. NewsQA
  10. SQuAD
  11. LAMBADA
  12. MS MARCO
  13. WikiMovies
  14. WikiReading

Code

  1. CNN/Daily Mail Reading Comprehension Task
  2. TriviaQA
  3. Attentive Reader
  4. DrQA

Domain Experts

  1. Percy Liang
  2. 刘挺 (Ting Liu)
  3. Jason Weston

This is a preliminary version and is necessarily limited; if you find errors or omissions, suggestions and additions are welcome, and the list will be kept up to date. This article is original content from the Zhuanzhi content team and may not be reproduced without permission. For reprint requests, please email fangquanyi@gmail.com or contact the Zhuanzhi assistant on WeChat (Rancho_Fang).

Please follow http://www.zhuanzhi.ai and the Zhuanzhi WeChat official account for first-hand AI-related knowledge.

VIP Content

Machine reading comprehension (MRC) aims to teach machines to read and understand human language, a long-standing goal of natural language processing (NLP). With the rise of deep neural networks and the development of contextualized language models (CLMs), MRC research has seen two major breakthroughs. As a phenomenon, MRC and CLMs have had a great impact on the NLP community. This paper presents a comprehensive, comparative study of MRC covering: 1) the origin and development of MRC and CLMs, with particular attention to the role of CLMs; 2) the impact of MRC and CLMs on the NLP community; 3) the definition, datasets, and evaluation of MRC; 4) general MRC architectures and technical methods, viewed as a two-stage encoder-decoder solving architecture motivated by the human cognitive process; 5) highlights of previous research, emerging topics, and our empirical analysis, with particular attention to what has worked in different periods of MRC research. Around these topics, we propose a full-view categorization and new taxonomies. Our main observations are: 1) MRC has pushed progress from language processing toward language understanding; 2) the rapid improvement of MRC systems has benefited greatly from the development of CLMs; 3) the theme of MRC is gradually shifting from shallow text matching to cognitive reasoning.

https://www.zhuanzhi.ai/paper/4a9e5f961d514baf95a9ab3cae550262
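
As a rough illustration of the extractive architecture such surveys describe, the sketch below assumes a generic contextualized encoder and shows only the span-extraction output layer that scores every token as a possible answer start or end; the class name, dimensions, and toy inputs are illustrative and not taken from the paper.

```python
# Rough sketch of a standard extractive MRC output layer: a contextualized
# encoder reads "[question; passage]" and a light head scores each token as
# a possible answer-span start or end. Illustrative simplification only.
import torch
import torch.nn as nn

class SpanExtractionHead(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        # One logit per token for "start of answer" and "end of answer".
        self.span_scorer = nn.Linear(hidden_size, 2)

    def forward(self, token_states: torch.Tensor):
        # token_states: (batch, seq_len, hidden_size) from any encoder
        # (BiLSTM readers earlier, BERT-style CLMs later).
        logits = self.span_scorer(token_states)           # (batch, seq_len, 2)
        start_logits, end_logits = logits.unbind(dim=-1)  # each (batch, seq_len)
        return start_logits, end_logits

# Toy usage with random "encoder outputs".
encoder_out = torch.randn(1, 16, 768)
head = SpanExtractionHead(hidden_size=768)
start_logits, end_logits = head(encoder_out)
# The predicted answer is the span (i, j), i <= j, maximizing
# start_logits[i] + end_logits[j]; training minimizes cross-entropy
# against the gold start and end positions.
print(start_logits.shape, end_logits.shape)  # torch.Size([1, 16]) twice
```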


Latest Content

When making an online purchase, it is important for the customer to read the product reviews carefully and make a decision based on them. However, reviews can be lengthy and may contain repeated or irrelevant information that does not help in decision making. In this paper, we introduce MRCBert, a novel unsupervised method to generate summaries from product reviews. We leverage a Machine Reading Comprehension (MRC) approach to extract relevant opinions and generate both rating-wise and aspect-wise summaries from reviews. Through MRCBert we show that we can obtain reasonable performance using existing models and transfer learning, which can be useful for learning under limited or low-resource scenarios. We demonstrate our results on reviews of a product from the Electronics category in the Amazon Reviews dataset. Our approach is unsupervised as it does not require any domain-specific dataset, such as a product review dataset, for training or fine-tuning. Instead, we use only the SQuAD v1.1 dataset to fine-tune BERT for the MRC task. Since MRCBert does not require a task-specific dataset, it can be easily adapted and used in other domains.
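
The core idea lends itself to a short sketch: treat each review as the MRC passage and pose aspect-level questions to a QA model fine-tuned only on SQuAD. The model checkpoint and the aspect questions below are assumptions chosen for illustration, not the authors' exact pipeline.

```python
# Illustrative sketch of the idea described above: a product review is the
# passage, and aspect-wise questions are answered by a QA model fine-tuned
# only on SQuAD v1.1 (no review-specific training data).
# Model name and questions are assumptions, not the authors' exact setup.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

review = (
    "The battery easily lasts two days, but the speaker is tinny and the "
    "camera struggles in low light."
)
aspect_questions = {
    "battery": "What does the reviewer say about the battery?",
    "speaker": "What does the reviewer say about the speaker?",
    "camera": "What does the reviewer say about the camera?",
}

for aspect, question in aspect_questions.items():
    result = qa(question=question, context=review)
    # Extracted spans like these can then be aggregated into
    # aspect-wise or rating-wise summaries.
    print(f"{aspect}: {result['answer']} (score={result['score']:.2f})")
```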
