Top AI researchers and scholars at Microsoft, CMU, Stanford, and elsewhere are working on a more demanding task: having machines read a text the way a human does, and then answer questions based on their understanding of it. This kind of reading comprehension is akin to asking a computer to solve the reading-comprehension questions on China's college-entrance English exam.

Knowledge Collection

Machine Reading Comprehension: a Zhuanzhi (专知) curated collection

Getting Started

  1. Research progress on deep learning for machine reading comprehension — 张俊林
  2. From short sentences to long documents: how computers learn reading comprehension — Microsoft Research Asia
  3. Reading comprehension based on deep learning — 冯岩松
  4. A survey of SQuAD
  5. Teaching machines to read — 张俊
  6. A walkthrough of DeepMind's paper "Teaching Machines to Read and Comprehend"
  7. Deep-learning representations of passages and questions in machine reading comprehension

Surveys

  1. Emergent Logical Structure in Vector Representations of Neural Readers
  2. A survey of machine reading comprehension tasks — 林鸿宇, 韩先培

Advanced Papers

  1. Teaching Machines to Read and Comprehend
  2. Learning to Ask: Neural Question Generation for Reading Comprehension
  3. Attention-over-Attention Neural Networks for Reading Comprehension
  4. R-Net: Machine Reading Comprehension with Self-Matching Networks
  5. Mnemonic Reader for Machine Comprehension
  6. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
  7. S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
  8. RACE: Large-scale ReAding Comprehension Dataset From Examinations
  9. Adversarial Examples for Evaluating Reading Comprehension Systems
  10. Machine Comprehension Using Match-LSTM and Answer Pointer
  11. Multi-Perspective Context Matching for Machine Comprehension
  12. ReasoNet: Learning to Stop Reading in Machine Comprehension
  13. Learning Recurrent Span Representations for Extractive Question Answering
  14. End-to-End Answer Chunk Extraction and Ranking for Reading Comprehension
  15. Words or Characters? Fine-Grained Gating for Reading Comprehension
  16. Reading Wikipedia to Answer Open-Domain Questions
  17. An Analysis of Prerequisite Skills for Reading Comprehension
  18. A Comparative Study of Word Embeddings for Reading Comprehension
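Several of the papers above (e.g. Match-LSTM with Answer Pointer, R-Net) extract the answer as a contiguous span of the passage by predicting a start and an end position. A minimal sketch of that span-selection step, with hand-made logits standing in for a trained encoder's output:

```python
def best_span(start_logits, end_logits, max_len=10):
    """Pick the (start, end) pair maximizing start_logits[i] + end_logits[j]
    subject to i <= j < i + max_len, as in pointer-style extractive readers."""
    n = len(start_logits)
    best, best_score = (0, 0), float("-inf")
    for i in range(n):
        for j in range(i, min(n, i + max_len)):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

# Hypothetical logits over a 6-token passage (a trained model would produce these).
start = [0.1, 2.0, 0.3, 0.2, 0.1, 0.0]
end   = [0.0, 0.1, 0.2, 1.5, 0.1, 0.0]
print(best_span(start, end))  # → (1, 3): the span from token 1 through token 3
```

The `i <= j` constraint and the length cap are what distinguish this decoding step from simply taking two independent argmaxes, which can produce an invalid span where the end precedes the start.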

Datasets

  1. MCTest
  2. bAbI
  3. WikiQA
  4. SNLI
  5. Children's Book Test
  6. BookTest
  7. CNN / Daily Mail
  8. Who Did What
  9. NewsQA
  10. SQuAD
  11. LAMBADA
  12. MS MARCO
  13. WikiMovies
  14. WikiReading
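Most of the extractive datasets above (SQuAD, NewsQA, TriviaQA) share a similar JSON layout: articles contain paragraphs, each paragraph carries a context string plus question–answer pairs, and each answer records its character offset into the context. A minimal sketch of walking that layout, shown on a tiny hand-made record in SQuAD v1.1 style (illustrative, not real dataset content):

```python
import json

# A tiny hand-made record in SQuAD v1.1 layout (illustrative, not real data).
raw = json.loads("""
{"data": [{"title": "Example",
           "paragraphs": [{"context": "SQuAD was released by Stanford in 2016.",
                           "qas": [{"id": "q1",
                                    "question": "Who released SQuAD?",
                                    "answers": [{"text": "Stanford", "answer_start": 22}]}]}]}]}
""")

def iter_examples(squad):
    """Yield (question, context, answer_text, start_char) tuples."""
    for article in squad["data"]:
        for para in article["paragraphs"]:
            ctx = para["context"]
            for qa in para["qas"]:
                for ans in qa["answers"]:
                    yield qa["question"], ctx, ans["text"], ans["answer_start"]

for q, ctx, text, start in iter_examples(raw):
    # Sanity check: the recorded offset really points at the answer text.
    assert ctx[start:start + len(text)] == text
    print(q, "->", text)
```

The offset check is worth keeping in any real loader: answer spans in these datasets are character-indexed, and a mismatch usually signals an encoding or whitespace-normalization bug upstream.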

Code

  1. CNN/Daily Mail Reading Comprehension Task
  2. TriviaQA
  3. Attentive Reader
  4. DrQA

Domain Experts

  1. Percy Liang
  2. 刘挺
  3. Jason Weston

This is a preliminary version with limited coverage; suggestions, corrections, and additions are welcome, and the list will be kept up to date. This article is original content by the Zhuanzhi content team and may not be reproduced without permission; for reprint requests, email fangquanyi@gmail.com or contact the Zhuanzhi assistant on WeChat (Rancho_Fang).

Follow http://www.zhuanzhi.ai and the Zhuanzhi WeChat official account for first-hand AI knowledge.

VIP Content

From the perspective of complementary learning systems theory, this paper proposes a new Bidirectional Cognitive Knowledge Framework (BCKF) for reading comprehension. It aims to simulate two ways the brain answers questions: reverse thinking and inertial thinking. To verify the framework's effectiveness, we design a corresponding Bidirectional Cognitive Thinking Network (BCTN) that encodes the passage, generates a question (answer) given an answer (question), and decouples the bidirectional knowledge. The model's capacity for reverse reasoning helps inertial thinking produce more accurate answers. Clear improvements are observed on the DuReader dataset, confirming our hypothesis that bidirectional knowledge benefits QA tasks. The framework also offers an interesting perspective connecting machine reading comprehension and cognitive science.


Latest Papers

This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning. To help our model better represent and understand abstract concepts in natural language, we carefully design several simple and effective approaches adapted to the backbone model (RoBERTa). Specifically, we formalize the subtasks as multiple-choice question answering and add special tokens around abstract concepts; the final prediction of the question-answering model is then taken as the result for each subtask. Additionally, we employ a number of fine-tuning tricks to improve performance. Experimental results show that our approaches achieve significant improvements over the baseline systems, ranking eighth on subtask 1 and tenth on subtask 2.
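The formalization described above can be illustrated with a small sketch: each candidate answer is substituted into the question's placeholder, the filled-in abstract concept is wrapped in special marker tokens, and each (passage, filled question) pair is then scored by a multiple-choice model. The marker strings and helper below are hypothetical; the paper's actual special tokens and RoBERTa input formatting may differ.

```python
# Hypothetical marker tokens for abstract concepts; the paper's actual
# special tokens and input formatting for RoBERTa may differ.
ABS_L, ABS_R = "<abs>", "</abs>"

def build_choices(passage, question, options, placeholder="@placeholder"):
    """Turn one item into multiple-choice inputs: one (passage,
    question-with-option) pair per candidate answer, with the filled-in
    abstract concept wrapped in marker tokens."""
    inputs = []
    for opt in options:
        filled = question.replace(placeholder, f"{ABS_L} {opt} {ABS_R}")
        inputs.append((passage, filled))
    return inputs

pairs = build_choices(
    "The talks collapsed after months of effort.",
    "The outcome was a complete @placeholder .",
    ["failure", "success", "surprise"],
)
for passage, filled in pairs:
    print(filled)
```

A multiple-choice head would encode each pair jointly, produce one score per candidate, and take the argmax as the predicted answer; the marker tokens give the model an explicit signal of where the abstract concept sits in the input.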
