Named entity recognition (NER), also known as entity identification, entity chunking, and entity extraction, is a subtask of information extraction that aims to locate named entities mentioned in unstructured text and classify them into predefined categories such as person names, place names, organization names, and other proper nouns.
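As a concrete illustration of what an NER system produces, the BIO (begin/inside/outside) tagging scheme is the most common sequence representation. The sketch below uses a hypothetical sentence with hand-assigned tags to show how per-token tags are grouped back into entity spans:

```python
def extract_entities(tokens, tags):
    """Collect (entity_text, entity_type) spans from a BIO-tagged sequence."""
    entities, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):                # beginning of a new entity
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current:  # continuation of the current entity
            current.append(token)
        else:                                   # "O": token is outside any entity
            if current:
                entities.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:
        entities.append((" ".join(current), current_type))
    return entities

tokens = ["Barack", "Obama", "visited", "Beijing", "University"]
tags   = ["B-PER", "I-PER", "O", "B-ORG", "I-ORG"]
print(extract_entities(tokens, tags))
# → [('Barack Obama', 'PER'), ('Beijing University', 'ORG')]
```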


Named Entity Recognition (NER): A Zhuanzhi Curated Collection

Surveys

  1. Jing Li, Aixin Sun, Jianglei Han, Chenliang Li. A Survey on Deep Learning for Named Entity Recognition.

  2. A Review of Named Entity Recognition (NER) Using Automatic Summarization of Resumes

Models and Algorithms

  1. NCRF++ (LSTM + CRF): Design Challenges and Misconceptions in Neural Sequence Labeling. COLING 2018.

  2. CNN+CRF:

  3. BERT+(LSTM)+CRF:

Getting Started

  1. Applying CRFs in NLP (sequence labeling tasks): a detailed walkthrough of CRF++, a detailed analysis of the CRF layer in Bi-LSTM+CRF, why a CRF layer is added after the Bi-LSTM, and the difference between the optimization objectives of CRF and Bi-LSTM+CRF

  2. A detailed explanation of the CRF in BiLSTM+CRF

  3. The CRF layer in BiLSTM-CRF explained, part 2

  4. The CRF layer in BiLSTM-CRF explained, part 3

  5. CRF vs. LSTM for sequence labeling: which is better?

  6. A comparison of CRF and LSTM

  7. A beginner's guide: a few things about named entity recognition (NER)

  8. Basic but not simple: the difficulties and current state of named entity recognition

  9. An intuitive explanation of the CRF layer in the BiLSTM-CRF named entity recognition model
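Several of the articles above discuss the CRF layer in BiLSTM-CRF; at prediction time that layer reduces to Viterbi search over per-token emission scores and tag-to-tag transition scores. A minimal NumPy sketch with toy scores (not tied to any of the linked implementations):

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best tag sequence under a linear-chain CRF score:
    score(y) = sum_t emissions[t, y_t] + sum_t transitions[y_{t-1}, y_t].
    emissions: (T, K) array; transitions: (K, K) array."""
    T, K = emissions.shape
    dp = emissions[0].copy()                 # best score ending in each tag at step 0
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best path ending in tag i at t-1, then moving to tag j
        scores = dp[:, None] + transitions + emissions[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        dp = scores.max(axis=0)
    # follow back-pointers from the best final tag
    best = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

# Toy example: 2 tags, 2 time steps, heavy penalty on switching tags,
# so the decoder prefers staying on tag 0 despite the step-2 emission.
em = np.array([[2.0, 0.0], [0.0, 1.0]])
tr = np.array([[0.0, -10.0], [-10.0, 0.0]])
print(viterbi_decode(em, tr))  # → [0, 0]
```

In a full BiLSTM-CRF, the emission scores come from the BiLSTM outputs and the transition matrix is a learned parameter; the decoding step is exactly this dynamic program.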

Important Reports

Tutorial

1. (PyTorch) Advanced: Making Dynamic Decisions and the Bi-LSTM CRF - [https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html]

Code

1. Chinese named entity recognition (concrete implementations of several models: HMM, CRF, BiLSTM, BiLSTM+CRF)

  - [https://github.com/luopeixiang/named_entity_recognition]

Domain Experts

1. Huawei Noah's Ark Lab - Hang Li []

2. University of Illinois - Jiawei Han [https://hanj.cs.illinois.edu/]

Named Entity Recognition Tools

  1. Stanford NER
  2. MALLET
  3. HanLP
  4. NLTK
  5. spaCy
  6. Ohio State University Twitter NER

Related Datasets

  1. CCKS2017: open data for the Chinese electronic medical record evaluation. Evaluation task 1:

  2. CCKS2018: an open entity recognition task in the music domain.

Evaluation task:

  - [https://biendata.com/competition/CCKS2018_2/]

  3. NLPCC2018: an open spoken language understanding evaluation for task-oriented dialogue systems.

CoNLL 2003

  - [https://www.clips.uantwerpen.be/conll2003/ner/]
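CoNLL 2003 stores one token per line with space-separated columns (word, POS tag, chunk tag, NER tag), blank lines between sentences, and `-DOCSTART-` lines between documents. A minimal reader for that layout:

```python
def read_conll(lines):
    """Parse CoNLL-2003-style data into a list of (tokens, ner_tags) pairs."""
    sentences, tokens, tags = [], [], []
    for line in lines:
        line = line.strip()
        # Blank lines end a sentence; -DOCSTART- lines separate documents.
        if not line or line.startswith("-DOCSTART-"):
            if tokens:
                sentences.append((tokens, tags))
                tokens, tags = [], []
            continue
        parts = line.split()
        tokens.append(parts[0])   # first column: the word
        tags.append(parts[-1])    # last column: the NER tag
    if tokens:
        sentences.append((tokens, tags))
    return sentences

sample = """-DOCSTART- -X- -X- O

EU NNP B-NP B-ORG
rejects VBZ B-VP O
German JJ B-NP B-MISC
call NN I-NP O
""".splitlines()
print(read_conll(sample))
# → [(['EU', 'rejects', 'German', 'call'], ['B-ORG', 'O', 'B-MISC', 'O'])]
```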

Advanced Papers

By year: 1999, 2005, 2006, 2008–2020

VIP Content

To address the limitation that existing cross-lingual named entity recognition methods rely mainly on source-language data and translated data, this paper proposes making full use of large-scale unlabeled data in the target language to improve transfer performance. Building on semi-supervised learning and reinforcement learning, the authors propose the RIKD model. First, it performs iterative knowledge distillation on unlabeled target-language data, repeatedly obtaining more effective student models. Second, to reduce the noise introduced during distillation by the teacher model's inference errors and by low-quality data, the authors design a reinforcement-learning-based instance selector that dynamically chooses more informative samples for distillation. Experimental results show that RIKD significantly outperforms the previous state-of-the-art models on benchmark and in-house datasets.

https://www.zhuanzhi.ai/paper/18a3b87ee49058589b9acb0098a3ab42
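The iterative distillation step described above trains the student on the teacher's soft output distributions. A generic distillation-loss sketch using the standard temperature-scaled KL divergence (an illustration of the general technique, not the RIKD objective itself):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Soft-label distillation loss: KL(teacher || student) at temperature T,
    averaged over tokens. Shapes: (n_tokens, n_labels)."""
    p = softmax(teacher_logits, T)   # teacher's softened distribution
    q = softmax(student_logits, T)   # student's softened distribution
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

# When the student's logits match the teacher's exactly, the loss is zero.
logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.2, 0.3]])
print(distill_loss(logits, logits))  # → 0.0
```

In a self-training loop of the kind the summary describes, the teacher labels unlabeled target-language text, the student is trained to minimize this loss, and the trained student then becomes the next round's teacher.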


Latest Papers

Building machine learning prediction models for a specific NLP task requires sufficient training data, which can be difficult to obtain for less-resourced languages. Cross-lingual embeddings map word embeddings from a less-resourced language to a resource-rich language so that a prediction model trained on data from the resource-rich language can also be used in the less-resourced language. To produce cross-lingual mappings of recent contextual embeddings, anchor points between the embedding spaces have to be words in the same context. We address this issue with a novel method for creating cross-lingual contextual alignment datasets. Based on that, we propose several cross-lingual mapping methods for ELMo embeddings. The proposed linear mapping methods use existing Vecmap and MUSE alignments on contextual ELMo embeddings. Novel nonlinear ELMoGAN mapping methods are based on GANs and do not assume isomorphic embedding spaces. We evaluate the proposed mapping methods on nine languages, using four downstream tasks: named entity recognition (NER), dependency parsing (DP), terminology alignment, and sentiment analysis. The ELMoGAN methods perform very well on the NER and terminology alignment tasks, with a lower cross-lingual loss for NER compared to the direct training on some languages. In DP and sentiment analysis, linear contextual alignment variants are more successful.
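The linear mapping methods the abstract refers to (Vecmap, MUSE) build on Procrustes-style alignment over anchor pairs: find an orthogonal matrix that maps one embedding space onto the other. The closed-form orthogonal Procrustes solution can be sketched as follows (the sanity check uses a synthetic rotation, not real embeddings):

```python
import numpy as np

def procrustes_mapping(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F, the closed-form alignment
    used by Procrustes-style methods. X, Y: (n_anchors, dim) paired
    embeddings (source word vectors and their translations' vectors)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Sanity check: generate anchor pairs related by a known rotation R
# and verify that the mapping recovers R.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
theta = 0.3
R = np.eye(4)
R[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
W = procrustes_mapping(X, X @ R)
print(np.allclose(W, R))  # → True
```

Restricting W to be orthogonal preserves distances and angles in the mapped space, which is why this works well when the two embedding spaces are near-isomorphic; the nonlinear ELMoGAN methods in the abstract drop exactly that assumption.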
