The overwhelming amount of biomedical scientific text calls for the development of effective language models able to tackle a wide range of biomedical natural language processing (NLP) tasks. The currently dominant approaches are domain-specific models, initialized with general-domain textual data and then trained on a variety of scientific corpora. However, it has been observed that for specialized domains in which large corpora exist, training a model from scratch with just in-domain knowledge may yield better results. Moreover, the increasing focus on the compute costs of pre-training has recently led to the design of more efficient architectures, such as ELECTRA. In this paper, we propose a pre-trained domain-specific language model, called ELECTRAMed, suited for the biomedical field. The novel approach inherits the learning framework of the general-domain ELECTRA architecture, as well as its computational advantages. Experiments performed on benchmark datasets for several biomedical NLP tasks support the usefulness of ELECTRAMed, which sets a new state-of-the-art result on the BC5CDR corpus for named entity recognition, and provides the best outcome in 2 of the 5 runs of the 7th BioASQ factoid challenge for the question answering task.
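To make the downstream use of such a pre-trained model concrete, below is a minimal sketch (not the authors' code) of fine-tuning an ELECTRA-style checkpoint for token-level biomedical NER, as in the BC5CDR task, using the Hugging Face Transformers library. The checkpoint identifier and the label count are assumptions for illustration; substitute the actual ELECTRAMed weights and tag set where available.

```python
# Minimal sketch: token classification with an ELECTRA-style checkpoint.
# "electramed-base" is a hypothetical model identifier, used only for illustration.
import torch
from transformers import ElectraTokenizerFast, ElectraForTokenClassification

MODEL_NAME = "electramed-base"  # hypothetical; replace with the real checkpoint

tokenizer = ElectraTokenizerFast.from_pretrained(MODEL_NAME)
# num_labels=3 assumes a simple B/I/O tagging scheme for entity mentions.
model = ElectraForTokenClassification.from_pretrained(MODEL_NAME, num_labels=3)

sentence = "Cisplatin induced nephrotoxicity in the treated cohort."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, seq_len, num_labels)
predictions = logits.argmax(dim=-1)      # predicted tag id for each subword token
```

The same pre-trained encoder can be reused for the other benchmark tasks mentioned above (e.g., question answering) by swapping the task head, which is the standard fine-tuning pattern for ELECTRA-family models.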