In this paper we present OSCAR (Ontology-based Semantic Composition Augmented Regularization), a method for injecting task-agnostic knowledge from an ontology or knowledge graph into a neural network during pretraining. We evaluated the impact of including OSCAR when pretraining BERT on Wikipedia articles by measuring fine-tuned performance on two question-answering tasks involving world knowledge and causal reasoning and one requiring domain (healthcare) knowledge. OSCAR improved accuracy by 33.3%, 18.6%, and 4% over BERT pretrained without it, yielding new state-of-the-art results on two of the tasks.
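The abstract frames OSCAR as a regularization applied during pretraining. As a rough illustration of what an ontology-based regularizer could look like, the sketch below adds an auxiliary term derived from knowledge-graph edges to the standard masked-LM loss; the function `pretraining_loss`, the cosine-similarity penalty, and the weight `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def pretraining_loss(mlm_loss: torch.Tensor,
                     concept_embeddings: torch.Tensor,
                     kg_edges: list,
                     lam: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch: masked-LM loss plus an ontology-derived
    regularizer, not the paper's actual objective.

    concept_embeddings: (num_concepts, dim) embeddings tied to ontology nodes
    kg_edges: list of (head_idx, tail_idx) pairs for concepts the graph links
    """
    heads = concept_embeddings[torch.tensor([h for h, _ in kg_edges])]
    tails = concept_embeddings[torch.tensor([t for _, t in kg_edges])]
    # Penalize dissimilarity between embeddings of linked concepts,
    # nudging the encoder toward the knowledge graph's structure.
    kg_reg = (1.0 - F.cosine_similarity(heads, tails, dim=1)).mean()
    return mlm_loss + lam * kg_reg

# Toy usage: a scalar MLM loss, random concept embeddings, two KG edges.
emb = torch.randn(1000, 768, requires_grad=True)
loss = pretraining_loss(torch.tensor(2.3), emb, [(0, 5), (7, 42)])
loss.backward()
```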