Using different sources of information to support the automated extraction of relations between biomedical concepts contributes to the development of our understanding of biological systems. The primary comprehensive source of these relations is the biomedical literature. Several relation extraction approaches have been proposed to identify relations between concepts in biomedical literature, notably approaches based on neural network algorithms. The use of multichannel architectures composed of multiple data representations, as in deep neural networks, is leading to state-of-the-art results. The right combination of data representations can eventually lead to even higher evaluation scores in relation extraction tasks. Here, biomedical ontologies play a fundamental role by providing semantic and ancestry information about an entity. The incorporation of biomedical ontologies has already been shown to improve previous state-of-the-art results.
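
As a rough illustration of the kind of multichannel architecture described above, the sketch below combines a word-sequence channel with an ontology-ancestor channel in PyTorch. The class name, layer sizes, and the mean-pooling of ancestor embeddings are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiChannelRE(nn.Module):
    """Illustrative two-channel relation classifier: one channel encodes the
    sentence tokens, the other encodes ontology ancestors of the two entities."""

    def __init__(self, vocab_size, onto_size, emb_dim=100, hidden=128, n_rel=2):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.onto_emb = nn.Embedding(onto_size, emb_dim, padding_idx=0)
        self.sent_enc = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden + emb_dim, n_rel)

    def forward(self, token_ids, ancestor_ids):
        # Channel 1: BiLSTM over the sentence, max-pooled over time.
        h, _ = self.sent_enc(self.word_emb(token_ids))       # (B, T, 2*hidden)
        sent_vec = h.max(dim=1).values                        # (B, 2*hidden)
        # Channel 2: mean of the embeddings of the entities' ontology ancestors.
        onto_vec = self.onto_emb(ancestor_ids).mean(dim=1)    # (B, emb_dim)
        return self.classifier(torch.cat([sent_vec, onto_vec], dim=-1))
```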

Related content

Neural Networks is the archival journal of the world's three oldest neural modelling societies: the International Neural Network Society (INNS), the European Neural Network Society (ENNS), and the Japanese Neural Network Society (JNNS). Neural Networks provides a forum for developing and nurturing an international community of scholars and practitioners interested in all aspects of neural networks and related approaches to computational intelligence. Neural Networks welcomes high-quality submissions that contribute to the full range of neural network research, from behavioural and brain modelling and learning algorithms, through mathematical and computational analyses, to systems engineering and technological applications that make substantial use of neural network concepts and techniques. This unique and broad scope promotes the exchange of ideas between biological and technological research and helps foster an interdisciplinary community interested in biologically inspired computational intelligence. Accordingly, the Neural Networks editorial board represents expertise in fields including psychology, neurobiology, computer science, engineering, mathematics, and physics. The journal publishes articles, letters, and reviews, as well as letters to the editor, editorials, current events, software surveys, and patent information. Articles are published in one of five sections: cognitive science, neuroscience, learning systems, mathematical and computational analysis, or engineering and applications. Official website: http://dblp.uni-trier.de/db/journals/nn/

In recent years, neural networks have shown impressive performance gains on long-standing AI problems, and in particular on answering queries from natural language text. These advances raise the question of whether they can be extended to a point where we can relax the fundamental assumption of database management, namely, that our data is represented as fields of a pre-defined schema. This paper presents a first step in answering that question. We describe NeuralDB, a database system with no pre-defined schema, in which updates and queries are given in natural language. We develop query processing techniques that build on the primitives offered by state-of-the-art natural language processing methods. We begin by demonstrating that, at the core, recent NLP transformers, powered by pre-trained language models, can answer select-project-join queries if they are given the exact set of relevant facts. However, they cannot scale to non-trivial databases and cannot perform aggregation queries. Based on these findings, we describe a NeuralDB architecture that runs multiple Neural SPJ operators in parallel, each with a set of database sentences that can produce one of the answers to the query. The results of these operators are fed to an aggregation operator if needed. We describe an algorithm that learns how to create the appropriate sets of facts to be fed into each of the Neural SPJ operators. Importantly, this algorithm can be trained by the Neural SPJ operator itself. We experimentally validate the accuracy of NeuralDB and its components, showing that we can answer queries over thousands of sentences with very high accuracy.
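
A minimal sketch of the control flow this abstract describes: several Neural SPJ operators run over candidate support sets, abstentions are dropped, and an optional aggregation step combines the partial answers. The `neural_spj` placeholder and the interfaces are hypothetical; the real operator is a trained seq2seq model.

```python
def neural_spj(query, facts):
    """Placeholder for a trained Neural SPJ operator: given a query and a small
    set of supporting sentences, it produces one answer, or None to abstain."""
    raise NotImplementedError  # in practice, a pre-trained seq2seq transformer

def answer_query(query, support_sets, aggregate=None):
    """Run one Neural SPJ operator per candidate support set (conceptually in
    parallel), then optionally aggregate the partial results."""
    partial = [neural_spj(query, facts) for facts in support_sets]
    partial = [p for p in partial if p is not None]   # drop abstentions
    return aggregate(partial) if aggregate is not None else partial

def count_distinct(answers):
    """Example aggregation operator for a COUNT-style query."""
    return len(set(answers))
```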

Understanding natural language involves recognizing how multiple event mentions structurally and temporally interact with each other. In this process, one can induce event complexes that organize multi-granular events with temporal order and membership relations interweaving among them. Due to the lack of jointly labeled data for these relational phenomena and the restriction on the structures they articulate, we propose a joint constrained learning framework for modeling event-event relations. Specifically, the framework enforces logical constraints within and across multiple temporal and subevent relations by converting these constraints into differentiable learning objectives. We show that our joint constrained learning approach effectively compensates for the lack of jointly labeled data, and outperforms SOTA methods on benchmarks for both temporal relation extraction and event hierarchy construction, replacing a commonly used but more expensive global inference process. We also present a promising case study showing the effectiveness of our approach in inducing event complexes on an external corpus.
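
To make the idea of turning logical constraints into differentiable objectives concrete, here is a generic sketch (not the paper's exact formulation) of a transitivity-style constraint expressed as a soft penalty via the product t-norm.

```python
import torch

def transitivity_loss(p_ab, p_bc, p_ac, eps=1e-8):
    """Soft logical constraint 'r(a,b) AND r(b,c) implies r(a,c)' turned into a
    differentiable penalty via the product t-norm: the implication is violated
    when log P(a,b) + log P(b,c) exceeds log P(a,c)."""
    lhs = torch.log(p_ab + eps) + torch.log(p_bc + eps)
    rhs = torch.log(p_ac + eps)
    return torch.clamp(lhs - rhs, min=0.0).mean()

# Example: probabilities predicted by a relation classifier for three event pairs.
p_ab = torch.tensor([0.9]); p_bc = torch.tensor([0.8]); p_ac = torch.tensor([0.2])
loss = transitivity_loss(p_ab, p_bc, p_ac)   # positive: the constraint is violated
```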

Floods are one of the major climate-related disasters, leading to substantial economic loss and social safety issues. However, confidence in predicting changes in fluvial floods remains low due to limited evidence and the complex causes of regional climate change. Recent developments in machine learning techniques have the potential to improve traditional hydrological models by using monitoring data. Although Recurrent Neural Networks (RNN) perform remarkably well with multivariate time series data, these models are blind to the underlying mechanisms represented in a process-based model for flood prediction. While both process-based models and deep learning networks have their strengths, understanding the fundamental mechanisms intrinsic to geo-spatiotemporal information is crucial to improving the prediction accuracy of flood occurrence. This paper demonstrates a neural network architecture (HydroDeep) that couples a process-based hydro-ecological model with a combination of a Deep Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network to build a hybrid baseline model. HydroDeep outperforms both independent networks by 4.8% and 31.8%, respectively, in Nash-Sutcliffe efficiency. A trained HydroDeep can transfer its knowledge and learn the geo-spatiotemporal features of a new region in minimal training iterations.
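
A minimal sketch of the CNN-LSTM combination the abstract refers to (the coupling to the process-based model and the transfer-learning setup are omitted); layer sizes and the single-step regression head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    """Illustrative CNN-LSTM baseline for multivariate hydrological time series:
    a 1-D convolution extracts local features, an LSTM models longer-range
    temporal dependencies, and a linear head predicts runoff."""

    def __init__(self, n_features, conv_channels=32, hidden=64):
        super().__init__()
        self.conv = nn.Conv1d(n_features, conv_channels, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                                  # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2)))       # (batch, channels, time)
        out, _ = self.lstm(z.transpose(1, 2))              # (batch, time, hidden)
        return self.head(out[:, -1])                       # prediction at the final step
```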

In this paper, we present a novel method named RECON that automatically identifies relations in a sentence (sentential relation extraction) and aligns them to a knowledge graph (KG). RECON uses a graph neural network to learn representations of both the sentence and facts stored in a KG, improving the overall extraction quality. These facts, including entity attributes (label, alias, description, instance-of) and factual triples, have not been collectively used in state-of-the-art methods. We evaluate the effect of various forms of representing the KG context on the performance of RECON. The empirical evaluation on two standard relation extraction datasets shows that RECON significantly outperforms all state-of-the-art methods on the NYT Freebase and Wikidata datasets. RECON reports an F1 score of 87.23 (vs. an 82.29 baseline) on the Wikidata dataset, whereas on NYT Freebase the reported values are 87.5 (P@10) and 74.1 (P@30), compared with previous baseline scores of 81.3 (P@10) and 63.1 (P@30).
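
A hedged sketch of the general idea of aggregating KG context (attribute and triple embeddings) into entity representations before relation classification. This is a generic one-round message-passing layer, not RECON's actual graph neural network; all names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class KGContextAggregator(nn.Module):
    """Illustrative single round of message passing: each entity aggregates the
    embeddings of its KG-context nodes (label, alias, description, instance-of,
    triple neighbours) before being combined with a sentence encoding."""

    def __init__(self, dim=128, n_rel=10):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)
        self.classifier = nn.Linear(3 * dim, n_rel)

    def forward(self, sent_vec, head_vec, tail_vec, head_ctx, tail_ctx):
        # sent_vec, head_vec, tail_vec: (dim,); head_ctx, tail_ctx: (n_ctx, dim).
        head = self.update(torch.relu(self.msg(head_ctx)).mean(0, keepdim=True),
                           head_vec.unsqueeze(0)).squeeze(0)
        tail = self.update(torch.relu(self.msg(tail_ctx)).mean(0, keepdim=True),
                           tail_vec.unsqueeze(0)).squeeze(0)
        return self.classifier(torch.cat([sent_vec, head, tail], dim=-1))
```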

Entity and relation extraction is a necessary step in structuring medical text. However, the feature extraction ability of the bidirectional long short-term memory networks used in existing models is limited. At the same time, pre-trained language models have achieved excellent results in more and more natural language processing tasks. In this paper, we present a focused attention model for the joint entity and relation extraction task. Our model integrates the well-known BERT language model into joint learning through a dynamic range attention mechanism, thus improving the feature representation ability of the shared parameter layer. Experimental results on coronary angiography texts collected from Shuguang Hospital show that the F1-scores of the named entity recognition and relation classification tasks reach 96.89% and 88.51%, exceeding state-of-the-art methods by 1.65% and 1.22%, respectively.
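
A minimal sketch of a shared BERT encoder feeding both an entity-tagging head and a relation head, in the spirit of the joint model described above; the dynamic range attention mechanism itself is not reproduced, and the head/tail index interface is an assumption.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class JointExtractor(nn.Module):
    """Illustrative joint model: a shared BERT encoder feeds a token-level NER
    head and a relation head over candidate entity-pair representations."""

    def __init__(self, n_ent_tags, n_rel, model_name="bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)   # shared parameter layer
        hidden = self.encoder.config.hidden_size
        self.ner_head = nn.Linear(hidden, n_ent_tags)
        self.rel_head = nn.Linear(2 * hidden, n_rel)

    def forward(self, input_ids, attention_mask, head_idx, tail_idx):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        ner_logits = self.ner_head(h)                           # per-token entity tags
        batch = torch.arange(h.size(0))
        pair = torch.cat([h[batch, head_idx], h[batch, tail_idx]], dim=-1)
        rel_logits = self.rel_head(pair)                        # per-pair relation label
        return ner_logits, rel_logits
```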

We introduce the Neural State Machine, seeking to bridge the gap between the neural and symbolic views of AI and integrate their complementary strengths for the task of visual reasoning. Given an image, we first predict a probabilistic graph that represents its underlying semantics and serves as a structured world model. Then, we perform sequential reasoning over the graph, iteratively traversing its nodes to answer a given question or draw a new inference. In contrast to most neural architectures that are designed to closely interact with the raw sensory data, our model operates instead in an abstract latent space, by transforming both the visual and linguistic modalities into semantic concept-based representations, thereby achieving enhanced transparency and modularity. We evaluate our model on VQA-CP and GQA, two recent VQA datasets that involve compositionality, multi-step inference and diverse reasoning skills, achieving state-of-the-art results in both cases. We provide further experiments that illustrate the model's strong generalization capacity across multiple dimensions, including novel compositions of concepts, changes in the answer distribution, and unseen linguistic structures, demonstrating the qualities and efficacy of our approach.
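
To give a flavour of sequential reasoning over a probabilistic graph, the sketch below shifts a probability distribution over nodes along instruction-gated edges at each reasoning step. It is a simplified stand-in under assumed interfaces, not the actual Neural State Machine.

```python
import torch
import torch.nn as nn

class SoftGraphTraversal(nn.Module):
    """Illustrative soft traversal over a probabilistic scene graph: a probability
    distribution over nodes is redistributed along edge weights at every reasoning
    step, guided by a per-step instruction vector derived from the question."""

    def __init__(self, dim=128):
        super().__init__()
        self.edge_gate = nn.Bilinear(dim, dim, 1)   # scores edges against the instruction

    def forward(self, node_probs, edge_feats, adjacency, instruction):
        # node_probs: (N,), edge_feats: (N, N, dim), adjacency: (N, N) 0/1 mask,
        # instruction: (dim,) vector for the current reasoning step.
        n = node_probs.size(0)
        instr = instruction.expand(n * n, -1)
        scores = self.edge_gate(edge_feats.reshape(n * n, -1), instr).view(n, n)
        weights = torch.softmax(scores.masked_fill(adjacency == 0, -1e9), dim=-1)
        return node_probs @ weights        # attention shifted to neighbouring nodes
```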

In relation extraction for knowledge-based question answering, searching from one entity to another entity via a single relation is called "one hop". In related work, an exhaustive search over all one-hop relations, two-hop relations, and so on up to the max-hop relations in the knowledge graph is necessary but expensive. Therefore, the number of hops is generally restricted to two or three. In this paper, we propose UHop, an unrestricted-hop framework that relaxes this restriction by using a transition-based search framework in place of the relation-chain-based one. We conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions, including datasets such as WebQSP, PathQuestion, and Grid World. Results show that the proposed framework provides the ability to halt, works well with state-of-the-art models, achieves competitive performance without exhaustive searches, and opens the performance gap for long relation paths.
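
A schematic of the transition-based search the abstract contrasts with exhaustive relation-chain search: at every hop a learned scorer picks the next relation and a learned termination decision chooses whether to halt. The `kg` interface and both scoring functions are hypothetical placeholders, not UHop's actual components.

```python
def uhop_style_search(question, start_entity, score_relation, should_halt, kg,
                      max_steps=10):
    """Illustrative transition-based relation search: greedily extend the relation
    path one hop at a time, halting when the learned termination model says so."""
    path, entity = [], start_entity
    for _ in range(max_steps):                       # safety bound only
        candidates = kg.outgoing_relations(entity)   # hypothetical KG interface
        if not candidates:
            break
        relation = max(candidates, key=lambda r: score_relation(question, path, r))
        path.append(relation)
        entity = kg.follow(entity, relation)         # hypothetical KG interface
        if should_halt(question, path):              # learned termination decision
            break
    return path, entity
```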

We report an evaluation of the effectiveness of existing knowledge base embedding models for relation prediction and for relation extraction on a wide range of benchmarks. We also describe a new benchmark, much larger and more complex than previous ones, which we introduce to help validate the effectiveness of both tasks. The results demonstrate that knowledge base embedding models are generally effective for relation prediction but unable to improve the state-of-the-art neural relation extraction model with the existing strategies, while pointing to limitations of existing methods.

We address the problem of learning vector representations for entities and relations in Knowledge Graphs (KGs) for Knowledge Base Completion (KBC). This problem has received significant attention in the past few years, and multiple methods have been proposed. Most of the existing methods in the literature use a predefined characteristic scoring function for evaluating the correctness of KG triples. These scoring functions distinguish correct triples (high score) from incorrect ones (low score). However, their performance varies across different datasets. In this work, we demonstrate that a simple neural network based score function can consistently achieve near state-of-the-art performance on multiple datasets. We also quantitatively demonstrate biases in standard benchmark datasets, and highlight the need to perform evaluation spanning various datasets.
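
As an example of the kind of simple neural score function the abstract refers to, the sketch below scores a (head, relation, tail) triple with a small MLP over concatenated embeddings, in the spirit of ER-MLP; the dimensions and architecture are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class MLPScorer(nn.Module):
    """Simple neural score function for KG triples: concatenate head, relation and
    tail embeddings and score them with an MLP. Higher scores should indicate
    more plausible triples."""

    def __init__(self, n_entities, n_relations, dim=200, hidden=200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.mlp = nn.Sequential(nn.Linear(3 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, h, r, t):
        # h, r, t: LongTensors of entity/relation ids, one triple per row.
        x = torch.cat([self.ent(h), self.rel(r), self.ent(t)], dim=-1)
        return self.mlp(x).squeeze(-1)    # one plausibility score per triple
```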

Adverse reactions caused by drugs are a potentially dangerous problem that may lead to mortality and morbidity in patients. Adverse Drug Event (ADE) extraction is a significant problem in biomedical research. We model ADE extraction as a question-answering problem and take inspiration from the Machine Reading Comprehension (MRC) literature to design our model. Our objective in designing such a model is to exploit the local linguistic context in clinical text and enable intra-sequence interaction, in order to jointly learn to classify drug and disease entities and to extract adverse reactions caused by a given drug. Our model makes use of a self-attention mechanism to facilitate intra-sequence interaction in a text sequence. This enables us to visualize and understand how the network makes use of the local and wider context for classification.
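
A small sketch of the QA-style framing with self-attention for intra-sequence interaction: the drug mention plays the role of the question and the clinical sentence the context, and the attention weights can be inspected for the visualization the abstract mentions. The names, tag set, and single attention layer are illustrative assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class SelfAttentiveTagger(nn.Module):
    """Illustrative QA-style tagger: the drug mention acts as the 'question', the
    clinical sentence as the 'context'; self-attention lets every token attend to
    the whole sequence before adverse-event spans are tagged."""

    def __init__(self, vocab_size, dim=128, n_heads=4, n_tags=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.tagger = nn.Linear(dim, n_tags)      # e.g. O / B-ADE / I-ADE

    def forward(self, question_ids, context_ids):
        # Concatenate question (drug) and context tokens into one sequence.
        x = self.emb(torch.cat([question_ids, context_ids], dim=1))
        h, weights = self.attn(x, x, x)           # weights can be visualised
        return self.tagger(h), weights
```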
