We present an approach to minimally supervised relation extraction that combines the benefits of learned representations and structured learning, and accurately predicts sentence-level relation mentions given only proposition-level supervision from a knowledge base (KB). By explicitly reasoning about missing data during learning, our approach enables large-scale training of 1D convolutional neural networks while mitigating the issue of label noise inherent in distant supervision. Our approach achieves state-of-the-art results on minimally supervised sentential relation extraction, outperforming a number of baselines, including a competitive approach that uses the attention layer of a purely neural model.
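As a concrete illustration (not the authors' implementation), the sketch below shows the general shape of a 1D convolutional sentence encoder for relation classification of the kind the abstract describes. It assumes PyTorch, and the hyperparameters (embedding size, filter count, kernel width, relation inventory) are illustrative placeholders not given in the text; the structured reasoning over missing KB labels is omitted.

```python
# Minimal sketch of a 1D CNN sentence encoder for relation extraction,
# assuming PyTorch. Hyperparameters below are hypothetical.
import torch
import torch.nn as nn

class CNNSentenceEncoder(nn.Module):
    def __init__(self, vocab_size, num_relations,
                 emb_dim=100, num_filters=230, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # 1D convolution slides over the token dimension of the sentence.
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)
        self.out = nn.Linear(num_filters, num_relations)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices.
        x = self.embed(token_ids)        # (batch, seq_len, emb_dim)
        x = x.transpose(1, 2)            # Conv1d expects (batch, channels, seq_len)
        x = torch.relu(self.conv(x))     # (batch, num_filters, seq_len)
        x = x.max(dim=2).values          # max-pool over time
        return self.out(x)               # per-sentence relation scores

# Usage: score sentence-level relation mentions; under distant supervision
# these scores would be aggregated per KB proposition during training.
logits = CNNSentenceEncoder(vocab_size=50_000, num_relations=53)(
    torch.randint(1, 50_000, (4, 40)))
print(logits.shape)  # torch.Size([4, 53])
```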