State-of-the-art NLP models can adopt shallow heuristics that limit their generalization capability (McCoy et al., 2019). Such heuristics include lexical overlap with the training set in Named-Entity Recognition (Taillé et al., 2020) and Event or Type heuristics in Relation Extraction (Rosenman et al., 2020). In the more realistic end-to-end RE setting, we can expect yet another heuristic: the mere retention of training relation triples. In this paper, we propose several experiments confirming that retention of known facts is a key factor in performance on standard benchmarks. Furthermore, one experiment suggests that a pipeline model able to use intermediate type representations is less prone to over-reliance on retention.
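To make the retention heuristic concrete, the following is a minimal sketch (not from the paper) of how one could partition test triples by whether they appear verbatim in the training set; scoring a model separately on the "seen" and "unseen" partitions would reveal how much of its benchmark performance could rest on memorization. The triple format and function name are illustrative assumptions.

```python
# Hypothetical sketch: quantifying how much of an end-to-end RE test set
# a model could answer by pure retention of training triples.
# Assumes gold annotations as (head, relation, tail) string triples;
# this data format is an assumption, not the paper's exact setup.

from typing import Iterable, Set, Tuple

Triple = Tuple[str, str, str]  # (head entity, relation type, tail entity)

def split_by_retention(
    train_triples: Iterable[Triple],
    test_triples: Iterable[Triple],
) -> Tuple[Set[Triple], Set[Triple]]:
    """Partition test triples into those seen verbatim during training
    ("retained") and genuinely unseen ones."""
    train_set = set(train_triples)
    test_set = set(test_triples)
    seen = {t for t in test_set if t in train_set}
    unseen = test_set - seen
    return seen, unseen

# Example usage: report the share of test triples that a model could,
# in principle, recover by memorization alone.
train = [("Paris", "capital_of", "France")]
test = [("Paris", "capital_of", "France"), ("Kyiv", "capital_of", "Ukraine")]
seen, unseen = split_by_retention(train, test)
print(f"retained: {len(seen)} / {len(seen) + len(unseen)} test triples")
```

Comparing per-partition F1 scores under such a split is one way to separate extraction ability from retention.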