Models of narrative schema knowledge have proven useful for a range of event-related tasks, but they typically do not capture the temporal relationships between events. We propose a single model that addresses both temporal ordering, sorting given events into the order they occurred, and event infilling, predicting new events which fit into an existing temporally-ordered sequence. We use a BART-based conditional generation model that can capture both temporality and common event co-occurrence, meaning it can be flexibly applied to different tasks in this space. Our model is trained as a denoising autoencoder: we take temporally-ordered event sequences, shuffle them, delete some events, and then attempt to recover the original event sequence. This task teaches the model to make inferences given incomplete knowledge about the events in an underlying scenario. On the temporal ordering task, we show that our model is able to unscramble event sequences from existing datasets without access to explicitly labeled temporal training data, outperforming both a BERT-based pairwise model and a BERT-based pointer network. On event infilling, human evaluation shows that our model is able to generate events that fit better temporally into the input events when compared to GPT-2 story completion models.
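To make the denoising setup concrete, below is a minimal illustrative sketch of the shuffle-and-delete corruption described above: a temporally-ordered event sequence is noised, and the model is trained to reconstruct the original order. The deletion probability, helper name, and example event strings are assumptions for illustration only, not the paper's actual preprocessing code or hyperparameters.

```python
import random

def corrupt_event_sequence(events, delete_prob=0.15, seed=None):
    """Illustrative noising step: drop some events, then shuffle the rest.
    The denoising model takes the corrupted sequence as encoder input and
    is trained to generate the original ordered sequence as the target.
    (delete_prob and this exact scheme are illustrative assumptions.)"""
    rng = random.Random(seed)
    # Randomly delete events (keep at least one so the input is non-empty).
    kept = [e for e in events if rng.random() > delete_prob] or [events[0]]
    # Shuffle the surviving events so temporal order must be recovered.
    rng.shuffle(kept)
    return kept

# Example: corrupted sequence -> encoder input; original sequence -> decoder target.
original = ["wake up", "eat breakfast", "drive to work", "attend meeting"]
print(corrupt_event_sequence(original, seed=0))
```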