Recent years have seen remarkable progress in text generation across different settings, from the common case of generating text from scratch to the emerging retrieve-and-rewrite paradigm. Text infilling, which fills in missing portions of a sentence or paragraph, also has numerous real-world applications, yet remains under-explored. Previous work has focused on restricted settings, either assuming a single word per missing portion or limiting the text to a single missing portion at its end. This paper studies the general task of text infilling, where the input text can contain an arbitrary number of portions to be filled, each of which may require an arbitrary, unknown number of tokens. We study various approaches to the task, including a self-attention model with segment-aware position encoding and bidirectional context modeling. We create extensive supervised data by masking out text with varying strategies. Experiments show that the self-attention model greatly outperforms the others, establishing a strong baseline for future research.
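To make the data-creation step concrete, the following is a minimal sketch of one possible masking strategy: given a token sequence, it masks out several variable-length segments, producing a template with blank markers and the ground-truth answers for each blank. The function name `mask_random_segments` and the `<blank>` marker are illustrative assumptions, not the paper's actual implementation.

```python
import random

def mask_random_segments(tokens, num_blanks=2, max_blank_len=3, seed=0):
    """Mask out up to `num_blanks` non-overlapping segments of `tokens`.

    Returns a template (tokens with masked segments replaced by a single
    "<blank>" marker each) and the list of masked-out answer segments.
    A hypothetical masking strategy, for illustration only.
    """
    rng = random.Random(seed)
    # Pick candidate segment start positions, left to right.
    starts = sorted(rng.sample(range(len(tokens)), num_blanks))
    template, answers = [], []
    i = 0
    for start in starts:
        if start < i:
            continue  # would overlap the previous masked segment; skip
        length = rng.randint(1, max_blank_len)
        template.extend(tokens[i:start])   # keep the observed context
        template.append("<blank>")         # one marker per missing portion
        answers.append(tokens[start:start + length])
        i = start + length
    template.extend(tokens[i:])
    return template, answers

tokens = "the quick brown fox jumps over the lazy dog".split()
template, answers = mask_random_segments(tokens)
```

An infilling model is then trained to predict each answer segment from the template, using the context on both sides of every blank.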