Text compression has diverse applications such as Summarization, Reading Comprehension and Text Editing. However, almost all existing approaches require hand-crafted features, syntactic labels, or parallel data. Even the one approach that tackles this task in an unsupervised setting requires a task-specific autoencoder in its architecture. Moreover, these models generate only one compressed sentence for each source input, so adapting to different style requirements (e.g. length) for the final output usually means retraining the model from scratch. In this work, we propose a fully unsupervised model, Deleter, that discovers an "optimal deletion path" for an arbitrary sentence, where each intermediate sequence along the path is a coherent subsequence of the previous one. The approach relies exclusively on a pretrained bidirectional language model (BERT): it scores each candidate deletion by the average Perplexity of the resulting sentence and performs a progressive greedy lookahead search to select the best deletion at each step. We apply Deleter to the task of extractive Sentence Compression and find that our model is competitive with state-of-the-art supervised models trained on 1.02 million in-domain examples at a similar compression ratio. Qualitative analysis as well as automatic and human evaluations verify that our model produces high-quality compressions.
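To make the scoring-and-deletion loop described above concrete, the following is a minimal sketch in Python, assuming a HuggingFace `bert-base-uncased` masked language model. It uses a standard pseudo-perplexity (mask each token in turn and average the losses) and plain greedy deletion, omitting the lookahead; the function names (`avg_perplexity`, `deletion_path`) are illustrative and not the authors' implementation.

```python
# Hypothetical sketch of BERT-scored greedy deletion; not the authors' code.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def avg_perplexity(words):
    """Pseudo-perplexity: mask each token in turn and average the BERT losses."""
    ids = tokenizer(" ".join(words), return_tensors="pt")["input_ids"][0]
    losses = []
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        losses.append(torch.nn.functional.cross_entropy(logits.unsqueeze(0), ids[i:i + 1]))
    return torch.exp(torch.stack(losses).mean()).item()

def deletion_path(sentence, min_len=3):
    """Greedily delete the word whose removal yields the lowest average perplexity."""
    words = sentence.split()
    path = [words]
    while len(words) > min_len:
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        words = min(candidates, key=avg_perplexity)
        path.append(words)
    return path
```

Each element of the returned list is a subsequence of the previous one, which is the "deletion path" idea; stopping at a chosen length is what allows different compression ratios without retraining.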