Prompt learning methods adapt pre-trained language models to downstream applications by using a task-specific prompt together with the input. Most current work on prompt learning for text generation relies on a single dataset-level prompt shared by all examples. We extend this approach and propose a dynamic method, Control Prefixes, which incorporates conditional, input-dependent information into each prompt. Control Prefixes sits at the intersection of prompt learning and controlled generation, giving the model finer-grained control during text generation. The method injects attribute-level learnable representations into different layers of a pre-trained transformer, allowing the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). We present state-of-the-art results on several data-to-text datasets, including WebNLG.
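To make the mechanism concrete, the sketch below shows one way such attribute-level prefixes could be realized in PyTorch. It is a minimal illustration under stated assumptions, not the authors' released implementation: the class name `ControlPrefixes`, the hyperparameter names, and the flat attribute-label vocabulary are all illustrative, and the sketch assumes a HuggingFace-style interface where extra key/value states can be prepended at every attention layer.

```python
import torch
import torch.nn as nn

class ControlPrefixes(nn.Module):
    """A dataset-level prefix shared by all examples, plus an attribute-level
    "control" prefix selected per input example; both are exposed as extra
    key/value states for every transformer layer (hypothetical sketch)."""

    def __init__(self, n_layers, n_heads, d_head,
                 prefix_len=10, control_len=3, n_attributes=16):
        super().__init__()
        self.n_layers, self.n_heads, self.d_head = n_layers, n_heads, d_head
        embed = n_heads * d_head
        # Dataset-level prefix: (layers, key/value, positions, embed),
        # as in standard prefix-tuning.
        self.task_prefix = nn.Parameter(
            torch.randn(n_layers, 2, prefix_len, embed) * 0.02)
        # One learnable prefix per attribute label (e.g. a WebNLG semantic
        # category); the label index selects it at run time.
        self.control_prefix = nn.Parameter(
            torch.randn(n_attributes, n_layers, 2, control_len, embed) * 0.02)

    def forward(self, attribute_ids):
        # attribute_ids: (batch,) long tensor of per-example label indices.
        batch = attribute_ids.shape[0]
        task = self.task_prefix.unsqueeze(0).expand(batch, -1, -1, -1, -1)
        ctrl = self.control_prefix[attribute_ids]  # (batch, layers, 2, c_len, embed)
        full = torch.cat([task, ctrl], dim=3)      # concatenate along the sequence axis
        past_key_values = []
        for layer in range(self.n_layers):
            k = full[:, layer, 0].reshape(batch, -1, self.n_heads, self.d_head).transpose(1, 2)
            v = full[:, layer, 1].reshape(batch, -1, self.n_heads, self.d_head).transpose(1, 2)
            past_key_values.append((k, v))         # each: (batch, heads, seq, d_head)
        return past_key_values

# Example: two inputs carrying different attribute labels receive different prefixes.
prefixes = ControlPrefixes(n_layers=12, n_heads=12, d_head=64)
past = prefixes(torch.tensor([0, 3]))
```

In a setup like this, the returned tensors would be fed to the frozen base model (e.g. through HuggingFace's `past_key_values` argument) so that only the prefix parameters are trained, which is the usual prefix-tuning recipe the abstract builds on.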