The impressive performance of GPT-3 with natural-language prompts and in-context learning has inspired work on better fine-tuning of moderately sized models under this paradigm. Following this line of work, we present a contrastive learning framework that clusters inputs from the same class to improve the generalization of models trained with only limited examples. Specifically, we propose a supervised contrastive framework that clusters inputs from the same class under different augmented "views" and repels those from different classes. We create different "views" of an example by appending it with different language prompts and contextual demonstrations. Combining a contrastive loss with the standard masked language modeling (MLM) loss of prompt-based few-shot learners, our experiments show that our method improves over state-of-the-art methods on a diverse set of 15 language tasks. Our framework makes minimal assumptions about the task or the base model and can be applied to many recent methods with little modification. The code will be made available at: https://github.com/yiren-jian/LM-SupCon.
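To make the idea concrete, below is a minimal sketch of the objective described above: two prompted "views" of each input (e.g. the same sentence rendered with different templates and demonstrations) are embedded, a supervised contrastive loss pulls together views sharing a label and pushes apart views with different labels, and this loss is added to the MLM classification loss. The function names, the temperature `tau`, and the weighting `lambda_con` are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                tau: float = 0.1) -> torch.Tensor:
    """SupCon-style loss over a batch of feature vectors.

    features: (N, d) embeddings, e.g. [MASK]-token hidden states from the
              prompted "views" of each example, stacked along dim 0.
    labels:   (N,) class labels; examples sharing a label are positives.
    """
    features = F.normalize(features, dim=-1)
    sim = features @ features.T / tau                      # (N, N) similarities
    n = features.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=features.device)

    # Positives: same label, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask

    # Log-softmax over all other examples (the anchor is excluded
    # from the denominator by masking its similarity to -inf).
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # Average log-probability of positives per anchor; skip anchors
    # that have no positive in the batch.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss_per_anchor = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss_per_anchor[pos_mask.any(dim=1)].mean()


def training_step(view1_feats, view2_feats, labels, mlm_loss, lambda_con=1.0):
    """Hypothetical combined objective: MLM loss plus weighted contrastive loss."""
    feats = torch.cat([view1_feats, view2_feats], dim=0)
    labs = torch.cat([labels, labels], dim=0)
    return mlm_loss + lambda_con * supervised_contrastive_loss(feats, labs)
```

In this sketch, `view1_feats` and `view2_feats` would be produced by running the same masked language model over the two differently prompted versions of each input, so the contrastive term only adds a second forward pass and a lightweight loss on top of the existing prompt-based fine-tuning pipeline.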