While different language models are ubiquitous in NLP, it is difficult to contrast their outputs and identify which contexts one model handles better than the other. To address this question, we introduce LMdiff, a tool that visually compares the probability distributions of two models that differ, e.g., through finetuning, distillation, or simply training with different parameter sizes. LMdiff allows the generation of hypotheses about model behavior by investigating text instances token by token, and further assists in choosing these interesting text instances by identifying the most interesting phrases from large corpora. We showcase the applicability of LMdiff for hypothesis generation across multiple case studies. A demo is available at http://lmdiff.net .
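The token-by-token comparison at the heart of this kind of tool can be sketched as follows. This is a minimal illustration, not LMdiff's actual implementation: it assumes each model's per-token probabilities for the same text are already available as lists, and uses the log-probability difference as one possible divergence metric.

```python
import math

def per_token_diff(probs_a, probs_b):
    """Log-probability difference per token between two models.

    probs_a / probs_b: hypothetical per-token probabilities each model
    assigns to the observed tokens of the same text. Large absolute
    values flag tokens the two models treat very differently.
    """
    return [math.log(pa) - math.log(pb) for pa, pb in zip(probs_a, probs_b)]

def most_divergent(tokens, probs_a, probs_b, k=3):
    """Rank the tokens of a text by how strongly the two models disagree."""
    diffs = per_token_diff(probs_a, probs_b)
    ranked = sorted(zip(tokens, diffs), key=lambda td: abs(td[1]), reverse=True)
    return ranked[:k]
```

Applied over a corpus, the same ranking idea could surface the most interesting phrases, i.e., those where the two models' distributions diverge most.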